Query Performance and reading an Explain Plan

Hi,
Below I have posted a query that is running slowly for me, upwards of 10 minutes, which I would not expect. I have also supplied the explain plan. I'm fairly new to explain plans and not sure what danger signs I should be looking out for.
I have added indexes to these tables, many of which are used in the joins, so I expected this to be quicker.
Any help or pointers in the right direction would be very much appreciated -
SELECT a.lot_id, a.route, a.route_rev
FROM wlos_owner.tbl_current_lot_status_dim a, wlos_owner.tbl_last_seq_num b, wlos_owner.tbl_hist_metrics_at_op_lkp c
WHERE a.fw_ver = '2'
AND a.route = b.route
AND a.route_rev = b.route_rev
AND a.fw_ver = b.fw_ver
AND a.route = c.route
AND a.route_rev = c.route_rev
AND a.fw_ver = c.fw_ver
AND a.prod = c.prod
AND a.lot_type = c.lot_type
AND c.step_seq_num >= a.step_seq_num
Plan hash value: 2447083104

| Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT    |                            |   333 | 33633 |  1347   (8)| 00:00:17 |
|*  1 |  HASH JOIN          |                            |   333 | 33633 |  1347   (8)| 00:00:17 |
|*  2 |   HASH JOIN         |                            |   561 | 46002 |  1333   (7)| 00:00:17 |
|*  3 |    TABLE ACCESS FULL| TBL_CURRENT_LOT_STATUS_DIM | 11782 |   517K|   203   (5)| 00:00:03 |
|*  4 |    TABLE ACCESS FULL| TBL_HIST_METRICS_AT_OP_LKP |   178K|  6455K|  1120   (7)| 00:00:14 |
|*  5 |   TABLE ACCESS FULL | TBL_LAST_SEQ_NUM           |  8301 |   154K|    13  (16)| 00:00:01 |

Predicate Information (identified by operation id):

   1 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND
              "A"."FW_VER"=TO_NUMBER("B"."FW_VER"))
   2 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV" AND
              "A"."FW_VER"="C"."FW_VER" AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE")
       filter("C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
   3 - filter("A"."FW_VER"=2)
   4 - filter("C"."FW_VER"=2)
   5 - filter(TO_NUMBER("B"."FW_VER")=2)

24 rows selected.

Guys thank you for your help.
I changed the type of the offending column and the plan looks a lot better, and the results come back a lot quicker.
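For reference, the change was along these lines. This is only a sketch: in my case tbl_last_seq_num.fw_ver was a VARCHAR2 holding numeric strings (hence the TO_NUMBER("B"."FW_VER") in the predicate section above), and since Oracle only allows MODIFY to NUMBER on an empty column, it takes an add-copy-swap:
-- add a NUMBER column, copy the data across, then swap the names
ALTER TABLE wlos_owner.tbl_last_seq_num ADD (fw_ver_num NUMBER);
UPDATE wlos_owner.tbl_last_seq_num SET fw_ver_num = TO_NUMBER(fw_ver);
ALTER TABLE wlos_owner.tbl_last_seq_num DROP COLUMN fw_ver;
ALTER TABLE wlos_owner.tbl_last_seq_num RENAME COLUMN fw_ver_num TO fw_ver;
With that done, the implicit conversion disappears from the predicate section and the optimizer can consider the indexes on the join columns.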
However, I have since extended my SELECT quite substantially and have a new explain plan.
There are two sections in particular that have a high cost, and I was wondering if you see anything inherently wrong, or can explain more fully what the PLAN_TABLE_OUTPUT descriptions are telling me - in particular:
INDEX FULL SCAN
Plan hash value: 3665357134

| Id  | Operation                        | Name                           | Rows | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                 |                                |    4 |   316 |    52   (2)| 00:00:01 |
|*  1 |  VIEW                            |                                |    4 |   316 |    52   (2)| 00:00:01 |
|   2 |   WINDOW SORT                    |                                |    4 |   600 |    52   (2)| 00:00:01 |
|*  3 |    TABLE ACCESS BY INDEX ROWID   | TBL_HIST_METRICS_AT_OP_LKP     |    1 |    71 |     1   (0)| 00:00:01 |
|   4 |     NESTED LOOPS                 |                                |    4 |   600 |    51   (0)| 00:00:01 |
|   5 |      NESTED LOOPS                |                                |   75 |  5925 |    32   (0)| 00:00:01 |
|*  6 |       INDEX FULL SCAN            | UNIQUE_LAST_SEQ                |   89 |  2492 |    10   (0)| 00:00:01 |
|   7 |       TABLE ACCESS BY INDEX ROWID| TBL_CURRENT_LOT_STATUS_DIM     |    1 |    51 |     1   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN          | TBL_CUR_LOT_STATUS_DIM_IDX1    |    1 |       |     1   (0)| 00:00:01 |
|*  9 |      INDEX RANGE SCAN            | TBL_HIST_METRIC_AT_OP_LKP_IDX1 |   29 |       |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):

   1 - filter("SEQ"=1)
   3 - filter("C"."FW_VER"=2 AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE" AND
              "C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
   6 - access("B"."FW_VER"=2)
       filter("B"."FW_VER"=2)
   8 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND "A"."FW_VER"=2)
   9 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV")

Similar Messages

  • Efficient way to read through a big explain plan and generate HTML explain plans for SQL IDs executed in the past

    Hi,
    I have a SQL statement that has recently been having performance problems in production. I have generated an explain plan for it to try to find out what it is doing, but the plan itself is close to 1,000 lines. I want to check if there is an efficient way to go through a big plan like this one and quickly find the damaging areas.
    2) I also wanted to know if there is a way to generate explain plans in HTML format for statements executed in the past that have an entry in dba_hist_sqltext.
    3) I also have two SQL Monitor reports that I want to compare. Is there an efficient way to do that as well?
    Please share your thoughts!
    Thanks in advance!
    Regards,
    Suman-

    Hi,
    I suggest you try running the SQL Access Advisor on the query; maybe something fruitful comes up:
    http://www.oracle-base.com/articles/11g/sql-access-advisor-11gr1.php
    I am not sure about printing the explain plan in HTML format, but
    you may also want to try the sqlhistory.sql script from the page below:
    http://evdbt.com/scripts/
    I have used it many times to check on executions and explain plans that may have changed over a period.
    I have seen it many times: the query picks up a bad explain plan and performs poorly.
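    If you are licensed for the Diagnostics Pack, you can also pull the historical plans for a given SQL ID straight out of AWR; a minimal sketch (the sql_id value is a placeholder):
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&your_sql_id'));
    And for the SQL Monitor reports, DBMS_SQLTUNE.REPORT_SQL_MONITOR can emit HTML, which is easier to read and compare than the text version:
    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&your_sql_id', type => 'HTML') FROM dual;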

  • Query tunning in Oracle using Explain Plan

    Adding to my question below: I have now modified the query, and the plan shown by 'Explain Plan' has fewer steps. The 'Time' column of the plan table is also showing a much smaller value. However, some people are suggesting that I consider the time the query takes to execute in Toad. Is that practical? Please help!
    Hi, I am using Oracle 11g. I need to optimize a SELECT query (minimize the execution time), and I need to know how 'Explain Plan' would help me. I know how to use the Explain Plan command, and I look at the Plan_table to see the details of the plan. Please guide me on which columns of the Plan_table should be considered while modifying the query for optimization. Some people say the 'Time' column should be considered, some say 'Bytes', etc. Some suggest minimizing the full table scans, while others say I should minimize the total number of operations (fewer rows displayed in Plan_table). According to an experienced friend of mine, full table scans should be reduced (e.g. if there are 5 full table scans in the plan, try to reduce them to fewer than 5). However, when I look at a full table scan operation in the Plan_table, it shows a 'Time' value of only 1, which is very low. Does this mean the full scan is actually taking very little time? If so, full table scans are very fast in my case and there is no need to work on them. Some articles suggest that the plan shown by the 'Explain Plan' command is not necessarily the one followed when executing the query. So what should I look for then? How should I optimize the query, and how will I know that it is optimized? Please help!

    885901 wrote:
    How should I optimize the query and how will I come to know that it's optimized?
    How fast is fast enough?
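    To the original question of what to look for: the plan that was actually used, together with estimated vs. actual row counts, tells you far more than Cost, Time, or Bytes alone. A minimal sketch (the emp table and predicate are stand-ins; run your real query once, then display its cursor from the same session):
    SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM emp WHERE deptno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    Steps where E-Rows and A-Rows diverge by orders of magnitude are the ones worth investigating, regardless of whether they are full scans.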

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What query performance issues do we need to take care of? Please explain and let me know the relevant transaction codes. This is urgent.
    What data-loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is being planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning on implementing the BIA accelerator and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned during the day in that cube)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, basic (standard) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • Query performance and Transport

    Guys (those of you already working in the field may be able to better answer this question):
    Let's say an end user comes and complains about a certain query's performance. The developer ends up creating aggregates on the cube. Would he do all of this on the dev box and then transport to QA and on to the production environment, or directly in the production environment with nothing to transport? Any documentation on this would be helpful.
    Thanks,
    RG

    Aggregates are created directly in production.
    We create aggregates based on query statistics, which are based on the data read by the query and the various runtimes.
    Since this is based on data in prod, we develop directly in prod. Dev won't have any data, so you have no basis for creating the aggregate. Even if you use statistics from prod to create the aggregate in dev, you cannot verify the performance improvement from the aggregate there, because dev won't have the data prod has.

  • Help reading an explain plan

    Here is an excerpt from an explain plan.
    What does index$_join$_002 mean?
    | Id  | Operation                         | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                  |                             |     1 |    74 |   133K  (1)| 00:31:04 |
    |*  1 |  FILTER                           |                             |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI            |                             | 26855 |  1940K| 10561   (1)| 00:02:28 |
    |*  3 |    VIEW                           | index$_join$_002            | 66364 |   907K|   851   (2)| 00:00:12 |

    I am having trouble forcing a good plan. I have two databases with comparable data sets, and the indexes are the same. One has a good plan and one a bad plan. When I analyze the tables in the database with the good plan, it changes to the bad plan.
    Definition of good plan: returns in 3 seconds.
    Definition of bad plan: returns in 6 minutes.
    Note that as soon as I analyzed the tables, I started getting the bad plan. I am at a loss on how to force the plan to change.
    BAD PLAN
    | Id  | Operation                         | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                  |                             |     1 |    70 | 62008   (1)| 00:14:29 |
    |*  1 |  FILTER                           |                             |       |       |            |          |
    |*  2 |   HASH JOIN SEMI                  |                             | 29528 |  2018K|  1501   (1)| 00:00:22 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID    | EXT_T                       | 29528 |  1614K|  1344   (0)| 00:00:19 |
    |   4 |     DOMAIN INDEX                  | EXT_TEXT_IND                      |       |       | 13441   (0)| 00:03:09 |
    |   5 |    MAT_VIEW ACCESS BY INDEX ROWID | KM_MV                    |   107K|  1464K|   155   (0)| 00:00:03 |
    |*  6 |     INDEX RANGE SCAN              | KAP_IND                            |   107K|         |   136   (1)| 00:00:02 |
    GOOD PLAN (I don't get this one when I analyze my tables):
    | Id  | Operation                         | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                  |                             |     1 |    74 |   133K  (1)| 00:31:04 |
    |*  1 |  FILTER                           |                             |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI            |                             | 26855 |  1940K| 10561   (1)| 00:02:28 |
    |*  3 |    VIEW                           | index$_join$_002            | 66364 |   907K|   851   (2)| 00:00:12 |
    |*  4 |     HASH JOIN                     |                             |       |       |            |          |
    |*  5 |      INDEX RANGE SCAN             | KAP_IND | 66364 |   907K|   105   (1)| 00:00:02 |
    |   6 |      INDEX FAST FULL SCAN         | KAP_PK                      | 66364 |   907K|   742   (1)| 00:00:11 |
    |*  7 |    TABLE ACCESS BY INDEX ROWID    | EXT_T      | 26855 |  1573K|  9709   (0)| 00:02:16 |
    |   8 |     DOMAIN INDEX                  | EXT_TEXT_IND     |       |       |  9709   (0)| 00:02:16 |
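    Not suggested in this thread, but one common way on 11g to pin a known-good plan is to load it from the cursor cache into a SQL plan baseline while the good plan is still cached; a sketch (sql_id and plan_hash_value are placeholders):
    DECLARE
      n PLS_INTEGER;
    BEGIN
      n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
             sql_id          => '&good_sql_id',
             plan_hash_value => &good_plan_hash);
    END;
    /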

  • Query Performance and Cache

    Hi,
    I am trying to analyze Query Performance in ST03N.
    I run a Query with Property = Cache Inactive.
    I get Runtime as say 180 sec.
    When I run the same query again, the runtime decreases. Now it is, say, 150 sec.
    Please explain the reason for the lower runtime when no cache is used.

    Could be two things occurring. When you say the cache is inactive, that probably refers to the OLAP global cache. The user session still has a local cache, so if the user executes the same navigation in the same session, results are retrieved from the local cache.
    Also, as the others have mentioned, if the same DB query really is executed on the DB a second time, not only does the SQL statement parsing/optimization not need to occur again, but almost certainly some or all of the data still remains in the DB's buffer cache, so it only needs to be retrieved from memory rather than physically read from disk again.
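    If the underlying database is Oracle, you can watch both effects in v$sql; a sketch (the LIKE filter is a placeholder for your statement text):
    SELECT sql_id, executions, parse_calls, buffer_gets, disk_reads
    FROM   v$sql
    WHERE  sql_text LIKE '%your statement text%';
    On the second run, executions increases while disk_reads barely moves: the blocks are being served from the buffer cache rather than read from disk.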

  • Help me in reading/understanding Explain Plan

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    SELECT SRC2.PGM_MSTR_NBR, SRC2.PGM_TRK_NBR, SRC2.CNTL_LOCN, SRC2.PGM_NAME,
           SRC2.PGM_STS, SRC2.SIS_PGM_START_DATE, SRC2.SIS_PGM_END_DATE,
           SRC2.AWARD_TRGT, SRC2.AWARD_MAX, SRC2.COMPLNC_TYPE,
           SRC2.CMPNY_VNDR_LOCN, SRC2.CMPNY_VNDR_NBR, LKP3.ADDR_NAME,
           SRC2.INV_IND, SRC2.LAST_INV_THRU_DATE, SRC2.INV_RPT_DAY,
           SRC2.INV_RPT_CYCLE, SRC2.BEG_COLL_DATE, SRC2.END_COLL_DATE,
           SRC2.SLS_CONT_DIST, SRC2.SLS_CONT_NBR, SRC2.ORD_INV_START_DATE,
           SRC2.ORD_INV_END_DATE, SRC2.CMPLNC_RPT_IND, SRC2.PROD_ENTRY_LVL,
           SRC2.DIST_ENTRY_LVL, SRC2.CUST_ENTRY_LVL, SRC2.VNDR_ENTRY_LVL,
           SRC2.VNDR_CONT, SRC2.SIS_CMPNY_VNDR_NBR
    FROM CASADM.SBA_REB_PGM SRC2
    INNER JOIN (SELECT PGM_MSTR_NBR, PGM_TRK_NBR, CNTL_LOCN FROM CASADM.ACCR_SIS_PURCH_DTL
                UNION
                SELECT PGM_MSTR_NBR, PGM_TRK_NBR, CNTL_LOCN FROM CASADM.ACCR_SIS_EXCL_DTL) ACCR2
        ON (SRC2.PGM_MSTR_NBR = ACCR2.PGM_MSTR_NBR
            AND SRC2.PGM_TRK_NBR = ACCR2.PGM_TRK_NBR
            AND SRC2.CNTL_LOCN = ACCR2.CNTL_LOCN)
    LEFT OUTER JOIN CASADM.MT_CMPNY_VNDR LKP3
        ON (LKP3.CMPNY_VNDR_NBR = SRC2.CMPNY_VNDR_NBR);
    Record Count in each table:
    select count(*) from casadm.accr_sis_purch_dtl --375,968
    select count(*) from casadm.accr_sis_excl_dtl --1,988,867
    select count(*) from casadm.sba_reb_pgm --526,133
    select count(*) from casadm.mt_cmpny_vndr --20743
    Execution Plan
    Plan hash value: 1465316812
    | Id  | Operation                    | Name                  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time|
    |   0 | SELECT STATEMENT             |                       |     9 |  1908 | | 26988   (2)| 00:06:18 |
    |   1 |  NESTED LOOPS OUTER          |                       |     9 |  1908 | | 26988   (2)| 00:06:18 |
    |*  2 |   HASH JOIN                  |                       |     9 |  1656 |83M| 26979   (2)| 00:06:18|
    |   3 |    TABLE ACCESS FULL         | SBA_REB_PGM           |   536K|    77M| |  2624   (2)| 00:00:37 |
    |   4 |    VIEW                      |                       |  2364K|    74M| | 16424   (2)| 00:03:50 |
    |   5 |     SORT UNIQUE              |                       |  2364K|    49M|72M| 16424  (86)| 00:03:50|
    |   6 |      UNION-ALL               |                       |       |       ||            |          |
    |   7 |       INDEX FAST FULL SCAN   | ACCR_SIS_PURCH_DTL_PK |   375K|  8077K||   871   (1)| 00:00:13 |
    |   8 |       TABLE ACCESS FULL      | ACCR_SIS_EXCL_DTL     |  1988K|    41M||  5634   (1)| 00:01:19 |
    |   9 |   TABLE ACCESS BY INDEX ROWID| MT_CMPNY_VNDR         |     1 |    28 | |     1   (0)| 00:00:01 |
    |* 10 |    INDEX UNIQUE SCAN         | MT_CMPNY_VNDR_PK      |     1 |       | |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SRC2"."PGM_MSTR_NBR"="ACCR2"."PGM_MSTR_NBR" AND
                  "SRC2"."PGM_TRK_NBR"="ACCR2"."PGM_TRK_NBR" AND "SRC2"."CNTL_LOCN"=
    "ACCR2"."CNTL_LOCN")
      10 - access("LKP3"."CMPNY_VNDR_NBR"(+)="SRC2"."CMPNY_VNDR_NBR")
    Statistics
            102  recursive calls
              0  db block gets
          34558  consistent gets
          23241  physical reads
             88  redo size
         531258  bytes sent via SQL*Net to client
           2760  bytes received via SQL*Net from client
            361  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
           5392  rows processed

    With respect to the HASH JOIN: it has two operands, a TABLE ACCESS FULL and a VIEW. Oracle joins these results together based on the following access predicate:
    2 - access("SRC2"."PGM_MSTR_NBR"="ACCR2"."PGM_MSTR_NBR" AND
               "SRC2"."PGM_TRK_NBR"="ACCR2"."PGM_TRK_NBR" AND "SRC2"."CNTL_LOCN"="ACCR2"."CNTL_LOCN")
    The VIEW is the result of the union between CASADM.ACCR_SIS_PURCH_DTL and CASADM.ACCR_SIS_EXCL_DTL, where instead of accessing the table CASADM.ACCR_SIS_PURCH_DTL Oracle was able to acquire all the data from the index ACCR_SIS_PURCH_DTL_PK.
    The TABLE ACCESS FULL is exactly what it says: a full scan of SBA_REB_PGM.
    The NESTED LOOPS OUTER has two operands, like the HASH JOIN. The way this works is that for each row returned by the HASH JOIN operation, it accesses the index MT_CMPNY_VNDR_PK and then the table MT_CMPNY_VNDR, based on the following access predicate:
    10 - access("LKP3"."CMPNY_VNDR_NBR"(+)="SRC2"."CMPNY_VNDR_NBR")
    It'd probably be worthwhile for you to review the paper "Interpreting Execution Plans" by Christian Antognini as well.

  • Query Performance and Result Set Query/Sub-Query

    Hi,
    I have an InfoSet with as many as 15 joins with different ODS and master data. The ODS are quite huge (20 million and 160 million records). It's taking a lot of time even for a few days of data, and I need to get at least 3 months of data in a reasonable amount of time.
    Could anyone please tell me whether a sub-query or result set query has to be written against the same InfoProvider (cube, InfoSet, etc.), or whether they can be used in queries on different InfoProviders?
    Please suggest...Thanks in Advance.
    Regards
    Anil

    Hi Bhanu,
    I needed some help defining the precalculated value set, as I wasn't successful.
    Please suggest answers if possible for the questions below:
    1) Can I use filter criteria to restrict the value set for the characteristic when I define a query on an ODS? When I tried this, it gave me errors as below:
    "System error in program CL_RSR_REQUEST and form EXECUTE_VTABLE:COB_PRO_GET_ALWAYS... Error when filling value set DELIVERY...
    Job cancelled after system exception ERROR_MESSAGE"
    2) Can I create a predefined value set on an InfoSet? I was not successful, as InfoSets have names such as "InfosetName_F000232" and I cannot find the characteristic in the Parameter tab for the value set in the Reporting Agent.
    3) How does the precalculated value set variable help if I am not using any filter criteria, and it stores the list of all values for the variable from the ODS, which consists of 20 million records?
    Thanks for your help.
    Kind Regards
    Anil

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application is using Tomcat as web container and jboss as application server. We are only using prepared statements. So if I talk about statements I always mean prepared statements. Also our application is setting the defaultNChar property to true because every char and varchar field has been created as an nchar and nvarchar.
    We have some JSP pages that contain lists with search forms. Every time I enter a value in the form that returns a filled result set, the lists perform great. But every time I enter a value that returns an empty result set, the lists are 100 times slower. The JSP pages run in the Tomcat environment and submit their statements directly to the database. The connections are pooled by DBCP. So what can cause this behaviour?
    To analyze this problem I started logging all statements and the filled-in search field values and combinations executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database, and generates an explain plan for every statement. But now a strange situation appears: every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some JSP pages within our application to force an explain plan from there (the Tomcat environment). When I execute the same statement, I get two completely different explain plans from exactly the same code.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The sizes of the tables are: LINVIN - 383,692 rows, LSUPPLIER - 115,782 rows.
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is the hash join much slower than the nested loop? If I'm right, a nested loop should only be used when the tables are pretty small.
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciate it if somebody could help me with this issue, because I'm getting more and more distressed...
    Thanks in advance,
    Tobias

    tobiwan wrote:
    Hi again,
    Here is the answer:
    The problem, because of which I got two different explain plans, was that the external tool uses the NLS session parameters coming from the OS, which in my case are "de/DE". Within our application these parameters are changed to "en/US"! So if I call the Java function Locale.setDefault(new Locale("en","US")) in my external tool before connecting to the database, the explain plans are finally equal.
    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan had to run a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which was probably set to 'GERMAN' in your "external tool" case. You can check the NLS_SESSION_PARAMETERS view to see your current NLS_SORT setting.
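    A quick way to check (the output will vary by environment):
    SELECT parameter, value
    FROM   nls_session_parameters
    WHERE  parameter IN ('NLS_SORT', 'NLS_COMP');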
    For more information regarding this issue, see my blog note I've written about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
    Now let me make a guess as to why it takes so long when your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but it can take quite a while until all rows are processed, since it potentially requires many iterations of the loop until everything has been processed. Your front end probably displays only the first n rows of the result set by default, and therefore works fine with this execution plan.
    Now if the result set is empty, depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data has been found, and since your application attempts to fetch the first n records, but no records will be found, it has to wait until all data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g, therefore you might be interested that due to the "bind variable peeking" functionality you might potentially end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How I can obtain an explain plan without running the query

    Hello,
    I need to know the result of EXPLAIN PLAN for a query without running the query. Is this possible?
    Thanks and best regards.

    explain plan for <your query>;
    select * from table(dbms_xplan.display);
    Regards,
    Rob.
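    For instance (emp and deptno are just stand-ins for your own table and predicate):
    explain plan for
      select * from emp where deptno = 10;
    select * from table(dbms_xplan.display);
    The statement is parsed and optimized but never executed, so this is safe even for expensive queries.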

  • Query returns different Explain Plan

    Hi,
    I have two databases that are identical to each other, but I have a query that runs on both and returns a different explain plan in each. It runs slower in one than the other. How can I troubleshoot this further to get faster times in the slower database?
    THanks,
    Prachi

    Query:
    =====
    SELECT sum(total_contacts), sum(email_partner_news), sum(email_tech_news),
           sum(email_enduser_news), sum(email), sum(email_gmo), sum(gmo_nophone),
           sum(email_gmo_noaddress), sum(email_gmo_address), sum(email_gmo_nocertaddr),
           sum(email_gmo_certaddress), sum(email_gmo_nophone_noaddr), sum(phone_number),
           sum(gmo_with_phone), sum(phone_number_email_gmo_noaddr),
           sum(phone_number_email_gmo_nocert), sum(phone_number_address),
           sum(phone_number_address_noemail), sum(phone_address_email_nogmo),
           sum(phone_number_address_email), sum(phone_number_certaddr),
           sum(phone_number_certaddr_noemail), sum(phone_certaddr_email_nogmo),
           sum(phone_number_address_gmo), sum(phone_certaddr_email_gmo), sum(address),
           sum(address_nophone_number_nogmo), sum(address_certified),
           sum(certaddr_nophone_nogmo), sum(address_email_nophone_gmo),
           sum(certaddr_email_nophone_gmo), sum(phone_gmo), sum(smail_gmo), sum(fax_gmo),
           sum(email_wcast), sum(email_inews), sum(email_salrt), sum(phone_gmo_phone),
           sum(smail_gmo_address), sum(fax_gmo_fax)
    FROM contact_values_vw_tc_2 cv,
         ((SELECT gcd_contact_id FROM contact_values)
          INTERSECT
          ((SELECT gcd_contact_id
            FROM gcddata.contact_country s, gcd.country c, segmentation_query_values v
            WHERE s.country_id = c.country_id
              AND c.region_id = v.query_value
              AND v.selection_type = 'I'
              AND v.query_id = 2088
              AND v.query_sequence = 1)
           INTERSECT
           (SELECT gcd_contact_id
            FROM gcddata.contact_country s, gcd.country c, segmentation_query_values v
            WHERE s.country_id = c.country_id
              AND c.sub_region_id = v.query_value
              AND v.selection_type = 'I'
              AND v.query_id = 2088
              AND v.query_sequence = 2))) sl
    WHERE cv.gcd_contact_id = sl.gcd_contact_id
      AND cv.country_id IN (SELECT cm.country_id
                            FROM segmentation_user_country_map cm
                            WHERE cm.user_name = 'E30590')
    ========================================
    Execution Plan - FAST
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=32931 Card=1 Bytes=73)
    1 0 SORT (AGGREGATE)
    2 1 HASH JOIN (Cost=32931 Card=386388 Bytes=28206324)
    3 2 VIEW (Cost=29617 Card=1142043 Bytes=14846559)
    4 3 INTERSECTION
    5 4 SORT (UNIQUE)
    6 5 INDEX (FAST FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=5 Card=1907737 Bytes=11446422)
    7 4 INTERSECTION
    8 7 SORT (UNIQUE)
    9 8 HASH JOIN (Cost=655 Card=1998575 Bytes=57958675)
    10 9 HASH JOIN (Cost=6 Card=110 Bytes=2200)
    11 10 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=7 Bytes=91)
    12 11 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_ PK' (UNIQUE) (Cost=2 Card=15)
    13 10 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
    14 9 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=643 Card=2199855 Bytes=19798695)
    15 7 SORT (UNIQUE)
    16 15 HASH JOIN (Cost=655 Card=1142043 Bytes=33119247)
    17 16 HASH JOIN (Cost=6 Card=63 Bytes=1260)
    18 17 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=7 Bytes=91)
    19 18 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=15)
    20 17 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
    21 16 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=643 Card=2199855 Bytes=19798695)
    22 2 HASH JOIN (Cost=2174 Card=645445 Bytes=38726700)
    23 22 SORT (UNIQUE)
    24 23 TABLE ACCESS (FULL) OF 'SEGMENTATION_USER_COUNTRY_MAP' (Cost=5 Card=43 Bytes=473)
    25 22 TABLE ACCESS (FULL) OF 'CONTACT_VALUES1' (Cost=2151 Card=1907737 Bytes=93479113)
    =====================================================
    Execution Plan -SLOW
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=8751 Card=1 Bytes=73 )
    1 0 SORT (AGGREGATE)
    2 1 HASH JOIN (SEMI) (Cost=8751 Card=14477 Bytes=1056821)
    3 2 MERGE JOIN (Cost=8583 Card=89527 Bytes=5550674)
    4 3 TABLE ACCESS (BY INDEX ROWID) OF 'CONTACT_VALUES1' ( Cost=826 Card=1852109 Bytes=90753341)
    5 4 INDEX (FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=26 Card=1852109)
    6 3 SORT (JOIN) (Cost=7757 Card=89527 Bytes=1163851)
    7 6 VIEW (Cost=7454 Card=89527 Bytes=1163851)
    8 7 INTERSECTION
    9 8 SORT (UNIQUE)
    10 9 INDEX (FAST FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=5 Card=1852109 Bytes=11112654)
    11 8 INTERSECTION
    12 11 SORT (UNIQUE)
    13 12 HASH JOIN (Cost=640 Card=250676 Bytes=7269604)
    14 13 HASH JOIN (Cost=6 Card=28 Bytes=560)
    15 14 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=1 Bytes=13)
    16 15 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=2)
    17 14 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost =2 Card=120 Bytes=840)
    18 13 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1 ' (Cost=628 Card=2135370 Bytes=19218330)
    19 11 SORT (UNIQUE)
    20 19 HASH JOIN (Cost=640 Card=89527 Bytes=25962 83)
    21 20 HASH JOIN (Cost=6 Card=10 Bytes=200)
    22 21 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=1 Bytes=13)
    23 22 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=2)
    24 21 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost =2 Card=120 Bytes=840)
    25 20 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1 ' (Cost=628 Card=2135370 Bytes=19218330)
    26 2 TABLE ACCESS (FULL) OF 'SEGMENTATION_USER_COUNTRY_MAP'
    (Cost=5 Card=45 Bytes=495)
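    Since "identical" databases rarely are, a first step is usually to compare the optimizer's inputs on both sides; a sketch (table names taken from the plans above):
    SELECT owner, table_name, num_rows, blocks, last_analyzed
    FROM   all_tables
    WHERE  table_name IN ('CONTACT_VALUES1', 'CONTACT_COUNTRY1', 'COUNTRY',
                          'SEGMENTATION_USER_COUNTRY_MAP', 'SEGMENTATION_QUERY_VALUES');
    Run it on both databases and compare: if num_rows or last_analyzed differ materially, the two optimizers are working from different pictures of the same data, which alone can produce different plans.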

  • Structures Vs RKFs and CKFs In Query performance

    Hi Gurus,
    I am creating a GL query that will return a couple of key figures and some calculations across different GL accounts, and I wanted to know which will be more beneficial: creating restricted key figures and calculated key figures, or just using a structure for all the selections and formula calculations?
    Which option will be better for query performance?
    Thanks in advance

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improve query performance and ensure that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.

  • Regarding Query Performance.

    Please tell me:
    How can I check the performance of a query with and without using an index?

    Hi,
    As Deepak said, run the query with and without the index.
    To get the cost and access path of the query, generate an explain plan for it. Read the Oracle docs:
    [http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/ex_plan.htm]
    It is common to all versions from 9i onwards, as far as I know.
    Do:
    SQL> set autotrace on
    SQL> -- run your query
    You will get the explain plan and cost information at the SQL prompt if statistics_level is set to TYPICAL or ALL.
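    To compare the with-index and without-index cases on the same data, you can hint the full scan instead of dropping the index; a sketch (emp and empno are stand-in names; the FULL hint forces the table scan):
    SQL> set autotrace traceonly explain statistics
    SQL> select * from emp where empno = 7839;
    SQL> select /*+ FULL(emp) */ * from emp where empno = 7839;
    Comparing "consistent gets" and "physical reads" between the two runs shows what the index is actually buying you.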
    Regards,
    Vijayaraghavan K
