Cost of query

Does the cost of a query depend on the number of rows in the table?
Also, does the cost change from one database version to another?
For example, if I check the cost of a query in Oracle 8i and then check the same
query in Oracle 9i, will there be any difference in the cost?

Yogesh,
All pedantry aside, the cost does broadly reflect the resource consumption associated with a given query. Accessing a greater number of rows will usually result in a greater cost. And yes, the cost may change greatly from one version of the database to another. Up through 9i the cost was meant to reflect I/O consumption. As of 10g it reflects both I/O and CPU consumption.
However, you probably can't do anything useful with the cost and it can in fact mask problems. Here's one reasonably common problem. You have a staging table into which you load data before processing it. You gather statistics when the table is empty, and then load several million rows into it. Now the optimizer associates a very low cost with fetching data from the table. It will generate queries with very low costs that involve repeatedly reading the entire contents of the table. These queries will perform miserably.
The solution is to gather statistics on this table after loading it. At this point appropriate plans will be generated that perform well. The cost of these queries will be much larger than the cost of the old queries that performed miserably.
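A minimal sketch of the fix in Oracle syntax (the schema and table names here are illustrative, not from the thread):

```sql
-- After loading the staging table, refresh its statistics so the
-- optimizer sees the true row count before plans are generated.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',       -- hypothetical schema
    tabname => 'STAGING_TABLE',   -- hypothetical table
    cascade => TRUE);             -- gather index statistics as well
END;
/
```

Alternatively, deleting the statistics on a highly volatile staging table forces dynamic sampling at parse time, which can be a reasonable compromise when the table is reloaded many times a day.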
And in all of the above I'm being terribly simplistic. I use the cost as follows: when I see an execution plan with a substantially lower or higher cost for a given step than makes sense compared with the other steps in the plan, then I assume that the optimizer does not have enough information.

Similar Messages

  • Add "cost center" query to a start condition?

    Hi there,
    we got a new requirement for one of our plants.
    We're on SRM 5.0 classic scenario.
    Is it possible to add a "cost center" query to a specific start condition (SWB_PROCUREMENT) of a workflow?
    E.g. if a user uses cost center 4711 for a shopping cart item a specific cost center responsible xyz should approve this item.
    If the user uses another cost center 4712 for a second item in this shopping cart this item should be approved by another cost center responsible abc.
    Is that somehow possible?
    So far I did not find a suitable expression for cost center.
    Thanks in advance for your answers.
    Best regards,
    Henning

    Hi Masa,
    thanks for your answer. Perhaps you also have a hint for the following:
    I can't really find in the mentioned thread or in note 731637 what happens if an SC with several items is partially approved.
    Example:
    SC with 3 items:
    item 1 cc 1000
    item 2 cc 2000
    item 3 cc 1000
    Let's say items 1 and 3 have been approved by the approver found by the BAdI and WS14500015. Is a PO or a purchase requisition created in the backend? Or is it only created after the whole SC has been approved (i.e. item 2 as well)?
    Thanks for a hint and best regards,
    Henning

  • Item Cost in query

    Hi experts,
    how can I find the last item cost via a query?

    The item cost depends on the inventory valuation method.
    You can view the item cost in the Inventory Audit Report (Cost column); if it's not visible in the report, check Form Settings → Cost.
    If you would like a query report, you can use the OINM table (Price field).
    There are two system reports available: the Inventory Audit Report and the Inventory Valuation Report. Using a right mouse click to pop up the menu from Item Master Data, you can view the Inventory Valuation Report for the current year.
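    As a rough sketch, the last recorded cost per item could be pulled from OINM like this (the ItemCode, Price, and TransNum columns are assumptions to verify against your B1 version):

    ```sql
    -- Last inventory transaction cost per item from OINM.
    -- Column names are assumptions; check them in your schema.
    SELECT T0.ItemCode, T0.Price
    FROM   OINM T0
    WHERE  T0.TransNum = (SELECT MAX(T1.TransNum)
                          FROM   OINM T1
                          WHERE  T1.ItemCode = T0.ItemCode)
    ```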

  • Cost center query takes a long time when executed with user's ID

    Hi Experts,
    We have a cost-center query which is taking a long time to display the output with the user's ID.
    I tried running the report with the same selections and was able to get the values within seconds.
    Also, we have maintained aggregates on the cube.
    When the user tries it for a single cost center the performance is OK.
    Any help on this will be highly appreciated.
    Thanks,
    Amit

    Hi,
    While running the query, capture a trace in ST05: before running the query in RSRT, activate the trace for the user ID, and after viewing the report in RSRT, deactivate the trace.
    Go through the logs to find which object is taking a long time, then create aggregates on the cube.
    While creating the aggregates, give fixed values.
    Please see the document "How to find SQL traces in SAP BI".
    Thanks,
    Phani.

  • Query 1 shows less consistent gets but more cost than Query 2..

    Hi ,
    SQL> select dname from scott.dept where deptno not in (select deptno from scott.emp)
    Execution Plan
    Plan hash value: 3547749009
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    1 |    22 | 4 (0)| 00:00:01 |
    |*  1 |  FILTER            |      |       |       |            |          |
    |   2 |   TABLE ACCESS FULL| DEPT |     4 |    88 | 2 (0)| 00:00:01 |
    |*  3 |   TABLE ACCESS FULL| EMP  |    11 |   143 |  2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "SCOTT"."EMP" "EMP"
                  WHERE LNNVL("DEPTNO"<>:B1)))
       3 - filter(LNNVL("DEPTNO"<>:B1))
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
             15 consistent gets
              0  physical reads
              0  redo size
            416  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    SQL> select dname from scott.dept,scott.emp where dept.deptno=emp.deptno(+)
      2    and emp.rowid is null;
    Execution Plan
    Plan hash value: 2146709594
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      |   12 |   564 | 5 (20)| 00:00:01 |
    |*  1 |  FILTER             |      |       |       |            |          |
    |*  2 |   HASH JOIN OUTER   |      |    12 |   564 | 5 (20)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL| DEPT |     4 |    88 | 2 (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL| EMP  |    12 |   300 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("EMP".ROWID IS NULL)
       2 - access("DEPT"."DEPTNO"="EMP"."DEPTNO"(+))
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
              6 consistent gets
              0  physical reads
              0  redo size
            416  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    I have two questions:
    1) Which one is preferable: the first, which is less costly to the system, or the second, which causes fewer consistent gets and so is considered more scalable?
    2) Given that the number of rows returned by both queries is 1, why is there a difference in the Rows estimates in the two plans (1 and 12 respectively)?
    I use Oracle 10g Release 2.
    Thanks a lot,
    Sim

    The fewer logical I/Os, the better.
    So always do it like your query 2 (by the way, your title has it the wrong way around).
    Your example is probably flawed. When I try it in SQL*Plus I get correct results:
    SQL> get t
      1* select dname from dept where deptno not in (select deptno from emp)
    SQL> /
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=3 Bytes=39)
       1    0   FILTER
       2    1     TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2 Card=4 Bytes=52)
       3    1     TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2 Card=1 Bytes=3)
    Statistics
              0  recursive calls
              0  db block gets
             15  consistent gets
              0  physical reads
              0  redo size
            537  bytes sent via SQL*Net to client
            660  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> get tt
      1  select dname from dept,emp where dept.deptno=emp.deptno(+)
      2* and emp.rowid is null
    SQL> /
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=14 Bytes=322)
       1    0   FILTER
       2    1     HASH JOIN (OUTER) (Cost=5 Card=14 Bytes=322)
       3    2       TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2 Card=4 Bytes=52)
       4    2       TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2 Card=14 Bytes=140)
    Statistics
              0  recursive calls
              0  db block gets
              6  consistent gets
              0  physical reads
              0  redo size
            537  bytes sent via SQL*Net to client
            660  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    I'm wondering, for instance, why you have 11 rows for EMP in query 1 (it should be only 1 row) and why you have only 12 rows in query 2 (it should be 14 rows).
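    For what it's worth, when EMP.DEPTNO is guaranteed NOT NULL, a null-safe NOT EXISTS lets the optimizer use a straightforward anti-join instead of the FILTER with the LNNVL probe; a sketch against the standard SCOTT schema:

    ```sql
    -- NOT EXISTS avoids NOT IN's null semantics (the LNNVL filter above)
    -- and is typically transformed into a hash anti-join.
    SELECT d.dname
    FROM   scott.dept d
    WHERE  NOT EXISTS (SELECT 1
                       FROM   scott.emp e
                       WHERE  e.deptno = d.deptno);
    ```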

  • Formula Variable - Key of Cost Element (Query crashing)

    Hi everyone,
    I have created a formula variable which returns the key of the cost element (0COSTELMNT), which resides in a key figure structure. I have used this in a number of formulas (If Cost Element = 50000 * (A * B)) and it's working fine.
    Formula Variable
    Formula Variable Properties
    My issue is that this query crashes constantly when you are in the Query Designer, in both our Dev and Production environments, yet the query runs fine in the Analyser.
    The only thing I note is the icons in the key figure structure: some are squares, which are standard key figures / formulas and formula variables, but my Cost Element formula variable and the Bonus and Salary Increase formula variables are represented by a triangle, which normally means characteristics!
    Any suggestions?
    With thanks
    Gill

    The query runs perfectly; there are no null values in the output.
    I have already regenerated it and there is no change. I do not have an issue with the output, so RSRT is going to give me the same results as I get in the Analyser.
    The issue I have is that whenever I make changes in Query Designer, the program shuts down.

  • Cost Allocation Query

    I want to allocate cost in a top-down approach via SQL, i.e. the user input should be: 1. the expense amount for node 1 (which needs to be allocated
    to the lower nodes), and 2. the percentage of allocation for each node.
    Node 1
    Node 2, Node 3
    Node 4, Node 5, Node 6, Node 7
    If node 1 gets $100, it should get allocated to node 2 and node 3 in a 40-60% split (user input). It should then be allocated further to nodes 4 and 5, and likewise to nodes 6 and 7, based on user-input percentages.

    Can you supply a script  (CREATE TABLE.... INSERT INTOs...) for quick assistance?  Thanks.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
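    Pending the requested DDL, here is a minimal T-SQL sketch of the top-down allocation, assuming hypothetical table and column names (nodes, node, parent, pct):

    ```sql
    -- Each row stores a node, its parent, and its percentage share of the
    -- parent's amount; the root (parent IS NULL) receives the full input.
    DECLARE @input DECIMAL(18,4) = 100;  -- user-entered amount for node 1
    WITH alloc AS (
        SELECT node, @input AS amount
        FROM   nodes
        WHERE  parent IS NULL
        UNION ALL
        SELECT n.node, CAST(a.amount * n.pct / 100 AS DECIMAL(18,4))
        FROM   nodes n
        JOIN   alloc a ON n.parent = a.node
    )
    SELECT node, amount FROM alloc;
    ```

    With pct values of 40 and 60 on nodes 2 and 3, node 2 would receive 40 and node 3 would receive 60 of the 100 entered for node 1, and so on down the tree.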

  • Cost of query, using Xpath

    Hi,
    I have a query:
    SELECT distinct EXTRACTVALUE (VALUE(M), '/MedlineCitation/MedlineID') AS MEDLINE_ID,
    extractValue(value(s),'/QualifierName') AS QUALIFIER,
    extractValue(value(t),'/DescriptorName') AS MESH_HEADING,
    extractValue(value(t),'/DescriptorName/@MajorTopicYN') AS MAJOR
    FROM MEDLINECITATIONS M,
    TABLE ( xmlsequence (
    extract(value(M),
    'MedlineCitation/MeshHeadingList/MeshHeading/DescriptorName'))) t,
    TABLE ( xmlsequence (
    extract(value(M),
    'MedlineCitation/MeshHeadingList/MeshHeading/QualifierName'))) s
    COST ALL ROWS (optimizer: CHOOSE), total cost: 52,222,538,544
    Is there a way to optimize this query? I'm working with hundreds of thousands of rows and it's very slow.

    Maxim
    How many collections (nodes that can occur more than once) are there in the schema?
    You might want to try the following:
    In XMLSPY, under the DTD / Schema menu, select Enable Oracle Schema Extensions and then, under Oracle Schema Options, select storeVarrayAsTable.
    Under the root element add the annotation xdb:defaultTable="YOURTABLENAME".
    Drop and re-register the schema. Let XML DB create 'YOURTABLENAME' for you as part of the registration process.
    Now try the query:
    select table_name from user_nested_tables where table_name = 'YOURTABLENAME';
    This will tell us whether the default approach of storing all collections as nested tables is too general for your schema.
    What we are trying to do here is ensure that the collections we want to query are stored as separate rows in a nested table rather than as a VARRAY of objects stored in a LOB.

  • Cost Based Query

    I have removed the RULE hints and tried the query below in 2 different databases, both the same version (10g).
    In one database the query completed in 4 hours, and in the other it took 3 days.
    Both databases have similar data.
    Where am I going wrong?
    Any feedback?
    Do any optimizer parameters need to be set?
    select /*+ RULE */ 'PO_COMMITMENT' record_type,
    b.org_id,
    NULL invoice_number,
    a.PO_NUMBER,
    a.PO_REVISION,
    a.RELEASE_NUMBER,
    a.CREATION_DATE,
    a.APPROVED_DATE,
    b.NEED_BY_DATE,
    b.PROMISED_DATE,
    a.BUYER_NAME,
    a.VENDOR_NAME,
    a.PO_LINE,
    replace(a.ITEM_DESCRIPTION, chr(10),' ') item_description,
    a.QUANTITY_ORDERED,
    a.AMOUNT_ORDERED,
    a.QUANTITY_CANCELLED,
    a.AMOUNT_CANCELLED,
    a.QUANTITY_DELIVERED,
    a.AMOUNT_DELIVERED,
    a.QUANTITY_INVOICED ,
    a.AMOUNT_INVOICED*nvl(pod.rate,1) AMOUNT_INVOICED,
    a.Amount_outstanding_invoice,
    a.PROJECT_ID,
    a.TASK_ID,
    a.EXPENDITURE_ITEM_DATE,
    a.ACCT_EXCHANGE_RATE,
    a.denom_CURRENCY_CODE,
    a.PO_HEADER_ID,
    a.PO_RELEASE_ID,
    pod.po_line_id REQUISITION_HEADER_ID,
    a.po_line_location_id REQUISITION_LINE_ID ,
    pod.po_distribution_id invoice_id,
    EXPENDITURE_ORGANIZATION ,
    null po_status,
    pod.accrue_on_receipt_flag po_line_status,
    null requisioner_name,
    0 commitment_amt,
    pod.po_header_id xpo_header_id,
    pod.po_distribution_id xpo_distribution_id,
    pod.distribution_num DISTRIBUTION_LINE_NUMBER
    from pa_proj_appr_po_distributions a,
    po_distributions pod,
    po_line_locations b
    where a.PO_LINE_LOCATION_ID = b.LINE_LOCATION_ID
    and a.po_distribution_id = pod.po_distribution_id
    and b.line_location_id = pod.line_location_id
    and b.org_id = :p_org_id
    and a.project_id > 0
    UNION ALL
    SELECT /*+ RULE */ 'REQ_COMMITMENT' ,
    :p_org_id,
    NULL,
    REQ_NUMBER ,
    NULL ,
    NULL,
    CREATION_DATE ,
    to_date(null),
    NEED_BY_DATE ,
    to_date(null),
    null,
    vendor_name,
    REQ_LINE ,
    replace(ITEM_DESCRIPTION, chr(10), ' '),
    QUANTITY ,
    AMOUNT ,
    0,
    0,
    0,
    0,
    0,
    0,
    0,
    PROJECT_ID ,
    TASK_ID ,
    EXPENDITURE_ITEM_DATE ,
    ACCT_EXCHANGE_RATE ,
    denom_CURRENCY_CODE,
    0,
    0,
    REQUISITION_HEADER_ID,
    REQUISITION_LINE_ID ,
    0 invoice_id,
    EXPENDITURE_ORGANIZATION ,
    null po_status,
    null po_line_status,
    REQUESTOR_NAME requisioner_name,
    AMOUNT ,
    0,
    0,
    0 DISTRIBUTION_LINE_NUMBER
    FROM pa_proj_appr_req_distributions
    union all
    select /*+ RULE */ 'INVOICE_COMMITMENT' ,
    :p_org_id,
    b.INVOICE_NUM,
    NULL,
    NULL,
    NULL,
    b.creation_date,
    a.INVOICE_DATE,
    a.GL_DATE ,
    to_date(null),
    NULL,
    a.VENDOR_NAME,
    0,
    replace(a.DESCRIPTION,chr(10),' '),
    0,
    0,
    0,
    0,
    0,
    0,
    a.QUANTITY,
    a.AMOUNT ,
    0,
    a.PROJECT_ID ,
    a.TASK_ID ,
    a.EXPENDITURE_ITEM_DATE ,
    a.ACCT_EXCHANGE_RATE ,
    a.denom_CURRENCY_CODE ,
    0,
    0,
    aid.po_distribution_id,
    aid.invoice_distribution_id,
    a.invoice_id,
    a.EXPENDITURE_ORGANIZATION ,
    null,
    null,
    null,
    a.amount ,
    0,
    0,
    a.DISTRIBUTION_LINE_NUMBER
    FROM
    pa_proj_ap_inv_distributions a,
    ap_invoices b,
    ap_invoice_distributions aid
    where a.invoice_id = b.invoice_id
    and aid.invoice_id = b.invoice_id
    and a.distribution_line_number = aid.distribution_line_number

    Hello,
    It's a duplicate post; please close this one and provide the requested information in your previous post.
    Regards

  • Cost Centre Query

    Hi All,
    What are the standard DSO's and Cubes used in Cost Centre reporting.
    0CO_OM_CCA_1
    0CO_OM_CCA_7
    0CO_OM_CCA_8
    0CO_OM_CCA_9
    0CO_OM_CCA_10
    Can we use
    0CO_OM_WBS_1
    0CO_OM_WBS_6
    0CO_OM_WBS_7
    0CO_OM_WBS_8
    in place of CCA datasources or are the WBS used at project level reporting only.
    Appreciate any help.
    James

    hi,
    There can be scenarios where WBS master data might not be useful to you, e.g. where your project has one WBS and multiple cost centers and you want to see the WBSE actuals posted to the cost center; WBSE master data doesn't show these details.
    Also:
    0CO_OM_WBS_1 gives actuals/commitments at summary level, not at detail level.
    0CO_OM_WBS_6 gives the WBSE:CC relation for actuals.
    0CO_OM_WBS_7 does not give the WBSE:CC relation for commitments.
    Similarly for the CCA DataSources.
    So it will depend on your business requirements.
    You need to decide which fields you need from the DataSources and, if required, do the necessary enhancements in the DS.
    regards
    laksh

  • Cost of Dynamic Query

    Can we write a query with bind variables such that, if the variable is null, all rows are returned, otherwise only the selected rows, while the cost of the query stays the same?
    When I do it, the cost of my query increases because the table is full scanned.
    Can you give me a trick to avoid the full table scan?

    The explain plan is as follows:
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=CHOOSE          1           113                     
    SORT GROUP BY          1      172      113                     
    NESTED LOOPS          1      172      104                     
    NESTED LOOPS          1      152      104                     
    NESTED LOOPS          1      110      100                     
    TABLE ACCESS FULL     APPAREL.SHIPMENTM     15      870      55                     
    TABLE ACCESS BY INDEX ROWID     APPAREL.PACKINGINSTRUCTIOND     274 K     13 M     3                     
    INDEX RANGE SCAN     APPAREL.PACKINGINSTRUCTIOND_P_S_S_I_PK     1           2                     
    TABLE ACCESS BY INDEX ROWID     APPAREL.BOXBARCODE     1      42      4                     
    INDEX RANGE SCAN     APPAREL.BOXBARCODE_PO_STL_SH_ID_IDX     1           3                     
    INDEX UNIQUE SCAN     APPAREL.SHIPMENTD_PK     34      680
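    One common trick for the "null means all rows" pattern is the NVL form, which lets Oracle build a CONCATENATION plan: an indexed branch for the non-null case and a full scan only for the null case. A sketch using the SHIPMENTM table from the plan above (the shipment_id column is hypothetical):

    ```sql
    -- With col = NVL(:bind, col), Oracle can expand the query into two
    -- branches: indexed access when :bind is not null, full scan when null.
    -- Note this form assumes the column itself is NOT NULL.
    SELECT *
    FROM   apparel.shipmentm s
    WHERE  s.shipment_id = NVL(:p_shipment_id, s.shipment_id);
    ```

    The equivalent explicit rewrite is a UNION ALL of the two cases, guarded by mutually exclusive :p_shipment_id IS NULL / IS NOT NULL predicates.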

  • How to change average cost of an item?

    We have a scenario where the average cost of the goods has gone completely off from what we think it should be for a few items, for many reasons. But the question is: how do we change the average cost assigned to these items without adding to the inventory?

    Hi,
    You can change the average cost of an item by performing an average cost update.
    The inventory quantity remains unaffected, but depending on whether you are increasing or decreasing the unit cost, the inventory value will get affected to that extent.
    Inventory > Costs > Average Cost Update - query the concerned item and update the cost as required.
    Thanks,
    DS

  • Query using Materalized view in oracle 9i and 10g

    Hello
    There are snapshots (materialized views) used in my application. We have recently migrated from a 9i to a 10g Release 2 database.
    After the migration I looked at the explain plan of a query which uses a materialized view, and I found that in 9i Oracle treats the materialized view as a table, while in 10g Oracle treats it as an MVIEW (MAT_VIEW ACCESS BY INDEX ROWID). However, in 10g the cost of the query using the materialized view is much higher than in 9i, and the execution time also varies randomly.
    Can anybody please explain the difference in materialized view access between Oracle 9i and 10g?
    Thanks

    Can you post your query with the explain plan for both the 9i and 10g versions?
    Thanks,
    karthick.

  • Planned and actual cost

    hi
    After running CAT5 the cost should transfer to PS. In my case the plan/actual % is showing 100% at the activity level. I deliberately entered different times in the timesheet to see a difference in the %; when I enter the time difference at WBS level in CAT2 and then run CAT7, I can see the difference in actual and planned cost.
    My query is: I want to see the difference in plan/actual at the activity level, i.e. after running CAT5. Can anyone help me with this and tell me what settings need to be changed?
    Thanks in advance,
    Subhro

    Hi,
    You can see the actual/plan/variance report using transaction code S_ALR_87013543.
    Subhro

  • Cost in explain plan vs elapsed time

    hi gurus,
    I have two tables with identical data volume (same database/schema/tablespace); the only difference between them is that one is partitioned on a date field.
    statistics are up to date.
    same query is executed against both tables, no partition pruning involved.
    select ....... from non-partitioned
    execution plan cost=92
    elapsed time=117612
    select ... from partitioned
    execution plan cost=3606
    elapsed time=19559
    Though the plan cost of the query against the non-partitioned table is much lower than for the partitioned one, its elapsed time in v$sqlarea is higher.
    What could be the reason, please?
    thanks,
    charles
    Edited by: user570138 on May 6, 2010 6:54 AM

    user570138 wrote:
    if elapsed time value is very volatile (with the difference in explain plan cost), then how can i compare the performance of the query?
    Note that the same query with the same execution plan and the same data can produce different execution times, due to a number of factors. The primary one is that the first time the query is executed, it performs physical I/O and loads the data into the buffer cache, where the same subsequent query finds this data via cheaper and faster logical I/O in the cache.
    in my case i want to compare the performance change before and after table partitioning.
    Then look at the actual utilisation cost of the query and not the (variable) elapsed execution time. The most expensive operation on a db platform is typically I/O. The more I/O, the slower the execution time.
    I/O can be decreased in a number of ways. The usual approach in the database environment is to provide better or alternative data structures, in order to create more optimal I/O paths.
    So instead of using a heap table and a separate PK b+tree index, use an index-organised table. Instead of a single large table with indexes, a partitioned table with local indexes. Instead of joining plain tables, join tables that have been clustered. Etc.
    In most cases, when done correctly, these physical data structure changes do not even impact the application layer. The SQL and code in this layer should be blissfully unaware of the physical structure changes done to improve performance.
    Likewise, changes in the logical layer (data model changes) can also improve performance. This of course does impact the application layer - and requires a proper design and implementation in the physical layer. Often proper data modeling is overlooked and flaws in it is attempted to be fixed by hacking the physical layer.
    my aim is to measure the query performance improvements, if any, by partitioning an existing table
    Why not measure current I/O before an operation and then after the operation, and use the difference to determine the amount of I/O performed by that operation?
    This can be implemented in a start/stop watch fashion (instead of measuring time, measuring I/O) using the v$sesstat virtual performance view.
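    The stop-watch sketch for the current session might look like this (run once before and once after the operation, and diff the values):

    ```sql
    -- Logical and physical I/O counters for the current session.
    SELECT sn.name, ms.value
    FROM   v$mystat   ms
    JOIN   v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('consistent gets', 'db block gets', 'physical reads');
    ```

    The reply mentions v$sesstat; v$mystat is the same data pre-filtered to your own session, which saves the join to v$session.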
