SQL cost

Hi,
Oracle 10.2.0.1
When I try to find the cost of my SQL query, the explain plan shows operations like "range scan" and "full scan", each with an associated cost. How is this cost computed, and how do I interpret what each scan type and its corresponding cost imply?
I try different combinations and, based on comparing the costs, I conclude that I have optimized the query. But from Oracle 10g onwards there is a default optimization technique too. Is it the CBO? Also, is it possible to move an execution from the PGA to the SGA?
Please help.

Hi, don't get too hung up on the exact cost of things, as it's an internal value calculated from numerous factors. We can't just say "oh, it's the number of rows multiplied by the row size divided by the ratio of values found in the index" or some such thing. Just use it as an indication of where the optimiser considers there will be a lot of work to do. Also bear in mind that what is shown in the explain plan is not necessarily how the query will actually be executed, especially if statistics on the tables are not up to date.
Most of the hit for performance issues is around I/O (that is, reading data from the disks).
If you have a specific performance issue you want people to help you with then post details here.
When your query takes too long ...
HOW TO: Post a SQL statement tuning request - template posting
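If you want to see where those cost figures come from, the easiest way is to look at the plan yourself with DBMS_XPLAN (a minimal sketch; the table and predicate are placeholders):
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
The COST column in that output is the optimizer's internal estimate for each step; use it to compare access paths (index range scan vs. full table scan) within one query, not as an absolute number across queries.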

Similar Messages

  • SQL Cost of using DECODE built-in

    I'm trying to tune some SQL and am wondering about the cost of using DECODE. I believe that using this function prevents any index on the related column from being used.
    Do any "Tuning Gurus" have opinions on the use of DECODE?
    TIA,

    As a general rule, the Oracle built-in functions are pretty fast to execute. You will not really see any difference in performance between a sql statement using a built-in function in the SELECT clause, and one not using one.
    However, a lot depends on where and how you use the DECODE.
    SELECT col1,DECODE(col2,1,'YES',2,'NO',3,'MAYBE')
    FROM table
    will be no slower than
    SELECT col1,col2
    FROM table
    If there is an index on col2, then
    SELECT col1,DECODE(col2,1,'YES',2,'NO',3,'MAYBE')
    FROM table
    WHERE col2 BETWEEN 1 AND 3
    will use it.  However
    SELECT col1,col2
    FROM table
    WHERE DECODE(col2,1,'YES',2,'NO',3,'MAYBE') IN ('YES','NO','MAYBE')
    will not (unless you have a function based index).  But,
    SELECT col1,col3
    FROM table
    WHERE col2 BETWEEN 1 AND 3 AND
          col3 = DECODE(col2,1,'YES',2,'NO',3,'MAYBE')
    will. Note that "will" in the above really means "can"; it is up to the optimizer whether the index is actually used or not.
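    For reference, the function-based index mentioned above would look something like this (a sketch using the same placeholder names as the queries above):
    CREATE INDEX table_decode_idx
      ON table (DECODE(col2,1,'YES',2,'NO',3,'MAYBE'));
    With that in place, even the WHERE DECODE(col2,...) IN (...) form can use an index.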
    HTH
    John

  • SQL Query Cost Report..

    Hi all,
    I am a novice with OEM. I want to run a SQL query cost report. How can I do it in OEM? Is it even possible in OEM? Please guide me on this.
    Your suggestion will be helpful.
    Thanks,
    -Mahesh.

    Hi,
    Thanks for your reply, but I want info about Oracle Enterprise Manager 10g.
    The actual requirement is this:
    We have Linux shell scripts for DB activities such as starting up and shutting down the database, perfstat, checking free space, and so on. We want to migrate these into OEM using its built-in features, and we have already migrated a few of them. While analyzing the scripts, we found that one of them produces a "SQL Query Cost Report" (I couldn't work out what it does).
    So is there any option or built-in feature through which we can produce a "SQL Cost Report"?
    Once again thanks for your reply.
    -Mahesh.

  • Using User Defined Functions in SQL

    Hi
    I did the following test to see how expensive it is to use user defined functions in SQL queries, and found that it is really expensive.
    Calling SQRT in SQL costs less than calling a dummy function that just returns its parameter value; this has to do with context switching. But how can we get decent performance compared to Oracle-provided functions?
    Any comments are welcome, especially regarding the performance of UDFs in SQL and possible solutions.
    create or replace function f(i in number) return number is
    begin
      return i;  -- dummy UDF: just returns its argument
    end;
    /
    declare
      l_start   number;
      l_elapsed number;
      n         number;
    begin
      -- baseline: plain aggregate, no function call
      l_start := to_number(to_char(sysdate, 'sssss'));
      for i in 1 .. 20 loop
        select max(rownum) into n from t_tdz12_a0090;
      end loop;
      l_elapsed := to_number(to_char(sysdate, 'sssss')) - l_start;
      dbms_output.put_line('first: '||l_elapsed);
      -- built-in function
      l_start := to_number(to_char(sysdate, 'sssss'));
      for i in 1 .. 20 loop
        select max(sqrt(rownum)) into n from t_tdz12_a0090;
      end loop;
      l_elapsed := to_number(to_char(sysdate, 'sssss')) - l_start;
      dbms_output.put_line('second: '||l_elapsed);
      -- user-defined function
      l_start := to_number(to_char(sysdate, 'sssss'));
      for i in 1 .. 20 loop
        select max(f(rownum)) into n from t_tdz12_a0090;
      end loop;
      l_elapsed := to_number(to_char(sysdate, 'sssss')) - l_start;
      dbms_output.put_line('third: '||l_elapsed);
    end;
    /
    Results:
       first: 303
       second: 1051
       third: 1515
    Kind regards
    Taoufik

    > I find that inline SQL is bad for performance but good to simplify SQL.
    > I keep thinking that it should be possible somehow to use a function to
    > improve performance but have never seen that happen.
    Inline SQL is only bad for performance if the database design (table structure, indexes etc.) is poor or the way the SQL is written is poor.
    Context switching between SQL and PL/SQL for a user-defined function is definitely a way to slow down performance.
    Obviously built-in Oracle functions are going to be quicker than user-defined functions, because they are written into the SQL and PL/SQL engines and are optimized for the internals of those engines.
    > There are a few things you can do to improve function performance, shaving
    > microseconds off execution time. Consider using the NOCOPY hint for your
    > parameters to use pointers instead of copying values. NOCOPY is a hint
    > rather than a directive so it may or may not work. Optimize any SQL in the
    > called function. Don't do anything in loops that does not have to be done
    > inside a loop.
    Well, yes, but it's even better to keep all processing in SQL where possible and only resort to PL/SQL when absolutely necessary.
    > The on-line documentation has suggested that using a DETERMINISTIC function
    > can improve performance but I have not been able to demonstrate this and
    > there are notes in Metalink suggesting that this does not happen. My
    > experience is that DETERMINISTIC functions always get executed. There's
    > supposed to be a feature in 11g that actually caches function return values.
    Deterministic functions will work well if used in conjunction with a function-based index. That can improve access times when querying data on the function results.
    > You can use DBMS_PROFILER to get run-time statistics for each line of your
    > function as it is executed to help tune it.
    Or code it as SQL. ;)
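    One more trick worth knowing (a sketch; t, col and f here are placeholder names, not objects from this thread): wrapping the UDF call in a scalar subquery lets Oracle use scalar subquery caching, so the function is executed roughly once per distinct input value instead of once per row.
    -- called once per row:
    select max(f(col)) from t;
    -- scalar subquery caching: f is executed far fewer times when col repeats
    select max((select f(col) from dual)) from t;
    The benefit depends entirely on how many distinct values of col there are; for all-distinct inputs (like the ROWNUM test above) it buys nothing.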

  • Can we change physical SQL generated by BI Server?

    Hello gurus:
    I have a doubt. How can we change the physical SQL generated by OBIEE?
    is there any way we can modify the SQL Cost?
    Let me know.
    Thanks.
    Vinay R.

    So if I have a view and a physical table, will the view have a lower cost compared to the table?
    Can you explain a bit more about this?
    There is absolutely no way we can estimate the cost of a view or a table we don't know. There are too many variables, so your question is too broad and imprecise. Generally speaking, views that select from only one table do not add any extra cost to the execution plan; Oracle will convert them to the underlying table. Having said that, the usual reason to have a view is to join different tables and apply some data-filtering conditions, so it will depend a lot on what kind of query you are running.

  • Any difference between DISTINCT and aggregate functions in SQL query cost?

    Hi,
    I have executed many SQL statement patterns, such as:
    a) using a single table
    b) using two tables, with simple joins or outer joins
    but I have not noticed any difference between the statements in cost or in execution plan.
    Anyway, my colleague insists that using an aggregate function is less costly than DISTINCT (something I have not confirmed, which is why I believe they are exactly the same).
    For the first SQL pattern referred to above, we could for example use:
    select distinct deptno
    from emp
    select count(*), deptno
    from emp
    group by deptno
    select distinct owner, object_type
    from all_objects
    select count(*), owner, object_type
    from all_objects
    group by owner, object_type
    Have you found any difference between the two, ever?
    Note: I use Ora DB 10g v2.
    Thank you,
    Sim

    DISTINCT and aggregate functions are for different uses and may give the same result, but if you are using an aggregate function just to get distinct records, it will be more expensive...
    For example:
    select distinct deptno from scott.dept;
    Statistics
    0 recursive calls
    0 db block gets
    2 consistent gets
    0 physical reads
    0 redo size
    584 bytes sent via SQL*Net to client
    488 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    4 rows processed
    select deptno from scott.emp group by deptno;
    Statistics
    307 recursive calls
    0 db block gets
    60 consistent gets
    6 physical reads
    0 redo size
    576 bytes sent via SQL*Net to client
    488 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    6 sorts (memory)
    0 sorts (disk)
    3 rows processed
    Nimish Garg
    Software Developer
    *(Oracle & ASP.NET)*
    Indiamart Intermesh Limited, Noida
    To Get Free Oracle & ASP.NET Code Snippets
    Follow: http://nimishgarg.blogspot.com
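    Note that the two examples above query different tables (scott.dept vs. scott.emp), so the statistics are not directly comparable. A like-for-like check (a sketch, assuming SQL*Plus and the standard SCOTT schema) would run both forms against the same table:
    set autotrace traceonly statistics
    select distinct deptno from scott.emp;
    select deptno from scott.emp group by deptno;
    On 10g both forms typically get equivalent plans (HASH UNIQUE vs. HASH GROUP BY), which is consistent with the original poster seeing no difference.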

  • Does SQL Azure charge cross region traffic cost?

    Say, I have a SQL DB on West US, and have two cloud services, one hosted on West US and the other hosted on East US.
    Assuming the two cloud services have exactly the same read/write throughput against the DB, is the cost the same? I am wondering whether the East US service costs more, since it needs cross-region traffic.

    Hi,
    Is data transfer between Azure services located within the same region charged?
    No. For example, an Azure SQL database in the same region will not have any additional data transfer costs.
    Is data transfer between Azure services located in two regions charged?
    Yes. Outbound data transfer is charged at the normal rate and inbound data transfer is free.
    Reference :
    http://azure.microsoft.com/en-us/pricing/details/data-transfers/
    Regards,
    Mekh.

  • Improve Efficiency of SQL Query (reducing Hash Match cost)

    I have the following SQL query that only takes 6 seconds to run, but I am trying to get it down to around 3 seconds if possible. I've noticed that there are 3 places in the Execution Plan that have pretty high costs. 1: Hash Match (partial aggregate) - 12%.
    2: Hash Match (inner join) - 36%. 3: Index Scan - 15%.
    I've been researching Hash Match for a couple days now, but I just don't seem to be getting it. I can't seem to figure out how to decrease the cost. I've read that OUTER APPLY is really inefficient and I have two of those in my query, but I haven't been
    able to figure out a way to get the results I need without them. 
    I am fairly new to SQL so I am hoping I can get some help with this.
    SELECT wi.WorkItemID,
    wi.WorkQueueID as WorkQueueID,
    wi.WorkItemTypeID,
    wi.WorkItemIdentifier,
    wi.DisplayIdentifier,
    wi.WorkItemStatusID,
    wi.SiteID,
    wi.AdditionalIdentifier,
    wi.WorkQueueDescription,
    wi.WorkItemTypeDescription,
    wi.WorkQueueCategoryDescription,
    wi.Active,
    wi.CheckedOutOn,
    wi.CheckedOutBy_UserID,
    wi.CheckedOutBy_UserName,
    wi.CheckedOutBy_FirstName,
    wi.CheckedOutBy_LastName,
    wi.CheckedOutBy_FullName,
    wi.CheckedOutBy_Alias,
    b.[Description] as BatchDescription,
    bt.[Description] as BatchType,
    bs.[Description] as PaymentBatchStatus,
    b.PostingDate AS PostingDate,
    b.DepositDate,
    b.BatchDate,
    b.Amount as BatchTotal,
    PostedAmount = ISNULL(PostedPayments.PostedAmount, 0),
    TotalPayments = ISNULL(PostedPayments.PostedAmount, 0), --Supporting legacy views
    TotalVariance = b.Amount - ISNULL(PostedPayments.PostedAmount, 0), -- ISNULL(Payments.TotalPayments, 0),
    PaymentsCount = ISNULL(Payments.PaymentsCount, 0),
    ISNULL(b.ReferenceNumber, '') AS PaymentBatchReferenceNumber,
    b.CreatedOn,
    b.CreatedBy_UserID,
    cbu.FirstName AS CreatedBy_FirstName,
    cbu.LastName AS CreatedBy_LastName,
    cbu.DisplayName AS CreatedBy_DisplayName,
    cbu.Alias AS CreatedBy_Alias,
    cbu.UserName AS CreatedBy_UserName,
    b.LastModifiedOn,
    b.LastModifiedBy_UserID,
    lmbu.FirstName AS LastModifiedBy_FirstName,
    lmbu.LastName AS LastModifiedBy_LastName,
    lmbu.DisplayName AS LastModifiedBy_DisplayName,
    lmbu.Alias AS LastModifiedBy_Alias,
    lmbu.UserName AS LastModifiedBy_UserName,
    0 AS VisitID, --Payment work items are not associated with VisitID, but it is a PK field on all the Work Queue view models...for now...
    0 AS RCMPatientID, --Payment work items are not associated with RCMPatientID, but it is a PK field on all the Work Queue view models...for now...
    0 AS PatientID
    FROM Account.PaymentBatch AS b (NOLOCK)
    INNER JOIN ViewManager.WorkItems AS wi (NOLOCK)
    ON wi.WorkitemIdentifier = b.PaymentBatchID
    AND wi.WorkItemTypeID = 3
    INNER JOIN Account.PaymentBatchStatus AS bs (NOLOCK)
    ON b.PaymentBatchStatusID = bs.PaymentBatchStatusID
    LEFT JOIN Account.PaymentBatchType bt (NOLOCK)
    ON b.PaymentBatchTypeID = bt.PaymentBatchTypeID
    INNER JOIN ViewManager.[User] AS cbu (NOLOCK)
    ON b.CreatedBy_UserID = cbu.UserID
    INNER JOIN ViewManager.[User] AS lmbu (NOLOCK)
    ON b.LastModifiedBy_UserID = lmbu.UserID
    LEFT JOIN (
    SELECT p.PaymentBatchID
    , SUM(p.Amount) AS TotalPayments
    , COUNT(0) AS PaymentsCount
    FROM Account.Payment AS p (NOLOCK)
    WHERE p.PaymentTypeID = 1
    AND ISNULL(p.Voided, 0) = 0
    GROUP BY p.PaymentBatchID
    ) AS Payments ON b.PaymentBatchID = Payments.PaymentBatchID
    LEFT JOIN (
    SELECT p.PaymentBatchID
    , SUM(pa.Amount) AS PostedAmount
    FROM Account.Payment AS p (NOLOCK)
    INNER JOIN Account.PaymentAllocation AS PA (NOLOCK)
    ON p.PaymentID = pa.PaymentID
    AND (pa.AllocationTypeID = 101 OR pa.AllocationTypeID = 111)
    WHERE p.PaymentTypeID = 1
    AND ISNULL(p.Voided, 0) = 0
    GROUP BY p.PaymentBatchID
    ) AS PostedPayments ON b.PaymentBatchID = PostedPayments.PaymentBatchID
    OUTER APPLY (
    SELECT
    P.PaymentBatchID,
    SUM(CASE WHEN P.PaymentTypeID = 1 THEN 1 ELSE 0 END) as PaymentsCount --only count regular payments not adjustments
    FROM
    Account.Payment p (NOLOCK)
    WHERE
    p.PaymentBatchID = b.PaymentBatchID
    AND p.PaymentTypeID IN (1,2) AND ISNULL(p.Voided, 0)= 0
    GROUP BY
    P.PaymentBatchID
    ) payments
    OUTER APPLY (
    SELECT
    P.PaymentBatchID,
    SUM(pa.Amount) as PostedAmount
    FROM
    Account.PaymentAllocation pa (NOLOCK)
    INNER JOIN
    Account.Payment p (NOLOCK) ON pa.PaymentID = p.PaymentID
    INNER JOIN
    Account.AllocationType t (NOLOCK) ON pa.AllocationTypeID = t.AllocationTypeID
    WHERE
    p.PaymentBatchID = b.PaymentBatchID
    AND p.PaymentTypeID IN (1,2)
    AND ISNULL(p.Voided, 0)= 0
    --AND (t.Credit = 0
    --OR (t.Credit <> 0 And Offset_PaymentAllocationID IS NULL AND (SELECT COUNT(1) FROM Account.paymentAllocation pa2 (NOLOCK)
    -- WHERE pa2.PaymentID = pa.PaymentID AND pa2.AllocationTypeID IN (101, 111)
    -- AND pa2.Offset_PaymentAllocationID IS NULL) > 0))
    GROUP BY
    P.PaymentBatchID
    ) PostedPayments

    The percentages you see are only estimates and may not necessarily reflect where the real bottleneck is, particularly if there is a misestimate somewhere.
    To be able to help you improve the performance, we need to see the CREATE TABLE and CREATE INDEX statements for the tables. We also need to see the actual query plan. (In XML format, not a screen shot.) Posting all this here is not practical, but you could
    upload it somewhere. (Dropbox, Google Drive etc.)
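    For reference, one way to capture the actual XML plan (a minimal sketch; run it from SSMS or sqlcmd around your own query):
    SET STATISTICS XML ON;
    -- run the query from the original post here
    SET STATISTICS XML OFF;
    The actual plan includes real row counts next to the estimates, which is what exposes a misestimate.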
    Be very careful with the NOLOCK hint. Using the NOLOCK hint casually leads to transient erratic behaviour which is very difficult to understand. Using it consistently throughout a query, like you do, is definitely bad practice.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • SQL Developer Data Modeler: free of licensing costs like SQL Developer?

    Sue and SQL Developer Team,
    At what release did Data Modeler become free? And by "free" does this mean free for development only? Or free of cost for any and all production use? From what I recall, early releases of Data Modeler were "free" to download but required for-pay licensing to use.
    http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html
    "SQL Developer Data Modeler is a free data modeling and design tool..."
    If there is a license fee for Data Modeler, then the above statement would be as misleading as one like the following:
    "Oracle Enterprise 11g is a free database..."
    Free to download, yes. Not free for production usage.

    Evidently the answer is "yes" this time. Free as in free of licensing costs:
    http://blogs.oracle.com/OTechMusings/2010/09/free_as_in_beer_data_modeler.html
    http://www.oracle.com/technetwork/developer-tools/datamodeler/pricing-faq-101047.html

  • When a SQL statement switches from RULE to COST-BASED

    Product: ORACLE SERVER
    Date written: 2004-05-28
    When a SQL statement switches from RULE to COST-BASED
    ==============================================
    PURPOSE
    This note looks at the cases in which a SQL statement is automatically
    switched to cost-based mode.
    Explanation
    Even when a SQL statement is executed in rule-based mode, there are cases
    where the optimizer switches it to cost-based mode.
    This happens when the SQL involves any of the following:
    - Partitioned tables
    - Index-organized tables
    - Reverse key indexes
    - Function-based indexes
    - SAMPLE clauses in a SELECT statement
    - Parallel execution and parallel DML
    - Star transformations
    - Star joins
    - Extensible optimizer
    - Query rewrite (materialized views)
    - Progress meter
    - Hash joins
    - Bitmap indexes
    - Partition views (release 7.3)
    - Hints (any hint other than RULE or DRIVING_SITE)
    - The FIRST_ROWS and ALL_ROWS optimizer modes operate as CBO even without statistics
    - A parallel degree or INSTANCE setting on the table or index (including DEFAULT)
    - A domain index (e.g. a Text index) created on the table
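    As a quick illustration of the hint rule above (a sketch; EMP is a placeholder table), any hint other than RULE or DRIVING_SITE flips the statement to the CBO even when the session is rule-based:
    ALTER SESSION SET optimizer_mode = RULE;
    -- rule-based: the plan shows no cost figures
    EXPLAIN PLAN FOR SELECT * FROM emp WHERE empno = 7839;
    -- the hint switches this statement to the cost-based optimizer
    EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS */ * FROM emp WHERE empno = 7839;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);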

  • SQL query running slow with lower cost

    Hello
    I am working on Oracle 11g on AIX. I have one SQL query which is running slow, as reported by a user.
    When I comment out a few lines of code, it runs fast.
    I noticed that the execution plan cost of the first query is lower and the cost of the second query is higher, but users say the second one runs fast.
    How can this be possible?
    Any idea why the second query runs fast after commenting out a few columns in the SELECT and GROUP BY clauses?
    Query 1
    SELECT PH.CTRL_NBR, PD.SEQ_NBR,PH.CNTRY,PH.SHIP_DATE, PHI.WAVE_NBR, PD.ID, PD.QTY,
    IM.PACK_QTY,
    IM.UNIT_VOL,
    IM.PACK_QTY,
    MAX(CD.PACK_QTY) AS CASE_QTY,
    IM.UNIT_WT
    FROM HDR PH,
    HDR_INTRNL PHI,
    DTL PD,
    HDR CH,
    CASEDTL CD,
    IMASTER IM
    WHERE PH.CTRL_NBR = PHI.CTRL_NBR
    AND PD.CTRL_NBR = PH.CTRL_NBR
    AND PD.QTY > 0
    AND SUBSTR(CD.ID, 1, 9) = SUBSTR(PD.ID, 1, 9)
    AND CD.CASENBR = CH.CASENBR
    AND CH.STAT_CODE BETWEEN '10' AND '90'
    AND IM.ID = PD.ID
    AND PHI.WAVE_NBR='EL57893'
    GROUP BY PH.CTRL_NBR, PD.SEQ_NBR, PH.CNTRY, PH.SHIP_DATE, PHI.WAVE_NBR, PD.ID, PD.QTY,
    IM.PACK_QTY,
    IM.UNITVOL,
    IM.UNITWT,
    IM.PACK_QTY
    Query 2:
    SELECT PH.CTRL_NBR,
    PD.SEQ_NBR,
    PH.CNTRY,
    PH.SHIP_DATE,
    PHI.WAVE_NBR,
    PD.ID,
    PD.QTY,
    -- IM.PACK_QTY,
    -- IM.UNIT_VOL,
    -- IM.PACK_QTY,
    MAX(CD.PACK_QTY) AS CASE_QTY,
    -- IM.UNIT_WT
    FROM HDR PH,
    HDR_INTRNL PHI,
    DTL PD,
    HDR CH,
    CASEDTL CD,
    IMASTER IM
    WHERE PH.CTRL_NBR = PHI.CTRL_NBR
    AND PD.CTRL_NBR = PH.CTRL_NBR
    AND PD.QTY > 0
    AND SUBSTR(CD.ID, 1, 9) = SUBSTR(PD.ID, 1, 9)
    AND CD.CASENBR = CH.CASENBR
    AND CH.STAT_CODE BETWEEN '10' AND '90'
    AND IM.ID = PD.ID
    AND PHI.WAVE_NBR='EL57893'
    GROUP BY PH.CTRL_NBR, PD.SEQ_NBR, PH.CNTRY, PH.SHIP_DATE, PHI.WAVE_NBR, PD.ID, PD.QTY,
    --IM.PACK_QTY,
    --IM.UNITVOL,
    --IM.UNITWT,
    --IM.PACK_QTY

    oradba11 wrote:
    Hello
    I am working on Oracle 11g on AIX. I have one SQL query which is running slow, as reported by a user.
    When I comment out a few lines of code, it runs fast.
    I noticed that the execution plan cost of the first query is lower and the cost of the second query is higher, but users say the second one runs fast.
    How can this be possible? <snip>
    This doesn't address your question, but let me suggest that, for your own sanity, you start bringing some formatting to your SQL. And for the sanity of those on this forum from whom you expect assistance, preserve that formatting through the use of the code tags (see the FAQ for details).
    I've done the first one for you, as an example of what I mean
    SELECT
         PH.CTRL_NBR
    ,     PD.SEQ_NBR
    ,     PH.CNTRY
    ,     PH.SHIP_DATE
    ,     PHI.WAVE_NBR
    ,     PD.ID
    ,     PD.QTY
    ,     IM.PACK_QTY
    ,     IM.UNIT_VOL
    ,     IM.PACK_QTY
    ,     MAX(CD.PACK_QTY) AS CASE_QTY
    ,     IM.UNIT_WT
    FROM
         HDR PH
    ,     HDR_INTRNL PHI
    ,     DTL PD
    ,     HDR CH
    ,     CASEDTL CD
    ,     IMASTER IM
    WHERE
         PH.CTRL_NBR = PHI.CTRL_NBR
       AND  PD.CTRL_NBR = PH.CTRL_NBR
       AND  PD.QTY > 0
       AND  SUBSTR(CD.ID, 1, 9) = SUBSTR(PD.ID, 1, 9)
       AND  CD.CASENBR = CH.CASENBR
       AND  CH.STAT_CODE BETWEEN '10' AND '90'
       AND  IM.ID = PD.ID
       AND  PHI.WAVE_NBR='EL57893'
    GROUP BY
         PH.CTRL_NBR
    ,     PD.SEQ_NBR
    ,     PH.CNTRY
    ,     PH.SHIP_DATE
    ,      PHI.WAVE_NBR
    ,      PD.ID
    ,      PD.QTY
    ,      IM.PACK_QTY
    ,     IM.UNITVOL
    ,     IM.UNITWT
    ,     IM.PACK_QTY
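    On the actual question: the cost column is only an estimate, so a lower-cost plan can easily run slower. Rather than comparing costs, compare where the time actually goes. A sketch (10g/11g; run your query once with the hint, then display the cursor):
    SELECT /*+ GATHER_PLAN_STATISTICS */ ... rest of your query ... ;
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    Comparing the E-Rows (estimated) and A-Rows (actual) columns shows where the optimizer's estimates, and therefore its costs, went wrong.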

  • Query Cost vs SQL statistics

    I have a query which, when I execute it, shows a cost of 4000+; after I optimized it, the cost came down to about 300 or so.
    My problem is that when I discussed the query with my manager, he showed me that the statistics of the first query version were better than those of the second one (we made sure it was not cached).
    How come the second one runs faster and has a lower cost even though its SQL statistics are worse?
    example:
    first query:
    cost: 4000
    time to run: 2:00+ minutes
    1 recursive call
    1310 consistent gets
    2600 physical reads
    second query:
    cost: 290
    time to run: ~1:30 minutes
    1200 recursive calls
    13000 consistent gets
    30000 physical reads

    user8973401 wrote:
    I have a query which, when I execute it, shows a cost of 4000+; after I optimized it, the cost came down to about 300 or so.
    How come the second one runs faster and has a lower cost even though its SQL statistics are worse?
    example:
    first query:
    cost: 4000
    time to run: 2:00+ minutes
    1 recursive call
    1310 consistent gets
    2600 physical reads
    second query:
    cost: 290
    time to run: ~1:30 minutes
    1200 recursive calls
    13000 consistent gets
    30000 physical reads
    I personally use execution time together with the metrics you listed to determine when one version of a query is better than another. There are gray areas, like the situation you listed, where "better" is hard to determine. Another useful metric is CPU, which AUTOTRACE does not list but which can be found using V$SQL, traces, or AWR reports. Sometimes the biggest challenge is making sure the result set of the tuned query is the same as the original one ;)
    You listed mixed results which can happen. The run time was 2 minutes vs 1:30 but other metrics were higher. The CPU on the first query might be lower.
    Oh, it DOES happen that a higher-cost version of a query executes faster and uses fewer resources than a lower-cost version. Remember that the COST is an estimate which is usually, but not always, accurate.
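    Since AUTOTRACE doesn't show CPU, here is a sketch of pulling it from V$SQL (requires SELECT privilege on the view; the comment in the LIKE filter is a placeholder tag you would put in your own statement text):
    SELECT sql_id, executions, cpu_time, elapsed_time,
           buffer_gets, disk_reads
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT /* my_test_query */%';
    CPU_TIME and ELAPSED_TIME are in microseconds and are cumulative across executions.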

  • Dynamically extracting SQL query costs/statistics at runtime

    Hi there,
    I need to measure the I/O cost (such as 'physical reads') of a SQL statement at runtime. What I'm looking to do is to test the total cost of a number of queries in a Java program, so I need to get the cost automatically and add the figures up.
    There are a few tools that can give statistics on I/O cost, such as autotrace, but I need to collect this information from Java through JDBC.
    Any pointers?
    Also, I am a little confused about the terms used in Oracle. What I need is the number of DB blocks accessed by the query, either from the buffer cache or from disk. Are these referred to as 'logical reads' and 'physical reads' respectively? Or do 'logical read' and 'physical read' just mean the number of read operations?
    Thank you,

    Logical and consistent reads are against the DB buffers; physical reads are disk I/O. When evaluating the cost of a query, pay attention to an abnormally high number of logical reads. This is usually an indicator of a poorly performing query, e.g. a correlated sub-query.
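    A sketch of one way to do this over JDBC (assuming the session has SELECT privilege on the V$ views): snapshot the session statistics before and after the statement on the same connection and diff the values:
    SELECT sn.name, ms.value
    FROM   v$mystat ms
    JOIN   v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('session logical reads', 'physical reads');
    Run this as an ordinary PreparedStatement before and after your target SQL; the difference in each value is the work attributable to the statement. 'session logical reads' is the buffer figure and 'physical reads' the disk figure.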

  • SQL tuning or a simple compromise between time and cost?

    Hi,
    What I understand is that SQL tuning is a simple compromise between time and cost. Objectives of SQL tuning include:
    Reduce Cost
    Reduce Time
    Better Results
    Is it right and correct?
    Adith

    NO, WRONG.
    Reducing COST is meaningless, because COST is meaningless outside a single query. It's used internally by the optimizer to weigh the costs and benefits of different execution plans (access methods) for a single query. Cost CANNOT be compared across queries. And if you add a hint to a SQL statement, it becomes a different statement (proven by looking in V$SQL), so you cannot compare costs. In fact, when you provide a hint, Oracle artificially lowers the cost associated with the hinted action in order to make it look better to the optimizer, helping it to be chosen.
    Reduce time. Reduce I/O. Reduce memory usage. That's it.

  • Least Cost Routing in SQL 2008

    Has anybody tried something like least cost routing in SQL 2008? I have a small road network represented by linestrings in SQL, and I'd like to find the shortest (or any other) route between two points. Does anyone have links to whitepapers... tutorials maybe?

    I would like to perform routing in MS SQL 2012, similar to what pgRouting does.
    In pgRouting I do the following steps to create a routable network:
    1)
    create a database routing with template "template_routing"
    2)
    create table "road_network" with following constraints
    CHECK (st_ndims(the_geom) = 2)
    CHECK (geometrytype(the_geom) = 'MULTILINESTRING'::text OR the_geom IS NULL)
    CHECK (st_srid(the_geom) = 4030)
    Then
    -- @ CREATE INDEX FOR THE ROAD TABLE -------------------- IMPORTANT
    CREATE INDEX spatialindex_road
     ON road_network
     USING gist
     (the_geom);
    3)
    Perform following queries
    ALTER TABLE road_network ADD COLUMN "source" integer;
    ALTER TABLE road_network ADD COLUMN "target" integer;
    SELECT assign_vertex_id('road_network', 0.00001, 'the_geom', 'gid');
    CREATE INDEX source_idx ON road_network("source");
    CREATE INDEX target_idx ON road_network("target");
    ALTER TABLE road_network  ADD COLUMN length double precision;
    UPDATE road_network  SET length = length(the_geom);
    ALTER TABLE road_network  ADD COLUMN reverse_cost double precision;
    UPDATE road_network  SET reverse_cost = length;
    ALTER TABLE road_network  ADD COLUMN x1 double precision;
    ALTER TABLE road_network  ADD COLUMN y1 double precision;
    ALTER TABLE road_network  ADD COLUMN x2 double precision;
    ALTER TABLE road_network  ADD COLUMN y2 double precision;
    UPDATE road_network  SET x1 = x(ST_PointN(the_geom, 1));
    UPDATE road_network  SET y1 = y(ST_PointN(the_geom, 1));
    UPDATE road_network  SET x2 = x(ST_PointN(the_geom, ST_NumPoints(the_geom)));
    UPDATE road_network  SET y2 = y(ST_PointN(the_geom, ST_NumPoints(the_geom)));
    alter table road_network add column cost double precision default 0;
    update road_network set cost=0.1 where type='NH';
    update road_network set cost=0.2 where type='SH';
    update road_network set cost=0.3 where type='major';
    update road_network set cost=0.4 where type='minor';
    update road_network set cost=1.2 where type='colony';
    update road_network set cost=0.8 where type='third';
    4)
    Now network table created
    To check, run the following in your PostgreSQL database.
    Pick a table name and create the table:
    CREATE TABLE shortest_path_astar_table_3(gid int4) with oids;
    SELECT AddGeometryColumn( 'shortest_path_astar_table_3', 'the_geom', 4030, 'MULTILINESTRING', 2 );
    INSERT INTO shortest_path_astar_table_3(the_geom) 
    SELECT the_geom FROM astar_sp_directed('road_network',37,43,true,true);
    Open "shortest_path_astar_table_3" on QGIS and check the path
    Is there any similar way to perform similar queries in SQL ?
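    SQL Server has no built-in routing engine equivalent to pgRouting, so the Dijkstra/A* part has to live in application code or a CLR procedure. The column derivations from step 3, however, translate fairly directly to SQL Server's spatial methods (a sketch; road_network and the_geom are the names from the pgRouting steps above):
    ALTER TABLE road_network
        ADD length float, x1 float, y1 float, x2 float, y2 float;
    UPDATE road_network
    SET length = the_geom.STLength(),
        -- first point of the linestring
        x1 = the_geom.STPointN(1).STX,
        y1 = the_geom.STPointN(1).STY,
        -- last point of the linestring
        x2 = the_geom.STPointN(the_geom.STNumPoints()).STX,
        y2 = the_geom.STPointN(the_geom.STNumPoints()).STY;
    The cost columns can then be populated with the same kind of UPDATE ... WHERE type = ... statements as in step 3.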
