Query execution time - elapsed time vs actual time taken

Hi All,
I have a scenario where I am querying a single table with the results below. It is a fairly heavy query in that it contains multiple aggregate functions and multiple UNIONs. Even if the query were written poorly (I doubt it is), why would the actual
time taken to execute the query be so much greater than the statistics reported by the following commands?
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
Attached are the stats provided for the relevant query in question.
Table '123456789_TEMP_DATA'. Scan count 178, logical reads 582048, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 936 ms,  elapsed time = 967 ms.
2014-01-06 17:36:41.383
Now, although the CPU time and elapsed time show that it takes less than a second, it actually takes more than 15 seconds to fetch the results. (This is also the actual time shown on the bottom bar of the query pane.)
What is the reason? Why is it that there is such a big discrepancy between the numbers? How can I improve this situation?
Thanks!

Yes. I am returning a huge number of rows to the client. 
The query is simply against a single table. 
SELECT 'First Record', AVG(COLUMN1), STDEV(COLUMN1), COUNT(COLUMN1)
FROM [TABLE1]
WHERE (SOME CONDITION)
UNION ALL
SELECT 'Second Record', AVG(COLUMN2), STDEV(COLUMN2), COUNT(COLUMN2)
FROM [TABLE1]
WHERE (SOME OTHER CONDITION)
Imagine 178 rows being fetched in this manner, via 178 SELECT statements combined with UNION ALL; the WHERE clause changes for each SELECT statement.
Now, the question is not so much about the query itself, but about why the execution actually takes 15 seconds while the SQL statistics report 936 ms (less than 1 second).
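One hedged way to see where the extra ~14 seconds go is to look at wait statistics right after the run. SET STATISTICS TIME only reports the server-side CPU and elapsed time of the execution itself; time the server spends waiting for the client (SSMS included) to fetch and render the rows is not part of that figure, but it would show up as ASYNC_NETWORK_IO waits:
-- Server-wide wait stats; on a quiet test box, a jump in ASYNC_NETWORK_IO after the run
-- suggests the server was waiting on the client to consume the result set.
-- (Newer SQL Server versions also offer sys.dm_exec_session_wait_stats for per-session figures.)
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';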
Thanks!

Similar Messages

  • X1 DVR Guide time does not match actual time

    I just installed my X1 platform DVR and everything went smooth, but then I noticed that the TV guide is set for Eastern time while I'm in Pacific.  My DVR display shows the correct time.  How can I fix this, I've been through the menu numerous times without finding anything close to being able to reset.

    Unplugging and rebooting the X1 box will fix this.

  • Adding estimation and actual time information to SLFN tickets

    Hello,
    We want to track some basic information about estimation and actual work done in any single ticket.  So, we want to add some fields such as:
    - Number of estimated consulting and programming hours for each ticket
    - Number of actual consulting and programming hours for each ticket.
    Then we want to report on them.
    Is there an easy way to do this? Can we use "products"? Can someone provide some help in setting up this information?
    Regards
    Esteban Hartzstein

    Hello,
    This is something you can try.
    Go to customizing:
    SAP solution manager / configuration / Scenario specific settings / service desk / service desk / general settings / set up the original screen profile
    Find the combination of screen profile type / transaction type = SRVO / SLFN
    Change the screen profile to SRV_SLFN_2.
    Go to transaction CRMD_ORDER, and create a message of type SLFN.
    You will now notice that a button shows up called 'item details' between the fast entry and  transaction data buttons.
    Click on it.
    There you can add your products, maybe one for planned time and one for actual time (remember to put the actual time before closing the message)
    These fields will show in transaction CRM_DNO_MONITOR.
    If you need to create products, use transaction COMMPR01, and if you need to create product hierarchies, use COMM_HIERARCHY.
    Rgds.

  • Query execution slow

    Hi Experts,
    I have a problem with query execution. It is taking too long to execute.
    Query is like this :
    SELECT   gcc_po.segment1 bc,
             gcc_po.segment2 rc,
             gcc_po.segment3 dept,
             gcc_po.segment4 ACCOUNT,
             gcc_po.segment5 product,
             gcc_po.segment6 project,
             gcc_po.segment7 tbd,
             SUBSTR (pv.vendor_name, 1, 50) vendor_name,
             pv.vendor_id,
             NVL (ph.closed_code, 'OPEN') status,
             ph.cancel_flag,
             ph.vendor_site_id,
             ph.segment1 po_number,
             ph.creation_date po_creation_date,
             pv.segment1 supplier_number,
             pvsa.vendor_site_code,
             ph.currency_code po_curr_code,
             ph.blanket_total_amount,
             NVL (ph.rate, 1) po_rate,
             SUM (DECODE (:p_currency,
                          'FUNCTIONAL', DECODE (:p_func_curr_code,
                                                ph.currency_code, NVL (pd.amount_billed, 0),
                                                NVL (pd.amount_billed, 0) * NVL (ph.rate, 1)),
                          NVL (pd.amount_billed, 0)
                         )) amt_vouchered,
             ph.po_header_id poheaderid,
             INITCAP (ph.attribute1) po_type,
             DECODE (ph.attribute8,
                     'ARIBA', DECODE (ph.attribute4,
                                      NULL, ph.attribute4,
                                      ppf.full_name),
                     ph.attribute4
                    ) origanator,
             ph.attribute8 phv_attribute8,
             UPPER (ph.attribute4) phv_attribute4
        FROM po_headers ph,
             po_vendors pv,
             po_vendor_sites pvsa,
             po_distributions pd,
             gl_code_combinations gcc_po,
             per_all_people_f ppf
       WHERE ph.segment1 BETWEEN '001002' AND 'IND900714'
         AND ph.vendor_id = pv.vendor_id(+)
         AND ph.vendor_site_id = pvsa.vendor_site_id
         AND ph.po_header_id = pd.po_header_id
         AND gcc_po.code_combination_id = pd.code_combination_id
         AND pv.vendor_id = pvsa.vendor_id
         AND UPPER (ph.attribute4) = ppf.attribute2(+) -- no  index on attributes
         AND ph.creation_date BETWEEN ppf.effective_start_date(+) AND ppf.effective_end_date(+)
    GROUP BY gcc_po.segment1,-- no index on segments
             gcc_po.segment2,
             gcc_po.segment3,
             gcc_po.segment4,
             gcc_po.segment5,
             gcc_po.segment6,
             gcc_po.segment7,
             SUBSTR (pv.vendor_name, 1, 50),
             pv.vendor_id,
             NVL (ph.closed_code, 'OPEN'),
             ph.cancel_flag,
             ph.vendor_site_id,
             ph.segment1,
             ph.creation_date,
             pvsa.attribute7,
             pv.segment1,
             pvsa.vendor_site_code,
             ph.currency_code,
             ph.blanket_total_amount,
             NVL (ph.rate, 1),
             ph.po_header_id,
             INITCAP (ph.attribute1),
             DECODE (ph.attribute8,
                     'ARIBA', DECODE (ph.attribute4,
                                      NULL, ph.attribute4,
                                      ppf.full_name),
                     ph.attribute4),
             ph.attribute8,
             ph.attribute4
    Here, without the SUM function and GROUP BY, the query executes fast. If I use the SUM function and GROUP BY it takes nearly 45 minutes.
    Explain plan for this:
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS          1             6364                              
      HASH GROUP BY          1       272       6364                              
        NESTED LOOPS OUTER          1       272       6363                              
          NESTED LOOPS          1       232       6360                              
            NESTED LOOPS          1       192       6358                              
              NESTED LOOPS          1       171       6341                              
                HASH JOIN          1 K     100 K     2455                              
                  TABLE ACCESS FULL     PO_VENDOR_SITES_ALL     1 K     36 K     1683                              
                  TABLE ACCESS FULL     PO_VENDORS     56 K     3 M     770                              
                TABLE ACCESS BY INDEX ROWID     PO_HEADERS_ALL     1       82       53                              
                  INDEX RANGE SCAN     PO_HEADERS_N1     69             2                              
              TABLE ACCESS BY INDEX ROWID     PO_DISTRIBUTIONS_ALL     1       21       17                              
                INDEX RANGE SCAN     PO_DISTRIBUTIONS_N3     76             2                              
            TABLE ACCESS BY INDEX ROWID     GL_CODE_COMBINATIONS     1       40       2                              
              INDEX UNIQUE SCAN     GL_CODE_COMBINATIONS_U1     1             1                              
          TABLE ACCESS BY INDEX ROWID     PER_ALL_PEOPLE_F     1       40       3                              
            INDEX RANGE SCAN     PER_PEOPLE_F_ATT2     2             1
    Please give me a solution for this. Which hints should I use in this query?
    Thanks in advance.

    I have a feeling this will lead us nowhere, but let me try for the last time.
    Tuning a query is not about trying out all available index hints in the hope that one of them makes the query fly. It is about diagnosing the query: see what it does and see where the time is being spent. Only after you know where the time is being spent can you effectively do something about it (if it is not tuned already).
    So please read about explain plan, SQL*Trace and tkprof, and start diagnosing where your problem is.
    Regards,
    Rob.
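    As a hedged sketch of the SQL*Trace route mentioned above (the tracefile identifier is just an arbitrary label; event 10046 at level 8 includes wait events):
    ALTER SESSION SET tracefile_identifier = 'slow_po_query';
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- run the slow query here, then switch tracing off:
    ALTER SESSION SET events '10046 trace name context off';
    The resulting trace file in the user dump destination can then be formatted with tkprof to show where the 45 minutes are really spent.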

  • F4 Help in Variable Screen during Query Execution

    Hi All,
    We are executing queries through WAD. The F4 help in the variable screen during query execution is taking a lot of time.
    SAP Note 661251 suggests changing the F4 mode to M; we need to change the booked values parameter.
    I looked into the standard WAD web template but I'm not seeing any option for the booked values parameter.
    Please suggest where I need to go to change the parameter. Thanks.
    Regards,
    Vivek

    I don't have much experience with WAD, but there should be a query behind it in BEx Query Designer.
    There you can choose the characteristic that takes time. Select the "Advanced" tab on the right-hand side.
    Under "Filter Value Selection During Query Execution" you will get 4 options.
    BTW, if the query is on a DSO, creating a secondary index on the affected characteristic would also resolve your problem.
    Regards
    Anindya

  • Query Execution/Elapsed Time and Oracle Data Blocks

    Hi,
    I have created 3 tables with one column only. As an example Table 1 below:
    SQL> create table T1 ( x char(2000));
    So 3 tables are created in this way i.e. T1,T2 and T3.
    T1 = in the default database tablespace of 8k (11g v11.1.0.6.0 - Production) (O.S=Windows).
    T2 = I created in a Tablespace with Blocksize 16k.
    T3 = I created in a Tablespace with Blocksize 4k. In the same Instance.
    Each table has approx. 500 rows (so the table sizes are the same in all cases, to test query execution time). As these 3 tables were created with different data block sizes, the allocated number of data blocks is different in each case.
    T1 =  8k = 256 blocks = 00:00:04.76 (query execution / elapsed time)
    T2 = 16k = 121 blocks = 00:00:04.64
    T3 =  4k = 490 blocks = 00:00:04.91
    Table Access is FULL i.e. I have used select * from table_name; in all 3 cases. No Index nothing.
    My question is: why is the query execution time nearly the same in all 3 cases, when Oracle has to read all the data blocks in each case to fetch the records and the allocated number of blocks differs so much?
    In the 16k example Oracle has to read just 121 blocks, yet it takes nearly the same time as reading the 490 blocks of the 4k case.
    This is just one example of different data block sizes. I have around 40 tables in each block-size tablespace and the results are nearly the same. It is very strange to me, because there is a big difference in the number of allocated blocks but the execution time is almost the same, differing only by milliseconds.
    I'll highly appreciate the expert opinions.
    Bundle of thanks in advance.
    Best Regards,

    Hi Chris,
    No, I'm not using separate databases; it's an 8k database with non-standard block sizes of 16k and 4k.
    Actually I wanted to test the elapsed time for these 3 tables, so I tried to create tables of the same size.
    The way I equalized them was by creating a one-column table with CHAR(2000).
    555 MB is the figure I wanted to use for these 3 tables (no special figure, just bigger than the RAM used by my
    database at startup, to be sure the records are not retrieved from the cache).
    So the row size with overhead is 2006 bytes * 290,000 rows = 581,740,000 bytes / 1024 = 568,105 KB / 1024 = 555 MB.
    Through this calculation I thought that would be the total table size, so I created the same number of rows in all 3 block sizes.
    If that is wrong then what a mess, because I have been calculating table sizes this way for the last few months.
    Can you please explain a little how you found the table sizes in the different block sizes? I did understand how you
    calculated the size in MB from these 3 block sizes:
    T8K  =  97177 blocks =  759 MB  (97177 * 8 = 777416 KB / 1024 = 759 MB)
    T16K =  41639 blocks =  650 MB
    BT4K = 293656 blocks = 1147 MB
    Calculating the size of a table is new to me. Can you please tell me how many rows I should create in each of
    these 3 tables to make them equal in MB, so I can test the elapsed time?
    Then I'll run my test again and post the results here, because if I have calculated the table sizes wrongly there is no point talking about elapsed time; first I must equalize the table sizes properly.
    SQL> select sum(bytes)/1024/1024 "Size in MB" from dba_segments
      2  where segment_name = 'T16K';
    Size in MB
           655
    Is the above SQL correct for calculating the size, or is it a valid alternative to your method of calculating it?
    I created the same table again with everything the same, and the result is:
    SQL> select num_rows, blocks from user_tables where table_name = 'T16K';
      NUM_ROWS     BLOCKS
        290000      41703
    64 more blocks are allocated this time, so maybe that's why it is showing a total size of 655 instead of 650.
    Thanks alot for your help.
    Best Regards,
    KAm
    Edited by: kam555 on Nov 20, 2009 5:57 PM
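    As a hedged aside on the size check: the segment-level view already accounts for the block size, so something along these lines (the table names follow the ones used above; adjust to the actual names) shows blocks, block size and MB side by side:
    SELECT s.segment_name,
           t.block_size,
           s.blocks,
           ROUND(s.bytes / 1024 / 1024) AS size_mb
    FROM   dba_segments s
    JOIN   dba_tablespaces t ON t.tablespace_name = s.tablespace_name
    WHERE  s.segment_name IN ('T8K', 'T16K', 'T4K');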

  • Execution time, elapsed time  of an sql query

    Can you please tell me how to get the execution time / elapsed time of an SQL query?

    user8680248 wrote:
    I am running a query in the database.
    I would like to know how long the query takes to complete.
    Why? That answer can be totally meaningless, as the VERY SAME query on the VERY SAME data on the VERY SAME database in the VERY SAME Oracle session can and will show DIFFERENT execution times.
    So why do you want to know a specific query's execution time? What do you expect that to tell you?
    If you mean that you want to know how long an existing query being executed is still going to take - that's usually quite difficult to determine. Oracle does provide a view on so-called long operations. However, only certain factors of a query's execution will trigger that this query is a long operation - and only for those specific queries will there be long operation stats that provide an estimated completion time.
    If your slow and long running query does not show in long operation, then Oracle does not consider it a long operation - it fails to meet the specific criteria and factors required as a long operation. This is not a bug or an error. Simply that your query does not meet the basic requirements to be viewed as a long operation.
    Oracle however provides the developer with the means to create long operations (using PL/SQL). You need to know and do the following:
    a) need to know how many units of work to do (e.g. how many fetches/loop iterations/rows your code will process)
    b) need to know how many units of work thus far done
    c) use the DBMS_APPLICATION_INFO package to create a long operation and continually update the operation with the number of work units thus far done
    It is pretty easy to implement this in PL/SQL processing code (assuming requirements a and b can be met) - and provide long operation stats and estimated completion time for the DBA/operators/users of the database, waiting on your process to complete.
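    A minimal sketch of that approach (the loop body, operation name and unit count are illustrative; the package call itself is the standard one):
    DECLARE
      l_rindex    BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
      l_slno      BINARY_INTEGER;
      l_totalwork NUMBER := 1000;  -- (a) total units of work, known up front
    BEGIN
      FOR i IN 1 .. l_totalwork LOOP
        -- ... process one unit of work here ...
        -- (b) and (c): report progress so the run appears in v$session_longops
        dbms_application_info.set_session_longops(
          rindex    => l_rindex,
          slno      => l_slno,
          op_name   => 'Demo batch processing',
          sofar     => i,
          totalwork => l_totalwork,
          units     => 'iterations');
      END LOOP;
    END;
    /
    Querying v$session_longops while this runs shows SOFAR, TOTALWORK and an estimated time remaining.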

  • Identifying query execution time

    Hello,
    I would like to know how I can figure out the actual query execution time in Oracle.
    Regards

    Oracle Documentation is your best friend.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2113.htm#i1417057
    ELAPSED_TIME --> Elapsed time (in microseconds) used by this cursor for parsing, executing, and fetching
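    For example, a hedged query along these lines gives the average elapsed time per execution (the sql_text filter is only illustrative; identify the cursor however suits you):
    SELECT sql_id,
           executions,
           ROUND(elapsed_time / NULLIF(executions, 0) / 1000000, 3) AS avg_elapsed_sec
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT /* my_query */%';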
    Asif Momen
    http://momendba.blogspot.com

  • How to get min,max,avg time for query execution?

    Dear Friends,
    In AWR we get the average time taken to execute a particular query; how can one get the minimum and maximum time taken by the query across its executions?
    Thanks

    I would run the sql in a cursor for loop, to get a quite reasonable execution time without changing the actual execution plan:
    SQL> show user;
    USER is "HR"
    SQL> set timing on
    SQL> select count(*) from all_objects;
      COUNT(*)
         55565
    Elapsed: 00:00:03.91
    SQL> var p_sql varchar2(200)
    SQL> exec :p_sql := 'select * from all_objects'
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    SQL> declare
      2  t1 timestamp := systimestamp;
      3  begin
      4    execute immediate 'begin for c in (' || :p_sql || ') loop null; end loop; end;';
      5    dbms_output.put_line('Elapsed: ' || (systimestamp - t1));
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:03.53
    SQL> declare
      2  t1 timestamp := systimestamp;
      3  begin
      4    execute immediate 'begin for c in (' || :p_sql || ') loop null; end loop; end;';
      5    dbms_output.put_line('Elapsed: ' || (systimestamp - t1));
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:02.75
    SQL> declare
      2  t1 timestamp := systimestamp;
      3  begin
      4    execute immediate 'begin for c in (' || :p_sql || ') loop null; end loop; end;';
      5    dbms_output.put_line('Elapsed: ' || (systimestamp - t1));
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:02.73
    SQL> declare
      2  t1 timestamp := systimestamp;
      3  begin
      4    execute immediate 'begin for c in (' || :p_sql || ') loop null; end loop; end;';
      5    dbms_output.put_line('Elapsed: ' || (systimestamp - t1));
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:02.66
    SQL> ---- alter system flush shared_pool;
    SQL> declare
      2  t1 timestamp := systimestamp;
      3  begin
      4    execute immediate 'begin for c in (' || :p_sql || ') loop null; end loop; end;';
      5    dbms_output.put_line('Elapsed: ' || (systimestamp - t1));
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:02.80
    SQL> declare
      2  t1 timestamp := systimestamp;
      3  begin
      4    execute immediate 'begin for c in (' || :p_sql || ') loop null; end loop; end;';
      5    dbms_output.put_line('Elapsed: ' || (systimestamp - t1));
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:02.64
    SQL>
    https://forums.oracle.com/thread/705536?start=15&tstart=0
    Regards
    Girish Sharma

  • Oracle 11G - Oracle AWR query execution time in report

    I have used the AWR tool of Oracle 11g. I exported the historical query statistics of the production database using awrextr.sql and then loaded the exported dump file using awrload.sql.
    Then I used awrrpti.sql and awrsqrpi.sql to generate reports for the SQL queries. Everything is working fine and the generated reports are very helpful, but the report does not show the exact time when a query was executed. How can I get the actual time when the query was executed?
    any help please ?

    If you had consulted the Oracle Reference Manual for the view descriptions, you would have seen that your question is a rhetorical one, with the answer being NO.
    This is because every statement can be executed one or more times, and Oracle would need to keep track of all individual executions.
    I do agree most 'applications' do not use bind variables, and consequently only have unique statements, but Oracle didn't take that into account, and rightly so.
    Sybrand Bakker
    Senior Oracle DBA
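    For what it is worth, the closest AWR gets is the snapshot interval in which executions of a statement were recorded, not an exact timestamp. A hedged sketch, with an illustrative sql_id:
    SELECT sn.begin_interval_time,
           sn.end_interval_time,
           st.executions_delta,
           ROUND(st.elapsed_time_delta / 1000000, 2) AS elapsed_sec_delta
    FROM   dba_hist_sqlstat  st
    JOIN   dba_hist_snapshot sn ON  sn.snap_id = st.snap_id
                                AND sn.dbid = st.dbid
                                AND sn.instance_number = st.instance_number
    WHERE  st.sql_id = 'an1example2id'
    ORDER  BY sn.begin_interval_time;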

  • Oracle View that stores the Query execution time

    Hi Gurus
    I am using Oracle 10g on Unix. I would like to know which data dictionary view stores the execution time of a query. If it is not stored, then how can I find the query execution time other than with the SET TIMING ON command? What is the use of elapsed time, and what is the difference between execution time and elapsed time? How do I calculate the execution time of a query?
    THanks
    Ram

    If you have a specific query you're going to run in SQL*Plus, just do
    a 'set timing on' before you execute the query.
    If you've got application SQL coming in from all over the place, you can
    identify specific SQL in V$SQL and look at ELAPSED_TIME/EXECUTIONS
    to get an average elapsed time.
    If you've got an application running SQL, and you need to know the
    specific timing of a specific execution (as opposed to an average),
    you can use DBMS_SUPPORT to set trace in the session that your
    application is running in, and then use TkProf to process the resulting
    trace file.
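    A hedged sketch of that last option: DBMS_SUPPORT is not always installed, so this uses the documented DBMS_MONITOR package (10g onwards) instead; the sid and serial# values are illustrative and would come from v$session:
    -- enable tracing (including waits) in the application's session
    EXEC dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE);
    -- ... let the application run its SQL ...
    EXEC dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);
    The trace file written to the user dump destination can then be processed with TkProf as described above.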

  • Slow query execution time

    Hi,
    I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
    The query executes within a second from RapidSQL. The problem I'm facing is it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions, it executes properly.
    The query:
    SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
    FROM MyTable
    WHERE SomeDate = date_entered_by_user AND SomeString IN ("aaa", "bbb")
    GROUP BY aaa, bbb
    I have an existing clustered index on the SomeDate and SomeString fields.
    To check, I replaced the WHERE clause with
    WHERE SomeDate = date_entered_by_user AND SomeString = "aaa"
    No improvements.
    What could be the problem?
    Thank you,
    Lobo

    It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
    When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
    When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT|INSERT|UPDATE|DELETE statements and create the plan (over and over again).
    The stored execution plan will enable the engine to execute the query faster.
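    As a hedged illustration only (the thread never names the RDBMS, so SQL Server-style syntax is used here, and the procedure and parameter names are made up):
    CREATE PROCEDURE dbo.GetDailySummary
        @SomeDate   date,
        @SomeString varchar(10)
    AS
    BEGIN
        -- same statement as above, wrapped so the engine can reuse a cached plan
        SELECT aaa, bbb, SUM(ccc) AS sum_ccc, SUM(ddd) AS sum_ddd
        FROM   MyTable
        WHERE  SomeDate = @SomeDate
          AND  SomeString = @SomeString
        GROUP BY aaa, bbb;
    END;
    The Java application would then call the procedure (for example via a CallableStatement) instead of sending the raw SELECT each time.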

  • Query execution time estimation....

    Hi All,
    Is it possible to estimate query execution time using explain plan?
    Thanks in advance,
    Santosh.

    The cost estimated by the cost-based optimizer actually represents the time it takes to process the statement, expressed in units of the single-block read time. This means that if you know the estimated time a single-block read request requires, you can translate the cost into an actual time.
    Starting with Oracle 9i this information (the time to perform single block/multi block read requests) is actually available if you gather system statistics.
    And this is what 10g actually does, as it shows an estimated TIME in the explain plan output based on these assumptions. Note that 10g by default uses system statistics, even if they are not explicitly gathered. In this case Oracle 10g uses the NOWORKLOAD statistics generated on the fly at instance startup.
    Of course the time estimates shown by Oracle 10g may not even be close to the actual execution time as it is only an estimate based on a model and input values (statistics) and therefore might be way off due to several reasons, the same applies in principle to the cost shown.
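    A small, hedged illustration (the query and schema are arbitrary): on 10g the plan output includes a TIME column derived from the cost and system statistics, which is exactly this estimate.
    EXPLAIN PLAN FOR
      SELECT * FROM employees WHERE department_id = 10;
    SELECT * FROM TABLE(dbms_xplan.display);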
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Table definition: can the datatype size affect query execution time?

    Hello Oracle gurus,
    I have a question. Suppose I create a table with more than 100 columns
    and give every column the datatype VARCHAR2(4000).
    The actual data in every column is never more than 300 characters. In this case,
    if I execute just a SELECT query,
    does the Oracle cursor internally read up to 4000 characters per column,
    or does it read character by character and stop at the last one (e.g. character 300)?
    If I reduce the VARCHAR2 size to 300 instead of 4000 in the table definition,
    will it affect the SELECT query execution time?
    Thanks in advance.

    When you declare a VARCHAR2 column you specify the maximum size that can be stored in that column. The database stores the actual number of bytes (plus 2 bytes for the length). So if you insert a 300 character string, only 302 bytes will be used (assuming the database character set is a single-byte character set).
    SY.
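    A hedged way to see this for yourself (table and column names are made up):
    CREATE TABLE t_wide  (c VARCHAR2(4000));
    CREATE TABLE t_small (c VARCHAR2(300));
    INSERT INTO t_wide  VALUES (RPAD('x', 300, 'x'));
    INSERT INTO t_small VALUES (RPAD('x', 300, 'x'));
    -- vsize() reports the bytes actually stored, which is the same in both
    -- tables regardless of the declared maximum length
    SELECT (SELECT VSIZE(c) FROM t_wide)  AS bytes_wide,
           (SELECT VSIZE(c) FROM t_small) AS bytes_small
    FROM   dual;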

  • How to get query execution time without running...?

    Hi,
    I have a requirement as follows:
    I have 3 SQL statements. I need to execute only the one whose execution time is the lowest.
    Can anyone help me: how can I get the query execution time without running the query and without using explain plan?
    Thanks,
    Rajesh

    Kim Berg Hansen wrote:
    But you have ruled out explain plan for some reason, so I cannot help you.
    The OP might get some answers if the query has been executed before - but only since the last restart. Check the V$SQL dynamic performance view for SQL_TEXT = your query. Then ROUND(ELAPSED_TIME / EXECUTIONS / 1000000) will give you the average elapsed time.
    SY.
    Edited by: Solomon Yakobson on Apr 3, 2012 8:44 AM
