QUERY EXECUTION START TIME

A session is executing a query and I want to know at what time that query started executing. It is a long-running query and has not yet finished execution.
Thanks.

Hi,
try this SQL. It creates a view over V$SESSION that shows, per user session, how many whole hours (EXEC_TIME) and remaining minutes (MIN) the current call has been running, based on LAST_CALL_ET:
CREATE OR REPLACE VIEW time ("SCHEMA",
                             ora_user,
                             nt_user,
                             SID,
                             serial#,
                             machine,
                             status,
                             exec_time,
                             MIN,
                             logon_time,
                             lockwait,
                             TYPE,
                             program)
AS
   SELECT SUBSTR (schemaname, 1, 10),
          SUBSTR (username, 1, 10),
          SUBSTR (osuser, 1, 10),
          SID,
          serial#,
          SUBSTR (machine, 1, 30),
          SUBSTR (status, 1, 10),
          TO_CHAR (FLOOR (last_call_et / 3600), '990'),            -- whole hours since the current call started
          TO_CHAR (FLOOR (MOD (last_call_et, 3600) / 60), '990'),  -- remaining minutes
          logon_time,
          lockwait,
          TYPE,
          program
     FROM v_$session                -- requires a direct SELECT grant on SYS.V_$SESSION
    WHERE username IS NOT NULL;    -- user sessions only; filter on a specific username if needed
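If you only need the wall-clock time at which the currently running statement started, a direct query against V$SESSION may be enough. This is just a rough sketch: it assumes you can query V$SESSION, and the SQL_EXEC_START column only exists in 11g and later (on older releases use the LAST_CALL_ET arithmetic instead):
SELECT sid,
       serial#,
       sql_id,
       sql_exec_start,                                  -- exact start time of the current SQL execution (11g+)
       SYSDATE - last_call_et / 86400 AS approx_start   -- now minus the seconds elapsed since the current call began
  FROM v$session
 WHERE status = 'ACTIVE'
   AND username IS NOT NULL;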

Similar Messages

  • Query Execution/Elapsed Time and Oracle Data Blocks

    Hi,
    I have created 3 tables with one column only. As an example Table 1 below:
    SQL> create table T1 ( x char(2000));
    So 3 tables are created in this way i.e. T1,T2 and T3.
    T1 = in the default database tablespace of 8k (11g v11.1.0.6.0 - Production) (O.S=Windows).
    T2 = I created in a Tablespace with Blocksize 16k.
    T3 = I created in a Tablespace with Blocksize 4k. In the same Instance.
    Each table has approx. 500 rows (so the table sizes are the same in all cases, to test query execution time). As these 3 tables are created with different data block sizes, the ALLOCATED number of data blocks differs in each case:
    T1 =  8k = 256 blocks = 00:00:04.76 (query execution/elapsed time)
    T2 = 16k = 121 blocks = 00:00:04.64
    T3 =  4k = 490 blocks = 00:00:04.91
    Table access is FULL, i.e. I have used select * from table_name; in all 3 cases. No index, nothing.
    My question is: why is the query execution time nearly the same in all 3 cases, when Oracle has to read all the data blocks in each case to fetch the records and there is such a big difference in the allocated number of blocks?
    In the 16k block size example, Oracle has to read just 121 blocks, yet it takes nearly the same time as reading the 490 blocks of the 4k case.
    This is just one example of different data block sizes. I have around 40 tables in each block-size tablespace and the results are nearly the same. It's very strange to me, because there is a big difference in the number of allocated blocks, but the execution time is almost the same, differing only by milliseconds.
    I'll highly appreciate the expert opinions.
    Bundle of thanks in advance.
    Best Regards,

    Hi Chris,
    No, I'm not using separate databases; it's an 8k database with non-standard block sizes of 16k and 4k.
    Actually I wanted to test the Elapsed time of these 3 tables, so for that I tried to create the same size
    tables.
    The way I equalized them is that I created a one-column table with char(2000).
    555 MB is the figure I wanted to use for these 3 tables ( no special figure, just to make it bigger than the
    RAM used for my db at the db startup to be sure of not retrieving the records from cache).
    so the row size with overhead is 2006 bytes: 2006 * 290,000 rows = 581,740,000 bytes / 1024 = 568,105 KB / 1024 = 555 MB.
    Through this calculation I thought that would be the total table size, so I created the same number of rows in all 3 block sizes.
    If that's wrong then what a mess, because I have been calculating table sizes this way for the last few months.
    Can you please explain a little how you found out the table sizes in the different block sizes? Although I understood how you
    calculated the size in MB from these 3 block counts:
    T8K  = 97177 blocks  = 759 MB  (97177 * 8 = 777,416 KB / 1024 = 759 MB)
    T16K = 41639 blocks  = 650 MB
    BT4K = 293656 blocks = 1147 MB
    Calculating the size of a table is new to me. Can you please tell me how many rows I should create in each of
    these 3 tables to make them equal in MB, so I can test the elapsed time?
    Then I'll run my test again and post the results here, because if I've calculated the table sizes wrongly then there is no point talking about elapsed time. First I must equalize the table sizes properly.
    SQL> select sum(bytes)/1024/1024 "Size in MB" from dba_segments
      2  where segment_name = 'T16K';
    Size in MB
    655
    Is the above SQL correct for calculating the size, or is it a correct alternative to your method of calculating the size?
    I created the same table again with everything the same, and the result is:
    SQL> select num_rows, blocks from user_tables where table_name = 'T16K';
      NUM_ROWS     BLOCKS
        290000      41703
    64 more blocks are allocated this time, so maybe that's why it's showing a total size of 655 instead of 650.
    Thanks a lot for your help.
    Best Regards,
    KAm
    Edited by: kam555 on Nov 20, 2009 5:57 PM
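    As a rough sketch of that size arithmetic (assuming you can query DBA_SEGMENTS and DBA_TABLESPACES), the allocated size per table can be read straight from the dictionary instead of being worked out by hand:
    SELECT s.segment_name,
           t.block_size,
           s.blocks,
           s.blocks * t.block_size / 1024 / 1024 AS size_mb_from_blocks,  -- blocks * block size, in MB
           s.bytes / 1024 / 1024                 AS size_mb_from_bytes    -- same figure taken directly from BYTES
      FROM dba_segments    s
      JOIN dba_tablespaces t ON t.tablespace_name = s.tablespace_name
     WHERE s.segment_name IN ('T8K', 'T16K', 'BT4K');   -- adjust to your test table names
    The two MB columns should agree; once they are roughly equal across the three tables, comparing elapsed times is a fair test.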

  • Query execution time estimation....

    Hi All,
    Is it possible to estimate query execution time using explain plan?
    Thanks in advance,
    Santosh.

    The cost estimated by the cost-based optimizer actually represents the time it takes to process the statement, expressed in units of the single-block read time. Which means that if you know the estimated time a single-block read request requires, you can translate the cost into an actual time.
    Starting with Oracle 9i this information (the time to perform single block/multi block read requests) is actually available if you gather system statistics.
    And this is what 10g actually does, as it shows an estimated TIME in the explain plan output based on these assumptions. Note that 10g by default uses system statistics, even if they are not explicitly gathered. In this case Oracle 10g uses the NOWORKLOAD statistics generated on the fly at instance startup.
    Of course the time estimates shown by Oracle 10g may not even be close to the actual execution time as it is only an estimate based on a model and input values (statistics) and therefore might be way off due to several reasons, the same applies in principle to the cost shown.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
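    To see the TIME estimate Randolf describes, a minimal sketch (10g or later, assuming a PLAN_TABLE is available; the EMPLOYEES query is only a placeholder):
    EXPLAIN PLAN FOR
    SELECT * FROM employees WHERE department_id = 10;
    SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);
    -- The plan output includes a Time column (e.g. 00:00:02) derived from the cost and the
    -- single/multi block read times recorded in the system statistics.
    -- Those read times can be gathered explicitly, e.g. over a 60-minute representative workload:
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS ('INTERVAL', interval => 60);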

  • Data Driven Subscriptions Error - the query processor could not start the necessary thread resources for parallel query execution

    Hi,
    We are getting the following error when certain data driven subscriptions are fired off: "the query processor could not start the necessary thread resources for parallel query execution".  I've read other posts that have the same error, and
    the solution usually involves adjusting MaxDOP to limit the number of queries that are fired off in parallel.  
    Unfortunately, we cannot change this setting on our server for performance reasons (outside of data driven subscriptions, it negatively impacts our ETL processing times).  We tried putting query hints like "OPTION (MAXDOP 2);" in the reports
    that are causing the error, but it did not resolve the problem.
    Are there any settings within Reporting Services that can be adjusted to limit the number of subscriptions that get fired off in parallel?
    Any help is appreciated - thanks!

    Yes, that is correct.  It's a painful problem, because you don't know which specific subscription failed. For example, we have a data driven subscription that sends out about 800 emails. Lately, we've been having a handful of them fail. You don't know
    which ones out of the 800 failed though, even from the RS log files - all it tells you is that "the
    query processor could not start the necessary thread resources for parallel query execution".
    Thanks, I'll try changing <MaxQueueThreads> and will let you know if it works.
    On a side note: I've noticed that it is only reports with cascading parameters (ex. where parameter 2 is dependent on the selection from parameter 1) that get this error message...

  • Start time of query

    hi all,
    How can one find the start time of a query?
    v$sql gives only FIRST_LOAD_TIME.

    user11232525 wrote:
    I am currently using SET TIME ON but need to write it down before each query... is there a command for this, so that once I finish my query it displays the start time of execution?
    If you are using SQL*Plus:
    SET TIME ON -- displays a timestamp at every prompt
    SET TIMING ON -- displays statement elapsed time
    For example:
    SQL> SET TIME ON
    12:06:55 SQL> SET TIMING ON
    12:07:03 SQL> SELECT COUNT(*) FROM DBA_OBJECTS
    12:07:18   2  /
      COUNT(*)
         69354
    Elapsed: 00:00:00.81
    12:07:19 SQL>
    SY.

  • Query execution time

    Dear SCN,
    I am new to the BOBJ environment. I have created a Webi report on top of a BEx query using a BICS connection. The BEx query is built for vendor ageing analysis. The BEx query itself takes very little time to execute (max 1 min), but the Webi report takes around 5 minutes when I click refresh. I have not used any conditions, filters, or restrictions at the Webi level; they are all done at the BEx level.
    Please let me know techniques to optimize the query execution time in Webi. Currently we are on BO 4.0.
    Regards,
    PRK

    Hi Praveen
    Go through this document for performance optimization using BICS connection
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&…

  • Calculate report execution start & end time (Not use SE30)

    Dear all Abaper,
    I am writing a report which records the execution start & end times.
    The following is a part of my coding.
    Data : GV_ST_TIME LIKE SY-UZEIT.  "START TIME
    Data : GV_EN_TIME LIKE SY-UZEIT.  "END TIME
    START-OF-SELECTION.
    GV_ST_TIME = SY-UZEIT.
    END-OF-SELECTION.
    GV_EN_TIME = SY-UZEIT.
    WRITE: / 'START TIME:', GV_ST_TIME.
    WRITE: / 'END TIME:', GV_EN_TIME.
    Suppose the report executes for 5 seconds,
    from 00:00:00 to 00:00:05.
    But on the output screen, the start time and end time always display the same time (00:00:05) with the above coding.
    However, in debugging mode, the start time and
    end time are different and really counted.
    Does anyone know the reason, and how to fix it if I don't want to use SE30?
    Thx in advance~

    Hi boris,
    1. minor problem
    2. we have to REFRESH the time value
    3. using GET TIME.
       (Then it will work correctly)
    4.
    START-OF-SELECTION.
    GET TIME.
    GV_ST_TIME = SY-UZEIT.
    END-OF-SELECTION.
    GET TIME.
    GV_EN_TIME = SY-UZEIT.
    regards,
    amit m.

  • Report execution start and end date/time

    Hi All,
    How can one find execution start and end date/times for all reports? Basically I am looking to see what reports were run on a day, when and how long it took for each to complete.
    Thank you.
    Denis

    Hi,
    The "eul5_batch_reports" holds the data about the scheduled report and if you don't have any so you will not have any data.
    Take a look at "EUL5_QPP_STATS"
    for example:
    select
    qpp.qs_doc_name,
    qpp.qs_doc_details,
    fu.user_name Ran_by,
    qpp.qs_created_date Start_run,
    qpp.qs_doc_owner Doc_owner,
    qpp.qs_num_rows rows_fetch,
    qpp.qs_est_elap_time estimated_time,
    qpp.qs_act_elap_time Run_time,
    qpp.qs_act_cpu_time Cpu_time
    from eul_us.EUL5_QPP_STATS qpp,
    fnd_user fu
    where substr(qpp.qs_created_by,2,10)=fu.user_id
    order by qs_created_date

  • Oracle View that stores the Query execution time

    Hi Gurus
    I am using Oracle 10g on Unix. I would like to know which data dictionary view stores the execution time of a query. If it is not stored, then how can I find the query execution time other than with the SET TIMING ON command? What is the use of elapsed time, and what is the difference between execution time and elapsed time? How do I calculate the execution time of a query?
    THanks
    Ram

    If you have a specific query you're going to run in SQL*Plus, just do
    a 'set timing on' before you execute the query.
    If you've got application SQL coming in from all over the place, you can
    identify specific SQL in V$SQL and look at ELAPSED_TIME/EXECUTIONS
    to get an average elapsed time.
    If you've got an application running SQL, and you need to know the
    specific timing of a specific execution (as opposed to an average),
    you can use DBMS_SUPPORT to set trace in the session that your
    application is running in, and then use TkProf to process the resulting
    trace file.
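    As a rough sketch of that V$SQL approach (ELAPSED_TIME is in microseconds; the LIKE filter is just a hypothetical way of locating the statement):
    SELECT sql_id,
           executions,
           elapsed_time / 1e6                          AS total_elapsed_sec,
           elapsed_time / NULLIF (executions, 0) / 1e6 AS avg_elapsed_sec   -- average elapsed time per execution
      FROM v$sql
     WHERE sql_text LIKE 'SELECT /* my_report */%'     -- hypothetical marker comment used to find the statement
     ORDER BY elapsed_time DESC;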

  • Master data taking Long time in Query Execution

    hello Experts
    I have an issue while executing a query.
    The input parameter for the query is the 0CUSTOMER variable. When I try to select a value from the "Select From List" tab, whether I pick the single-value option or any other option, it spends a long time loading the values and then comes out with a short dump.
    I want to know why this is happening.
    Please help me out with this.
    Thanks in advance
    Neha

    Thanks to All
    I have checked the info Object - 0Customer.
    The following settings are there in Bex Explorer tab ::
    Query Def. Filter Value Selection  -  Values in Master Data Table
    Query Execution Filter Val. Selectn  -  Only Posted Values for navigation
    Also, the values for display in the object are only 1,000. This takes time only in the Analyzer, not in the Designer.
    What do I have to do? Please suggest.
    Thanks
    Neha

  • Slow query execution time

    Hi,
    I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
    The query executes within a second from RapidSQL. The problem I'm facing is it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions, it executes properly.
    The query:
    SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
    FROM MyTable
    WHERE SomeDate = date_entered_by_user AND SomeString IN ('aaa', 'bbb')
    GROUP BY aaa, bbb
    I have an existing clustered index on the SomeDate and SomeString fields.
    To check, I replaced the where clause with
    WHERE SomeDate = date_entered_by_user AND SomeString = 'aaa'
    No improvement.
    What could be the problem?
    Thank you,
    Lobo

    It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
    When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
    When you create a stored procedure in a RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT|INSERT|UPDATE|DELETE statements and create the plan (over and over again).
    The stored execution plan will enable the engine to execute the query faster.
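    As a rough illustration of that point only (SQL Server-style syntax, reusing the hypothetical table and columns from this thread; adapt to your RDBMS):
    CREATE PROCEDURE GetDailySums       -- hypothetical name
        @SomeDate   date,
        @SomeString varchar(10)
    AS
    BEGIN
        -- The execution plan for this statement is built once and reused on subsequent
        -- calls, rather than being re-parsed and re-optimized for every ad-hoc execution.
        SELECT aaa, bbb, SUM(ccc) AS sum_ccc, SUM(ddd) AS sum_ddd
        FROM MyTable
        WHERE SomeDate = @SomeDate
          AND SomeString = @SomeString
        GROUP BY aaa, bbb;
    END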

  • How to get the query execution time

    Hi,
    I am new to Oracle and I am trying to get the execution time of a query. I tried the command SET TIMING ON and executed the query, but the time it gives me includes the display of the results. In my case, I ran a query against 50 million records and it takes around 5 hours to display all the results. I would like to know how much time it takes just to execute the query. Please help.
    Thanks
    Ravi.

    Maybe this way ?
    TEST@db102 SQL> set timing on
    TEST@db102 SQL> set autotrace traceonly
    TEST@db102 SQL> select * from foo;
    332944 rows selected.
    Elapsed: 00:00:29.04
    Execution Plan
    Plan hash value: 1245013993
    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |   274K|    46M|  1071   (5)| 00:00:13 |
    |   1 |  TABLE ACCESS FULL| FOO  |   274K|    46M|  1071   (5)| 00:00:13 |
    --------------------------------------------------------------------------
    Note
       - dynamic sampling used for this statement
    Statistics
            288  recursive calls
              0  db block gets
          26570  consistent gets
           4975  physical reads
              0  redo size
       35834383  bytes sent via SQL*Net to client
         244537  bytes received via SQL*Net from client
          22198  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
         332944  rows processed
    TEST@db102 SQL>                                                                               

  • New Field at Query execution time.

    Dear All,
    I need some help with a BEx user exit.
    My requirement is a new field in the BEx report, and that field is a flag (change flag).
    The condition is: IF START DATE (CHAR) IS IN THE CURRENT MONTH (QUERY EXECUTION MONTH)
    THEN FLAG SHOULD BE 'Y' ELSE SPACE.
    I cannot put this field in the cube, as the entire data load logic would need to be changed for this flag.
    I have created a formula variable using an exit, but the code is not working.
    Can you please help with this?
    Requirement:
    If START DATE = CURRENT MONTH THEN FLAG = 'Y' ELSE SPACE ' '.
    Thanks v much in advance.
    Regards.

    I am not sure if I got your question right, but you can use virtual characteristics in the cube. You do not need to reload data, and you can assign the new characteristic to its own dimension or include it in any existing dimension. You will need to reactivate the update rules though...
    Let me know if it helps...!
    Arun

  • Table defination in datatype size can effect on query execution time.

    Hello Oracle gurus,
    I have one question. Suppose I create a table with more than 100 columns
    and give every column the datatype VARCHAR2(4000).
    The actual data in every column is never more than 300 characters, so in this case,
    if I execute a plain SELECT query,
    does Oracle internally read up to 4000 characters for each column,
    or does it read the characters one by one and stop at the last character (e.g. 300)?
    If I reduce the VARCHAR2 size to 300 instead of 4000 in the table definition,
    will it affect SELECT query execution time?
    Thanks in advance.

    When you declare a VARCHAR2 column you specify the maximum size that can be stored in that column. The database stores only the actual number of bytes, plus a small length indicator. So if you insert a 300-character string, only about 300 bytes will be used (assuming the database character set is a single-byte character set).
    SY.
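    A quick way to see this for yourself, a minimal sketch with a hypothetical test table (VSIZE returns the stored size of a value in bytes):
    CREATE TABLE t_width_test (big_col VARCHAR2(4000));          -- hypothetical table
    INSERT INTO t_width_test VALUES (RPAD ('x', 300, 'x'));      -- a 300-character value
    SELECT VSIZE (big_col) AS stored_bytes FROM t_width_test;    -- returns 300, not 4000
    The declared maximum only limits what can be inserted; shorter values do not occupy, or require reading, the full 4000 bytes.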

  • Identifying query execution time

    Hello,
    I would like to know how I can figure out the actual query execution time in Oracle.
    Regards

    Oracle Documentation is your best friend.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2113.htm#i1417057
    ELAPSED_TIME --> Elapsed time (in microseconds) used by this cursor for parsing, executing, and fetching
    Asif Momen
    http://momendba.blogspot.com
