Query Execution Time for a Query causing ORA-1555

Dear Gurus,
I am getting an ORA-01555 error. Earlier I used the query duration reported in the alert log and increased undo_retention, as I did not find the UNDOBLKS column of V$UNDOSTAT to be high at the time the ORA-01555 occurred.
But a new ORA-01555 is now appearing whose query duration exceeds the undo retention time.
My question:
1. Is it possible to accurately find the query duration other than from the alert log file?

Abhishek, since you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful against ORA-01555 under manual rollback segment management.
1- Tune the query. The faster a query runs, the less likely an ORA-01555 is to occur.
2- Look at the processing. If a process reads and updates the same table while committing frequently, then under manual rollback segment management that process would basically create its own ORA-01555, rather than just being the victim of another process changing data and the rollback data being overwritten while the long-running query was still running. With automatic undo management the process can generate more undo than can be held for the undo_retention period, but because the changes are committed, Oracle has been told it does not really need to keep that undo for rolling back a current transaction, so it is discarded to make room for new changes.
If you find item 2 is true, then separating the select from the update will likely eliminate the ORA-01555. You do this by building a driving table that holds the keys of the rows to be updated or deleted, and then using the driver to control access to the target table (see the sketch below).
3- If the cause of the ORA-01555 is, or may be, delayed block cleanout, then select * from the target table prior to running the long-running query.
Realistically you might need to increase the size of the undo tablespace to hold all the change data, and the value of the undo_retention parameter to be longer than the job run time. Which brings us back to option 1: tune every query in the process so that the job run time is reduced to optimal.
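A minimal sketch of the driving-table idea from item 2 (the table, column, and status values are illustrative, not from the original post):
-- Capture the keys once, then loop over the static driver table so the cursor
-- kept open across commits never reads the table that is being changed.
CREATE TABLE driver_keys AS
  SELECT pk_id FROM target_table WHERE status = 'PENDING';

BEGIN
  FOR r IN (SELECT pk_id FROM driver_keys) LOOP
    UPDATE target_table SET status = 'PROCESSED' WHERE pk_id = r.pk_id;
    COMMIT;  -- the frequent commits no longer age out undo needed by the
             -- driving cursor, since driver_keys itself is not being modified
  END LOOP;
END;
/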
HTH -- Mark D Powell --
Dear Mark,
Thanks for the excellent advice. I found that the error is coming because of frequent commits, which is item 2 as you rightly mentioned.
I think I need to keep a watch on the queries that are running; I was just trying to find the execution time of the queries, in case there is any way to find the query duration without running a trace.
regards
abhishek
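
For what it's worth, a rough way to see query durations without a trace is V$UNDOSTAT, which records the longest-running query per statistics interval (the MAXQUERYID column is available in recent releases); a minimal sketch:
-- Each V$UNDOSTAT row covers one statistics interval; MAXQUERYLEN is the length
-- in seconds of the longest query seen in that interval and MAXQUERYID its SQL_ID.
SELECT begin_time,
       end_time,
       maxquerylen AS longest_query_sec,
       maxqueryid  AS longest_query_sql_id,
       undoblks
FROM   v$undostat
ORDER  BY begin_time;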

Similar Messages

  • Is it possible to limit the execution time for a query?

    I have an application that runs a query to gather statistics. The time window is defined by the user. Since the polling period for data collection varies, it is not possible to say that a large time window will result in a large result set; I may have a polling period of 1 minute or a polling period of 1 hour.
    I want to avoid a user executing a query that consumes too many resources and impacts the system's performance in general. Could I stop a query after it takes more than x seconds? Is there a way to write a SQL statement indicating the maximum response time, similar to ROWNUM?

    You can also create an Oracle profile with limited resources and assign it to the Oracle account running the queries (the profile will apply to all queries run by the corresponding user). Resource limits can only be specified in CPU time (not elapsed time) or logical reads.
    See http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96521/users.htm#15451
    and http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_611a.htm#2065932
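    A minimal sketch of such a profile (the profile and user names are made up; CPU_PER_CALL is expressed in hundredths of a second, and the RESOURCE_LIMIT parameter must be TRUE for the limits to be enforced):
    ALTER SYSTEM SET resource_limit = TRUE;

    -- Limit each call to roughly 5 CPU minutes or 1,000,000 logical reads,
    -- whichever is hit first; exceeding a limit raises an error for that call.
    CREATE PROFILE report_limits LIMIT
      cpu_per_call           30000
      logical_reads_per_call 1000000;

    ALTER USER report_user PROFILE report_limits;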

  • Query to find SQL_ID of statement which causes ORA-1555

    Could you please send a query to find which SQL_ID / SQL statement caused the ORA-1555 error.
    Hari

    Look in an AWR report spanning the time frame when it occurred. Number of executions may be blank but the elapsed time will be high.
    You can also find part of the statement and then trace it back.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:1480055079538858::::P11_QUESTION_ID:40115659055475
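    Another rough option, assuming the AWR retention still covers the failure window: DBA_HIST_UNDOSTAT records the number of "snapshot too old" errors per interval and the SQL_ID of the longest-running query in that interval, which is often the offending statement. A sketch:
    SELECT begin_time,
           end_time,
           ssolderrcnt,            -- ORA-01555 count in the interval
           maxquerylen,            -- longest query, in seconds
           maxqueryid              -- its SQL_ID
    FROM   dba_hist_undostat
    WHERE  ssolderrcnt > 0
    ORDER  BY begin_time;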

  • Query execution time

    Dear SCN,
    I am new to the BOBJ environment. I have created a Webi report on top of a BEx query using a BICS connection. The BEx query is built for vendor ageing analysis. The BEx query takes very little time to execute the report (max 1 min), but the Webi report takes around 5 minutes when I click refresh. I have not used any conditions, filters, or restrictions at the Webi level; they are all done at the BEx level.
    Please let me know techniques to optimize the query execution time in Webi. Currently we are on BO 4.0.
    Regards,
    PRK

    Hi Praveen
    Go through this document for performance optimization using a BICS connection:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&…

  • An error has occurred during report processing. (rsProcessingAborted).. Query execution failed for data set

    Hi All,
    I'm facing a strange problem.
    I've developed a few reports. They work fine in the development environment, and after successful testing they were published to the web.
    In the web version all reports execute fine the first time, but if I change any of the parameter values (or even without changing them) and press "View Report", the following error occurs:
    An error has occurred during report processing. (rsProcessingAborted)
    Query execution failed for data set 'dsMLGDB2Odbc'. (rsErrorExecutingCommand)
    For more information about this error navigate to the report server on the local server machine, or enable remote errors
    Please suggest any alternative ways to overcome this issue.
    Thanks in advance.

    In my case the problem was that one virtual machine is for developers and the other for testers. On the developer machine I created a report, saved it as an *.rdl file, and copied it to the testers' machine, where it did not work.
    The error the testers get is:
    Error during the local report processing.
    Could not find a web-based application at http://developersMachine/AnalyticsReports/DataBaseConnector.rsds
    The solution is to use an alternative URL, or in some cases http://localhost/.

  • Query execution failed for data set

    Hi,
    We are using SQL Server 2005 for generating reports. When we run the reports they take a very long time, and after some time this error is shown:
    ReportProcessing.ProcessingAbortedException: An error has occurred during report processing. ---> Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for data set ---> System.Data.SqlClient.SqlException: A severe error occurred on the current command. The results, if any, should be discarded.
    Can you help me out?
    Thanks,
    --Amit

    My team is also facing a similar problem. The RS trace logs report:
    w3wp!processing!13!9/26/2007-15:31:23:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for data set 'msdb'., ;
     Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for data set 'msdb'. ---> System.Data.SqlClient.SqlException: A severe error occurred on the current command.  The results, if any, should be discarded.
    Operation cancelled by user.
    In our system, the Report Server and the database are on different machines. The Report Server accesses the database using a service account that has stored procedure execute permissions on the database.
    The problem occurs only if the query execution time exceeds 5 minutes; otherwise the report is generated successfully.
    I suspected this to be some timeout issue, but I have checked that all timeout settings in the RS config files are at their defaults.
    Any pointers?
    Thanks
    puns

  • Execution time of a SQL query differing a lot between two computers

    Hi,
    The execution time of a query on my computer and on more than 30 other computers is less than one second, but on one of our customers' computers the execution time is more than ten minutes. The databases, data, and queries are the same. I reinstalled SQL Server but the problem remains. My SQL Server version is MS SQL 2008 R2.
    Does anyone have an idea about this problem?

    Hi mahdi,
    Obviously, we can't get enough information to help you troubleshoot this issue, so please describe your issue in more detail so that the community members can help you more efficiently.
    In addition, here is a good article with a checklist for analyzing slow-running queries:
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    SQL Server Profiler and Performance Monitor are also good tools for troubleshooting performance issues; please see:
    Correlating SQL Server Profiler with Performance Monitor:
    https://www.simple-talk.com/sql/database-administration/correlating-sql-server-profiler-with-performance-monitor/
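    For a quick first check (just a sketch; the table name is a placeholder), running the statement with timing and I/O statistics on both a fast machine and the slow one separates compile time, CPU time, elapsed time, and reads:
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    -- Run the slow query here; the messages output then shows parse/compile time,
    -- CPU time, elapsed time, and logical/physical reads per table.
    SELECT COUNT(*) FROM dbo.SomeLargeTable;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;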
    Regards,
    Elvis Long
    TechNet Community Support

  • Slow query execution time

    Hi,
    I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't move to a new one.
    The query executes within a second from RapidSQL. The problem I'm facing is that it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions; it executes properly.
    The query:
    SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
    FROM MyTable
    WHERE SomeDate = date_entered_by_user AND SomeString IN ("aaa","bbb")
    GROUP BY aaa, bbb
    I have an existing clustered index on the SomeDate and SomeString fields.
    To check, I replaced the WHERE clause with
    WHERE SomeDate = date_entered_by_user AND SomeString = "aaa"
    No improvement.
    What could be the problem?
    Thank you,
    Lobo

    It's hard for me to see how a stored proc will address this problem; I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
    When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement, and these things take time. First, it checks that there are no syntax errors in the SQL statement. Second, it checks that all of the tables, columns, and relationships are in order. Third, it formulates an execution plan. This last step takes the most time of the three, but they all take time, and the speed of these processes may vary from product to product.
    When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan has been created it is stored and reused whenever the stored procedure is run. So whenever an application calls the stored procedure, the execution plan already exists; the engine does not have to analyze the SELECT/INSERT/UPDATE/DELETE statement and create the plan over and over again.
    The stored execution plan will enable the engine to execute the query faster.
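    For reference, a sketch of what wrapping the posted query in a stored procedure could look like (SQL Server syntax assumed; the procedure name and parameter type are guesses, not from the original post):
    CREATE PROCEDURE dbo.GetDailySummary
        @SomeDate DATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- The plan compiled for this procedure is cached and reused on each call.
        SELECT aaa, bbb, SUM(ccc) AS sum_ccc, SUM(ddd) AS sum_ddd
        FROM   MyTable
        WHERE  SomeDate = @SomeDate
          AND  SomeString IN ('aaa', 'bbb')
        GROUP BY aaa, bbb;
    END;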

  • Query execution time estimation....

    Hi All,
    Is it possible to estimate query execution time using explain plan?
    Thanks in advance,
    Santosh.

    The cost estimated by the cost-based optimizer actually represents the time it takes to process the statement, expressed in units of the single-block read time. This means that if you know the estimated time a single-block read request requires, you can translate the cost into an actual time.
    Starting with Oracle 9i this information (the time to perform single-block and multi-block read requests) is available if you gather system statistics.
    And this is what 10g actually does, as it shows an estimated TIME in the explain plan output based on these assumptions. Note that 10g uses system statistics by default, even if they have not been explicitly gathered; in that case Oracle 10g uses the NOWORKLOAD statistics generated on the fly at instance startup.
    Of course the time estimates shown by Oracle 10g may not even be close to the actual execution time, since they are only estimates based on a model and input values (statistics) and therefore might be way off for several reasons; the same applies in principle to the cost shown.
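    A minimal sketch of where that estimate shows up (10g or later; the table and column names are illustrative):
    EXPLAIN PLAN FOR
      SELECT *
      FROM   some_big_table
      WHERE  created_date > SYSDATE - 7;

    -- The TIME column in the output is the optimizer's estimate derived from the
    -- cost and the gathered (or NOWORKLOAD) system statistics, not a measured runtime.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);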
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to skip existing execution plan for a query

    Hi,
    I want to skip the existing execution plan for a query that I execute often. I don't want it to use the same execution plan every time. Please let me know if there is any method to skip the existing execution plan.
    Thanks in advance.
    Edited by: 900105 on Dec 1, 2011 4:52 AM

    Change the query so that it is syntactically different but has the same semantics (meaning). That way the CBO will reparse it and you might get a new execution plan.
    One simple way to do that is to add a dummy predicate (45=45) to the WHERE clause. The predicate must be changed every time the query is executed (46=46, 47=47, ...).
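    A tiny sketch of the idea (the table, columns, and bind name are illustrative):
    -- First run:
    SELECT e.empno, e.ename
    FROM   emp e
    WHERE  e.deptno = :dept
    AND    45 = 45;

    -- Next run: change only the dummy predicate so the SQL text differs and is
    -- parsed afresh, which may produce a new execution plan.
    SELECT e.empno, e.ename
    FROM   emp e
    WHERE  e.deptno = :dept
    AND    46 = 46;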
    Iordan Iotzov
    http://iiotzov.wordpress.com/

  • How to get the query execution time

    Hi,
    I am new to Oracle and I am trying to get the execution time of a query. I tried the command SET TIMING ON and executed the query, but the time it gives me includes the display of the results. In my case, I ran a query against 50 million records and it takes around 5 hours to display all the results. I would like to know how much time it takes just to execute the query. Please help.
    Thanks
    Ravi.

    Maybe this way ?
    TEST@db102 SQL> set timing on
    TEST@db102 SQL> set autotrace traceonly
    TEST@db102 SQL> select * from foo;
    332944 rows selected.
    Elapsed: 00:00:29.04
    Execution Plan
    Plan hash value: 1245013993
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |   274K|    46M|  1071   (5)| 00:00:13 |
    |   1 |  TABLE ACCESS FULL| FOO  |   274K|    46M|  1071   (5)| 00:00:13 |
    Note
       - dynamic sampling used for this statement
    Statistics
            288  recursive calls
              0  db block gets
          26570  consistent gets
           4975  physical reads
              0  redo size
       35834383  bytes sent via SQL*Net to client
         244537  bytes received via SQL*Net from client
          22198  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
         332944  rows processed
    TEST@db102 SQL>                                                                               

  • One space changes query execution time significantly. Why?

    Hello,
    my application works on Oracle DB 10.2. Today it started "hanging" on one select query which earlier took 1-2 seconds to execute (I waited for several minutes and then stopped it). The query is executed using the following code:
    PreparedStatement pst = conn.prepareStatement("select ... ? ... ? ... ? ");
    pst.setInt(1, ...);
    pst.setString(2, ...);
    pst.setInt(3, ...);
    ResultSet rs = pst.executeQuery();
    So it started hanging on the executeQuery() method call.
    I started experimenting and discovered that any minimal change to the query (for example, adding one more space next to a space that is already present) makes the query execute in 1-2 seconds again. But if I remove this additional unnecessary space, the query hangs again. This was reproduced with both the web application and a console application. Computing statistics didn't help.
    Several hours later, without any interference, everything became OK again: the execution time of that query returned to 1-2 seconds. But for some time the system was, in fact, unavailable. Why could this happen, how could one space make such a great difference, and how can this situation be prevented in the future?

    As others already mentioned, you might have a problem where for some unknown reason the execution plan changes from fast to bad.
    A completely different idea would be to rewrite the statement; maybe it is possible to find a form where you always get good plans.
    For example, I'm not sure the subquery is needed. If it is, then it is better to use alias names inside the subquery too.
    Suggestion:
    select distinct vv.vv_id, vv.value, vv.vocabs_voc_id, vv.com, vv.status, vv.code,
                vv.main_vv_id, vt.vocabs_tree_node_id as node_id
                from vocs_values vv left join vt_vv vt
                on vv.vv_id = vt.vocs_values_vv_id
                connect by prior vv.main_vv_id = vv.vv_id
                start with (vv.vocabs_voc_id = ?
                                and upper(vv.value) like ?
                                and vv.status = ? );
    When looking in more detail I see that you already use an alias from the outer query inside the subquery. Is this correct? In the version below I added the alias vv2 for the subquery to show you the difference.
    Problem?
    select distinct vv.vv_id, vv.value, vv.vocabs_voc_id, vv.com, vv.status, vv.code,
                vv.main_vv_id, vt.vocabs_tree_node_id as node_id
                from vocs_values vv left join vt_vv vt
                on vv.vv_id = vt.vocs_values_vv_id
                connect by prior vv.main_vv_id = vv.vv_id
                start with vv.vv_id in (select vv2.vv_id
                                   from vocs_values vv2
                                   where vv2.vocabs_voc_id = ?
                                   and upper(vv.value) like ?
                                   and vv2.status = ? );
    Shouldn't you replace upper(vv.value) with upper(vv2.value)?
    Edited by: Sven W. on Nov 5, 2008 1:21 PM
    Edited by: Sven W. on Nov 5, 2008 1:26 PM

  • Can datatype size in the table definition affect query execution time?

    Hello Oracle Gurus,
    I have one question. Suppose I create a table with more than 100 columns and make every column's datatype VARCHAR2(4000), while the actual data in every column is never more than 300 characters. In this case, if I execute only a SELECT query, does the Oracle cursor internally read up to 4000 characters one by one, or does it read character by character and stop at the last character (e.g. character 300)?
    If I reduce the VARCHAR2 size to 300 instead of 4000 in the table definition, will that affect the SELECT query execution time?
    Thanks in advance.

    When you declare a VARCHAR2 column you specify the maximum size that can be stored in that column. The database stores the actual number of bytes (plus 2 bytes for the length). So if you insert a 300-character string, only 302 bytes will be used (assuming the database character set is a single-byte character set).
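    A small sketch illustrating this, assuming a single-byte character set: VSIZE reports the bytes actually stored, which depends on the data rather than on the declared maximum (table names are made up).
    CREATE TABLE t_wide   (c VARCHAR2(4000));
    CREATE TABLE t_narrow (c VARCHAR2(300));

    INSERT INTO t_wide   VALUES (RPAD('x', 300, 'x'));
    INSERT INTO t_narrow VALUES (RPAD('x', 300, 'x'));

    -- Both queries return 300: the declared maximum does not change the storage.
    SELECT VSIZE(c) FROM t_wide;
    SELECT VSIZE(c) FROM t_narrow;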
    SY.

  • Identifying query execution time

    Hello,
    I would like to know how I can figure out the actual query execution time in Oracle.
    Regards

    Oracle Documentation is your best friend.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2113.htm#i1417057
    ELAPSED_TIME --> Elapsed time (in microseconds) used by this cursor for parsing, executing, and fetching
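    For example, a rough sketch against V$SQL that reports the average elapsed time per execution in seconds (the text filter is just a placeholder):
    SELECT sql_id,
           executions,
           ROUND(elapsed_time / NULLIF(executions, 0) / 1000000, 2) AS avg_elapsed_sec
    FROM   v$sql
    WHERE  sql_text LIKE '%big_table%'   -- placeholder: something distinctive from your SQL text
    ORDER  BY elapsed_time DESC;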
    Asif Momen
    http://momendba.blogspot.com

  • How can I reduce BEx Query execution time

    Hi,
    I have a question regarding query execution time in BEx.
    I have a query that takes 45 minutes to 1 hour to execute in BEx Analyzer. This query is run on a daily basis, so I am keen to reduce the execution time. Are there any programs or function modules that can help reduce query execution time?
    Thanks and Regards!

    Hi Sriprakash,
    1. Check if your cube is performance tuned: in the cube's Manage screen from RSA1, Performance tab, check that all indexes and statistics are green; the aggregate indexes should be as well.
    2. Condense (compress) your cubes regularly.
    3. Evaluate the creation of an aggregate with all characteristics used in the query (RSDDV).
    4. Evaluate the creation of a "change run aggregate": based on a standalone navigational attribute (without its basic characteristic in the aggregate), but pay attention to the consequent change run when loading master data.
    5. Partition (physically) your cubes systematically when possible (RSDCUBE, menu, partitioning).
    6. Consider logical partitioning (by year or comp_code or ...) and make use of MultiProviders in order to keep the targets from getting too big.
    7. Consider creating secondary indexes when reporting on an ODS (RSDODS).
    8. Check whether the query runtime is due to the master data read, the InfoProvider itself, the OLAP processor, or any other cause, in transaction ST03N.
    9. Consider improving your master data reads by creating customized indexes (bitmap if possible, depending on your data) on the master data tables and/or attribute SIDs when using navigational attributes.
    10. Check that your Basis team did a good job and applied the proper DB parameters.
    11. Last but not least: fine-tune your data model precisely.
    Hope this gives you an idea.
    Cheers
    Sunil
