Query is taking too much time

Hi,
The following query is taking too much time (more than 30 minutes), working with 11g.
The table has three columns (rid, ida, geometry) and an index has been created on each of them.
The table has around 5,40,000 (540,000) records of point geometries.
Please help me with your suggestions. I want to select duplicate point geometries where ida='CORD'.
SQL> select a.rid, b.rid
       from totalrecords a, totalrecords b
      where a.ida='CORD' and b.ida='CORD'
        and sdo_equal(a.geometry, b.geometry)='TRUE'
        and a.rid != b.rid
      order by 1, 2;
regards

"I have removed some AND conditions that were not necessary." It's just that Oracle can see, for example, that in
a.ida='CORD' AND
b.ida='CORD' AND
a.rid != b.rid AND
sdo_equal(a.geometry, b.geometry)='TRUE'
ORDER BY 1,2;
if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it is all AND'ed together and TRUE AND FALSE = FALSE.
So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), you get a small performance benefit per row evaluated. Too small to notice on a single row, but over 540,000 records it should be noticeable.
"And I have set layer_gtype=POINT." Good, that will help. I forgot about that one (thanks Luc!).
"Now I am facing the problem of DELETING the duplicate point geometries. The following query is taking too much time." What is too much time? Do you need to delete these duplicate points on a daily or hourly basis? Or is this a one-time cleanup action? If it is a one-time cleanup operation, does it really matter if it takes half an hour?
And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
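If you do decide to delete them in bulk, something along these lines may be a starting point. This is only a sketch, assuming the table layout from the first post (it keeps the row with the lowest rid in each group of spatially equal points); verify it on a test copy first:
DELETE FROM totalrecords a
 WHERE a.ida = 'CORD'
   AND EXISTS (SELECT 1
                 FROM totalrecords b
                WHERE b.ida = 'CORD'
                  AND b.rid < a.rid                               -- keep the smallest rid of each group
                  AND sdo_equal(b.geometry, a.geometry) = 'TRUE'  -- spatially identical point
              );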
Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
[ c o d e ]
<code/results here>
[ / c o d e ]
that way the original formatting is kept and it makes things much easier to read.
Regards,
Stefan

Similar Messages

  • Delete query is taking too much time...

    Hi All,
The delete query below is taking at least 1 hour!
    DELETE aux_current_link aux
    WHERE EXISTS (
    SELECT *
    FROM link_trans_cons link2
    WHERE aux.tr_leu_id = link2.tr_leu_id
    AND aux.kind_of_link = link2.kind_of_link
    AND link2.TYPE = 'H');
Table aux_current_link has 284,279 records and 6 normal indexes.
Please help me.
    Subir

    Not even close to enough information.
Look here to see if you can tune the operation: "When your query takes too long ..."
But for a delete you need to understand that the indexes need to be maintained; you have 6 of them, and that requires effort. Also, foreign keys need to be checked to make sure you don't violate any enabled constraints (so you may need an index on the foreign key columns of any child table that references the table you are deleting from).
If you are deleting a HIGH percentage of the table, you may be better off doing a CREATE TABLE AS SELECT query to keep only the rows you want from your table, then dropping your current table, renaming the new one you made, and adding all the indexes to it. A rough sketch of that approach follows.
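Something like this, assuming you want to keep every row that the EXISTS condition would not delete (table and column names taken from your post; test it before touching production, and remember that grants, constraints and triggers have to be recreated too):
CREATE TABLE aux_current_link_new AS
  SELECT aux.*
    FROM aux_current_link aux
   WHERE NOT EXISTS (SELECT 1
                       FROM link_trans_cons link2
                      WHERE aux.tr_leu_id = link2.tr_leu_id
                        AND aux.kind_of_link = link2.kind_of_link
                        AND link2.type = 'H');
DROP TABLE aux_current_link;
RENAME aux_current_link_new TO aux_current_link;
-- then recreate the 6 indexes on the renamed table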
Otherwise, once you've tuned the operation (the query), assuming it can be tuned, it's going to take as long as it needs to take.
    Message was edited by:
    Tubby

  • Discoverer Query is taking too much time

    Hi,
I am having performance problems with some queries in Discoverer (Relational). Discoverer is taking around 30 minutes to run the report, but if I run the same query through TOAD it takes only 5 to 6 seconds. Why is that so?
    Structure of Report:
    The report is using crosstab with 3 dimensions on the left side and 3 dimensions on the page items.
Why is the performance in Discoverer so slow, and how can I improve it?
    Thanks & Kind Regards
    Rana

    Hi all
    Russ' comments are correct. When you use crosstabs or page items, Discoverer has to execute the entire query before it can bring back any results. This is a known factor that should be taken into account when end users create workbooks.
    The following conditions will greatly impact performance:
1. Crosstabs with many items on the left axis
2. Multiple crosstab values
3. Page items with a large set of values
4. Crosstabs or page items that use complex calculations
5. Multiple page items
Thus, users must avoid building worksheets that use too many of the above. As commented previously, this is well documented in the forums and on the Oracle website and should not come as a surprise. If it does, then either suitable training has not been given to the end users or Oracle's own end user documentation has not been read. Section 6 of the Discoverer Plus user guide has the following advice:
    Whether you are using Discoverer Plus Relational to perform ad hoc queries, or to create reports for other end users, you want to minimize the time it takes to run queries and reports. By following a few simple design guidelines, you can maximize Discoverer performance.
    Where possible:
    use tabular reports rather than cross-tabular reports
    minimize the number of page items in reports
avoid wide cross-tabular reports
    avoid creating reports that return tens of thousands of rows
    provide parameters to reduce the amount of data produced
    minimize the number of worksheets in workbooks
    remove extraneous worksheets from workbooks (especially if end users frequently use Discoverer’s export option, see Notes below)
    Notes:
    When end users export data in Discoverer Plus Relational or Discoverer Viewer, they can export either the current worksheet or all the worksheets. In other words, they cannot selectively choose the worksheets to be exported. Remove extraneous worksheets so that extra data is not included when end users export all worksheets.
    I hope this helps
    Regards
    Michael

  • Query is taking too much time for inserting into a temp table and for spooling

    Hi,
I am working on a query optimization project where I have found a query which takes a very long time to execute.
    Temp table is defined as follows:
    DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
    ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)
SELECT C.CastID,
       SO.SalesOrderID,
       PO.ProductionOrderID,
       F.CalculatedWeight,
       PO.ProductionOrderNo,
       SO.SalesOrderNo,
       SC.Name,
       SO.OrderQty
FROM CastCast C
JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
JOIN Sales.ProductionDetail D ON D.ProductionOrderID = PO.ProductionOrderID
LEFT JOIN Sales.SalesOrder SO ON D.SalesOrderID = SO.SalesOrderID
LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
WHERE C.CreatedDate >= @StartDate
  AND C.CreatedDate < @EndDate
The plan shows almost 33% for the Table Insert when I insert the data into the temp table, and then 67% for spooling. I changed 2 LEFT JOINs in the above query to JOINs and tried again. Query execution became a bit faster, but it still needs improvement.
How can I improve it further? Will it be good enough if I create indexes on the columns of the temp table? Or what if I use derived tables? Please suggest.
    -Pep

"How can I improve it further? Will it be good enough if I create indexes on the columns of the temp table? Or what if I use derived tables?"
I suggest you start with index tuning. Specifically, make sure the columns specified in the WHERE and JOIN clauses are properly indexed (ideally clustered or covering, and unique when possible). Changing outer joins to inner joins is appropriate if you don't need the outer joins in the first place.
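As a starting point, something along these lines might help; the index names and column choices below are only illustrative guesses based on the posted query, so pick the real ones from the actual execution plan:
CREATE INDEX IX_CastCast_CreatedDate ON CastCast (CreatedDate) INCLUDE (ProductionOrderID, CastID);
CREATE INDEX IX_ProductionDetail_POID ON Sales.ProductionDetail (ProductionOrderID) INCLUDE (SalesOrderID);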
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Delete query taking too much time

    Hi All,
My delete query is taking too much time: around 1 hour 30 minutes for 1.5 lakh (150,000) records.
I have already dropped the MV log on the table and disabled all the triggers on it.
Moreover, the deletion is based on the primary key.
    delete from table_name where primary_key in (values)
The above is a dummy format of my query.
Can anyone please tell me what other reasons there could be for the query performing that slowly?
Is there anything to check in the DB other than triggers, MV logs and constraints in order to improve the performance?
    Please reply asap.

Delete is the most time-consuming operation, as the whole record has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete is probably adding extra overhead to the process: the IN (values) clause. It would be nice if you could post another dummy of this (values) clause. I would guess it is a subquery, and that in order to obtain this list you have to run an inefficient query.
You can gather the execution plan so you can see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be produced.
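For example (a minimal sketch; substitute your real statement for the dummy one):
EXPLAIN PLAN FOR
  delete from table_name where primary_key in (values);
SELECT * FROM TABLE(dbms_xplan.display);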
    ~ Madrid.

  • Spatial query with sdo_aggregate_union taking too much time

    Hello friends,
The following query is taking too much time to execute.
table1 contains around 2,000 records.
table2 contains 124 rows.
SELECT table1.id,
       table1.txt,
       table1.id2,
       table1.acti,
       table1.geom AS geom
FROM table1
WHERE sdo_relate(
        table1.geom,
        (SELECT sdo_aggr_union(sdoaggrtype(geom, 0.0005)) FROM table2),
        'mask=ANYINTERACT querytype=WINDOW'
      ) = 'TRUE'
I am new to Spatial. I am trying to find the list of geometries that fall within the geometries stored in table2.
    Thanks

Hi, thanks a lot for your reply.
But is it not necessary to use the sdo_aggr_union function to find out whether a geometry in one table lies within a geometry in the other?
Let me give you a clearer picture:
What I am trying to do is this: table1 contains the list of all stations (station information) of a state, and table2 contains the list of city areas. So I want to find the stations that belong to a city.
For this I thought to build the aggregated union of the city areas and then check for any interaction of that final aggregated result with the station geometries, to check whether each station is in a city or not.
I hope this helps you to understand my query.
    Thanks
    I appreciate your efforts.
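For what it is worth, this kind of point-in-area check usually does not need an aggregated union at all: a plain spatial join does the same job and can use the spatial index. A minimal sketch, reusing the table and column names from the post above:
SELECT s.id, s.txt
  FROM table1 s, table2 c
 WHERE sdo_anyinteract(s.geom, c.geom) = 'TRUE';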

  • Query taking too much time with dates??

Hello folks,
I am trying to pull some data using a date condition, and for some reason it is taking too much time to return the data.
   and trunc(al.activity_date) = TRUNC(SYSDATE, 'DD') - 1     -- if I use this, it takes too much time
   and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
   and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- if I use this, it returns the data in a second. Why is that?
How do I get the previous day without using the hardcoded to_date('20101123 000000', 'YYYYMMDD HH24MISS'), if I need to retrieve it faster?

    Presumably you've got an index on activity_date.
    If you apply a function like TRUNC to activity_date, you can no longer use the index.
    Post execution plans to verify.
    and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
    and al.activity_date < TRUNC (SYSDATE, 'DD')
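Alternatively, if you really want to keep the TRUNC around the column, a function-based index lets that form of the predicate use an index as well. Just a sketch (the real table name behind the alias al is not shown in the post):
CREATE INDEX idx_activity_date_trunc ON your_activity_table (TRUNC(activity_date));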

  • Report taking too much time in the portal

Hi friends,
We have developed a report on the ODS, and we publish it on the portal.
The problem is that when the users execute the report at the same time, it takes too much time; because of this the performance is very poor.
Is there any way to sort out this issue? For instance, can we send the report to the individual users' mail IDs so that they do not have to log in to the portal? Or can we create the same report on the cube?
What would be the main difference if the report were made on the cube rather than the ODS?
Please help me.
Thanks in advance,
sridath

Hi
Try this to improve the performance of the query.
Find the query run-time (where to find the query run-time?):
Note 557870 - FAQ: BW Query Performance
Note 130696 - Performance trace in BW
This info may be helpful.
General tips:
Use aggregates and compression.
Use fewer and less complex cell definitions where possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs (restricted and calculated key figures).
3. Avoid many characteristics in the rows.
Use T-codes ST03 or ST03N:
Go to transaction ST03 > switch to expert mode > from the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the records transferred to the front end vs. the records selected in the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
3. To check the performance of the aggregates, see the VALUATION and USAGE columns in the aggregate maintenance.
Open the aggregates and observe the VALUATION and USAGE columns.
The "---" sign is the valuation of the aggregate; you can say -3 is the valuation of the aggregate design and usage. "++" means that its compression is good and it is accessed frequently (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio and access are not so good (performance is not so good). The more plus signs, the more useful the aggregate is and the more queries it satisfies; the more minus signs, the worse the valuation of the aggregate.
If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
In the valuation column, more plus signs mean the aggregate performs well and is worth keeping, while more minus signs mean we had better not use that aggregate.
In the usage column, we can see how much the aggregate has been used by queries.
Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
This is done by implementing the BW Statistics Business Content: you need to install it, feed it data, and use the ready-made reports it provides for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-code DB20, which gives you all the performance-related information, like:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics, execute it, and you will get a STATUID; copy this and look it up in the table.
This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it is a useless aggregate.
6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • While condition is taking too much time

I have a query that returns around 2,100 records (not many!), but when I process my result set with a while condition, it takes too much time (around 30 seconds). Here is the code:
public static GroupHierEntity load(Connection con) throws SQLException {
    internalCustomer = false;
    String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
    // Test for null/empty first, otherwise the startsWith() calls can throw a NullPointerException.
    if (customerNameOfLogger == null || customerNameOfLogger.equals("")
            || customerNameOfLogger.equals("Unavailable")
            || customerNameOfLogger.startsWith("DPI")
            || customerNameOfLogger.startsWith("DUPONT")) {
        internalCustomer = true;
    }
    // System.out.println(" ***************** customer name of logger " + com.photomask.framework.ControlServlet.CUSTOMER_NAME + " internal customer " + internalCustomer);
    // Show all groups to internal customers and only their own customer groups to external customers.
    if (internalCustomer) {
        stmtLoad = con.prepareStatement(sqlLoad);
        ResultSet rs = stmtLoad.executeQuery();
        return new GroupHierEntity(rs);
    } else {
        stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
        stmtLoadExternal.setString(1, customerNameOfLogger);
        stmtLoadExternal.setString(2, customerNameOfLogger);
        // System.out.println("***** sql " + sqlLoadExternal);
        ResultSet rs = stmtLoadExternal.executeQuery();
        return new GroupHierEntity(rs);
    }
}

// Calling code that walks the result:
GroupHierEntity ge = GroupHierEntity.load(con);
while (ge.next()) {
    lvl = ge.getInt("lvl");
    oid = ge.getLong("oid");
    name = ge.getString("name");
    if (internalCustomer && lvl == 2) {
        int i = getAlphaIndex(name);
        super.setAppendRoot(alphaIndex);
    }
    gn = new GroupListDataNode(lvl + 1, oid, name);
    gn.setSelectable(true);
    this.addNode(gn);
    count++;
}
System.out.println("*** count " + count);
ge.close();
    ========================
Then I removed everything in the while body and ran it as below; it still takes the same time (30 secs):
while (ge.next())
{ count++; }
Why is the while condition (ge.next()) taking so much time? Is there a more efficient way of reading the result set?
Thanks,
    bala

I tried all these things. The query is not taking much time (1 sec), but resultset.next() is taking too much time. I measured the time by putting System.out.println at various points to see which part takes how long.
executeQuery() takes only 1 sec. Processing the result set (moving the cursor to the next position) is what takes too much time.
I have similar queries that return some 800 rows, and those only take 1 sec.
I suspect resultset.next(). Is there any other alternative?
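One thing worth checking (an assumption on my part, as the thread does not name the JDBC driver): with the Oracle JDBC driver the default fetch size is only 10 rows, so roughly 2,100 rows cost over 200 network round trips inside next(). Raising the fetch size on the statement before executing it usually cures exactly this symptom:
stmtLoad = con.prepareStatement(sqlLoad);
stmtLoad.setFetchSize(500); // fetch 500 rows per round trip instead of the default 10
ResultSet rs = stmtLoad.executeQuery();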

  • Report is taking too much time when running from parameter form

Dear All,
I have developed a report in Oracle Reports Builder 10g. While running it from Reports Builder, the data comes back very fast.
But if it is run from the parameter form, it takes too much time to format the report as PDF.
Please suggest any configuration or settings if anybody has an idea.
Thanks

    Hi,
The first thing to check is whether the query is running to completion in TOAD. By default, TOAD just selects the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and explain plan to the SQL you ran in TOAD.
Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USER_ENV context are the same, and, if any security packages or VPD policies are used in the SQL, that these have been initialised the same way.
    If you still cannot determine the difference then trace both sessions.
    Rod West

  • Auto Invoice Program taking too much time : problem with update sql

Hi,
Oracle DB version: 11.2.0.3
Oracle EBS version: 12.1.3
Though we have a SEV-1 SR open with Oracle, we have not had much success so far.
We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. While troubleshooting we found one query that takes most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for the same. It is an update query. Please guide.
    Plan
    UPDATE STATEMENT  ALL_ROWSCost: 0  Bytes: 124  Cardinality: 1 
      50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
      27 FILTER 
      26 HASH JOIN  Cost: 8,937,633  Bytes: 4,261,258,760  Cardinality: 34,364,990 
      24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413  Bytes: 446,744,870  Cardinality: 34,364,990 
      23 SORT UNIQUE  Cost: 8,618,413  Bytes: 4,042,339,978  Cardinality: 34,364,990 
      22 UNION-ALL 
      9 FILTER 
      8 SORT GROUP BY  Cost: 5,643,052  Bytes: 3,164,892,625  Cardinality: 25,319,141 
      7 HASH JOIN  Cost: 1,640,602  Bytes: 32,460,436,875  Cardinality: 259,683,495 
      1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      6 HASH JOIN  Cost: 853,567  Bytes: 22,544,143,440  Cardinality: 214,706,128 
      4 HASH JOIN  Cost: 536,708  Bytes: 2,357,000,550  Cardinality: 29,835,450 
      2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008  Bytes: 1,163,582,550  Cardinality: 29,835,450 
      3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314  Bytes: 1,193,526,000  Cardinality: 29,838,150 
      5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951  Bytes: 3,123,197,116  Cardinality: 120,122,966 
      21 FILTER 
      20 SORT GROUP BY  Cost: 2,975,360  Bytes: 877,447,353  Cardinality: 9,045,849 
      19 HASH JOIN  Cost: 998,323  Bytes: 17,548,946,769  Cardinality: 180,916,977 
      13 VIEW VIEW AR.index$_join$_027 Cost: 108,438  Bytes: 867,771,256  Cardinality: 78,888,296 
      12 HASH JOIN 
      10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206  Bytes: 867,771,256  Cardinality: 78,888,296 
      11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322  Bytes: 867,771,256  Cardinality: 78,888,296 
      18 HASH JOIN  Cost: 748,497  Bytes: 3,281,713,302  Cardinality: 38,159,457 
      14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      17 HASH JOIN  Cost: 519,713  Bytes: 1,969,317,900  Cardinality: 29,838,150 
      15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822  Bytes: 716,115,600  Cardinality: 29,838,150 
      16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847  Bytes: 1,253,202,300  Cardinality: 29,838,150 
      25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552  Bytes: 5,158,998,615  Cardinality: 46,477,465 
      41 SORT GROUP BY  Bytes: 75  Cardinality: 1 
      40 FILTER 
      39 MERGE JOIN CARTESIAN  Cost: 11  Bytes: 75  Cardinality: 1 
      35 NESTED LOOPS  Cost: 8  Bytes: 50  Cardinality: 1 
      32 NESTED LOOPS  Cost: 5  Bytes: 30  Cardinality: 1 
      29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3  Bytes: 22  Cardinality: 1 
      28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2  Cardinality: 1 
      31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2  Bytes: 133,114,520  Cardinality: 16,639,315 
      30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1  Cardinality: 1 
      34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 20  Cardinality: 1 
      33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2  Cardinality: 1 
      38 BUFFER SORT  Cost: 9  Bytes: 25  Cardinality: 1 
      37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 25  Cardinality: 1 
      36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      49 SORT GROUP BY  Bytes: 48  Cardinality: 1 
      48 FILTER 
      47 NESTED LOOPS 
      45 NESTED LOOPS  Cost: 7  Bytes: 48  Cardinality: 1 
      43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4  Bytes: 20  Cardinality: 1 
      42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3  Cardinality: 1 
      44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 28  Cardinality: 1 
As per Oracle, they suggested multiple patches, but that has not been helpful. Please suggest how I can tune this query; I don't have much of a clue about query tuning.
    Regards

Hi Paul, my bad. I am sorry I missed it.
The query is as below:
    UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', 
LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
I understand that this could be a seeded query, but my attempt is to tune it.
    Regards

  • Still the report is taking too much time

Hi All,
When I refresh a Webi report, the report takes too much time to refresh (open).
On the back end I have checked all the connections, contexts, cardinalities, joins, conditions, etc., and in Webi I have enabled the 'query stripping' check box.
But the report is still taking too much time, and I have not been able to identify the problem.
Please help me on this.
Thanks in advance.

Hi Mark,
How many queries are there? 2
How many rows are returned? 2000+
Are all measures defined with aggregates? Yes
What is the array fetch size? (Set it to 1000 if it isn't already.)

  • Simple APD is taking too much time in running

Hi All,
We have one APD created on our development system which is taking too much time to run.
This APD fetches data from a query having only 1,200 records and puts it directly into a master-data attribute.
The query runs fine in transaction RSRT and gives its output within 5 seconds, but if I display data over the query in the APD it takes too much time.
The APD takes around 1 hour 20 minutes to run.
Thanks in advance!!

Hi,
When a query runs in an APD it normally takes much, much longer than it takes in RSRT. Run times such as you are describing (5 secs in RSRT and over an hour in the APD) are quite normal; I have seen some of my queries run for several hours in an APD as well.
You just have to wait for it to complete.
Regards,
Suhas

  • Sites Taking too much time to open and shows error

Hi,
I've set up a SharePoint 2013 environment and created a site collection. Everything was working fine, but now, when I try to open that site collection or the Central Admin site, it takes too much time to open a page; most of the time it does not open the page or the Central Admin site at all, and it shows an error.
I even went to the logs folder under the 15 hive, but found nothing useful. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows that error.

This usually happens if you are low on hardware resources. Check whether your machine conforms to the required software and hardware requirements.
    https://technet.microsoft.com/en-us/library/cc262485.aspx
    http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
    Please remember to up-vote or mark the reply as answer if you find it helpful.

  • Taking too much time to load application

Hi,
I have deployed a J2EE application on Oracle Application Server 10g, version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
I have another 10g server (same version) on which the same application loads very fast.
When I checked the Apache error logs I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of the stateful firewall.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
Please tell me where, or in which file, I should put the option
java -Dajp.keepalive=true -jar oc4j.jar
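In case it helps: in a managed OracleAS 10g install the OC4J JVM is normally started by OPMN rather than by hand, so (as far as I recall; please verify against your own install) the flag belongs in the java-options entry for your OC4J instance in $ORACLE_HOME/opmn/conf/opmn.xml, roughly like this, followed by a restart of the instance:
<category id="start-parameters">
  <data id="java-options" value="-server -Dajp.keepalive=true"/>
</category>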
