Query running long issue

Hi there,
I don’t know what the capabilities of SQL logging are, so I’m wondering if anyone can help.
We have two offices about 1.5 hours away from each other: Point A and Point B. People at Point B are complaining about lag when running queries against a database.
The exact same query that takes 40 minutes to run at Point A takes over 60 minutes at Point B.
Is there a way to profile the SQL queries down to the level of exactly how long each section takes?
I’m thinking specifically:
How long does it take, after you hit F5, to transfer the query to the server?
How long does the server take to actually process the query?
How long does it take to transfer the results back to the client once they are gathered?
We suspect network lag, but we can’t suggest solutions until we have metrics supporting the “your network is too slow” argument.
Thanks for your help in advance!

Hello,
SQL Profiler can trace a connection session after the client has successfully connected (logged in) to the SQL Server instance, but it cannot trace time spent before the login or after the logout.
To trace the connection session for a specific user or application, select the Security Audit event category (which contains the Audit Login and Audit Logout events) and the Sessions category.
As for query processing time, select the TSQL SQL:BatchCompleted event in the trace. The Duration column equals (EndTime column) - (StartTime column); this is the query processing time, reported in microseconds when the trace is saved to a file or table (the Profiler GUI displays milliseconds).
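Since Profiler only sees time spent on the server, a client-side check can split the wall-clock time into server work versus everything else. A minimal sketch, run from SSMS at each office (the SELECT and table name are placeholders for the real 40-minute query): SET STATISTICS TIME reports the server-side parse/compile and execution times, so any remaining difference in total wall-clock time between the two offices points at transfer/network overhead.

```sql
-- Sketch: separate server processing time from network/transfer time.
-- Run the same batch from Point A and Point B and compare the numbers.
SET STATISTICS TIME ON;   -- prints parse/compile and execution times
SET STATISTICS IO ON;     -- prints logical/physical reads

SELECT *                  -- placeholder for the real 40-minute query
FROM dbo.SomeLargeTable;  -- hypothetical table name

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

SSMS's Query > Include Client Statistics option also reports "Wait time on server replies" and "Client processing time" per run, which gives the same split without editing the batch.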
Regards,
Fanny Liu
TechNet Community Support

Similar Messages

  • Select query running long time

    Hi,
    DB version : 10g
    platform : sunos
    My SELECT query has been running a long time (more than 20 hours) and is still running.
    Is there any way to estimate the query's approximate completion time (time remaining)?
    Also, is there any possibility of speeding up an already-running query, e.g. by adding hints?
    Please help me on this .
    Thanks

    Hi Sathish thanks for your reply,
    I have already checked V$SESSION_LONGOPS, but it shows TIME_REMAINING = 0:
    select TOTALWORK, SOFAR, START_TIME, TIME_REMAINING from V$SESSION_LONGOPS where SID = '10'

    TOTALWORK      SOFAR  START_TIME   TIME_REMAINING
      1099759    1099759  27-JAN-11                 0

    Any idea?
    Thanks.
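    A sketch of the kind of progress query that is usually more useful here: rows where SOFAR = TOTALWORK (as above) describe operations that have already finished, so filtering on TIME_REMAINING > 0 shows only work still in flight. The :sid bind is a placeholder for the session of interest.

    ```sql
    -- Sketch: estimate remaining time for long-running operations in a session.
    SELECT opname,
           target,
           ROUND(100 * sofar / NULLIF(totalwork, 0), 1) AS pct_done,
           time_remaining                               AS secs_remaining
    FROM   v$session_longops
    WHERE  sid = :sid
    AND    time_remaining > 0;
    ```

    Note that V$SESSION_LONGOPS only tracks individual long operations (full scans, sorts, etc.), not the whole statement, so a 20-hour query may cycle through many short entries.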

  • Is index range scan the reason for query running long time

    I would like to know whether an index range scan is the reason for the query running a long time. Below is the explain plan. If so, how can it be optimised? Please help.
    Operation                      Object            COST  CARDINALITY  BYTES
    SELECT STATEMENT ()                               413         1000  265000
    COUNT (STOPKEY)
    FILTER ()
    TABLE ACCESS (BY INDEX ROWID)  ORDERS             413        58720  15560800
    INDEX (RANGE SCAN)             IDX_SERV_PROV_ID    13       411709
    TABLE ACCESS (BY INDEX ROWID)  ADDRESSES            2            1  14
    INDEX (UNIQUE SCAN)            SYS_C004605          1            1
    TABLE ACCESS (BY INDEX ROWID)  ADDRESSES            2            1  14
    INDEX (UNIQUE SCAN)            SYS_C004605          1            1
    TABLE ACCESS (BY INDEX ROWID)  ADDRESSES            2            1  14
    INDEX (UNIQUE SCAN)            SYS_C004605          1            1

    The index range scan means that the optimiser has determined that it is better to read the index than to perform a full table scan. So in answer to your question: quite possibly, but the alternative might take even longer!
    The best thing to do is to review your query and check that you need every table included in it, and that you are accessing the tables via the best route. For example, if you can access a table via a primary key index, that would be better than using a non-unique index. But the best way of reducing the time the query takes to run is to give it fewer tables (and indexes) to read.
    John Seaman
    http://www.asktheoracle.net
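    The flattened plan above also lacks predicate information, which is usually what decides whether the range scan on IDX_SERV_PROV_ID is sensible. A sketch of how to regenerate the plan with it (the SELECT is a placeholder for the poster's actual query against ORDERS/ADDRESSES, which the thread does not show):

    ```sql
    -- Sketch: capture the plan with access/filter predicates included.
    EXPLAIN PLAN FOR
    SELECT /* the problem query goes here (placeholder) */ *
    FROM   orders;

    SELECT * FROM TABLE(dbms_xplan.display);
    ```

    The 411709-row index scan feeding a 58720-row table access suggests most index entries are thrown away at the table level, which DBMS_XPLAN's filter predicates would confirm.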

  • Query running long time

    hi
    I have a query that has been running for a long time. I'm new to being a DBA; can anyone suggest methods to make it faster? It's running now and I have to make it execute faster.
    parallel servers = 4, and there are no inactive sessions.
    thanks in advance

    Make a habit of putting the database version in the post.
    As I told you before, it depends on a lot of things, not only merge (Cartesian) joins:
    1) It depends on the load the database is under. Was this query running fast before? If so, was the workload the same as today?
    2) Have any changes been made to the database or the server recently?
    3) Is only this query slow, or are all queries slow?
    4) When was the database last restarted?
    5) Are you using bind variables in the query?
    6) Is your library cache properly sized? If the query is doing lots of sorts, is your PGA properly sized?
    7) Is the database buffer cache properly sized?
    8) How much memory does your database have?
    9) Does your SGA fit in physical memory, or is it getting swapped?
    Etc., etc.
    Check all these things.
    Regards
    Kaunain

  • Query running longer time

    Hi All,
    When I run the query in the Analyzer, it takes a long time. The query is built on a DSO.
    Can anyone give me inputs on why the query is taking so much time?
    Thanks in Advance
    Reddy

    Hi,
    Follow this thread to find out how to improve query performance on an ODS:
    ODS Query Performance
    Achieving BI Query Performance - Building Business Intelligence:
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Hope this helps.
    Thanks,
    JituK

  • Query running long in APEX

    Hi ,
    I am using APEX version 4.1.1.00.23. I am running an interactive report in APEX that takes about 15-20 seconds. If I take the query out of the report and run it in SQL Developer, it runs in 4 seconds. Why does it run so much slower in APEX? It is a basic interactive report with one query (below). Is there a way I can tune it through APEX and see why it is taking so much longer?
    select c.rcn
      ,case when logical_level- (select logical_level from cd_customer where rcn = :P132_RCN) = 1 then '. '
           when logical_level - (select logical_level from cd_customer where  rcn = :P132_RCN) = 2 then '. . '
           when logical_level - (select logical_level from cd_customer where  rcn = :P132_RCN) = 3 then '. . . '
           when logical_level - (select logical_level from cd_customer where  rcn = :P132_RCN) = 4 then '. . . . '
           when logical_level - (select logical_level from cd_customer where  rcn = :P132_RCN) = 5 then '. . . . . '
       end || (logical_level - (select logical_level from cd_customer where  rcn = :P132_RCN)) ||
          ' ' || get_name(c.rcn,'D','1') DName
    ,PHYSICAL_LEVEL - (select physical_level from cd_customer where  RCN = :P132_RCN) "LEVEL"
    , nvl(sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank),0) PGPV
    , countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD)   DistCnt
    , countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt,  logical_lbound,c.rank,
    (select wr.abbreviation from  wd_ranknames wr where wr.rank = c.rank and wr.status=c.status) "rnk_abbrv"
    ,&P132_START_PERIOD,&P132_END_PERIOD ,&P132_RCN
    from cd_customer c
      where :P132_END_PERIOD > (select commission_closed from cd_parameters)  and logical_lbound > 0
    and logical_lbound between (select logical_lbound from cd_customer where rcn = :P132_RCN)
                            and (select logical_rbound from cd_customer where  rcn = :P132_RCN)
    and (logical_level - (select logical_level from cd_customer where rcn = :P132_RCN)) <=:P132_LEVELS                 
    union all
    select c.rcn
      ,case when logical_level- (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 1 then '. '
           when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 2 then '. . '
           when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 3 then '. . . '
           when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 4 then '. . . . '
           when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 5 then '. . . . . '
       end || (logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)) ||
          ' ' || get_name(c.rcn,'D','1') DName
    ,PHYSICAL_LEVEL - (select physical_level from wd_customer where pvperiod = :P132_END_PERIOD AND RCN = :P132_RCN) "LEVEL"
    , sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank) PGPV
    ,countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD) DistCnt
    ,countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt
    ,logical_lbound,c.rank,(select wr.abbreviation from
    wd_ranknames wr where wr.rank = c.rank and wr.status=c.status) "rnk_abbrv"
    ,&P132_START_PERIOD,&P132_END_PERIOD ,&p132_RCN
    from wd_customer c
      where pvperiod = :P132_END_PERIOD and logical_lbound > 0
    and logical_lbound between (select logical_lbound from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)
                            and (select logical_rbound from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)
    and (logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)) <=:P132_LEVELS   

    Sorry, my OCD must have kicked in. Try removing all those in-line queries; it's not an answer to why the report takes longer in APEX than elsewhere, but it might help.
    SELECT c.rcn ,
      CASE
        WHEN c.logical_level - cv.logical_level = 1  THEN '. '
        WHEN c.logical_level - cv.logical_level = 2  THEN '. . '
        WHEN c.logical_level - cv.logical_level = 3  THEN '. . . '
        WHEN c.logical_level - cv.logical_level = 4  THEN '. . . . '
        WHEN c.logical_level - cv.logical_level = 5  THEN '. . . . . '
      END
      || (c.logical_level - cv.logical_level)
      || ' '
      || get_name(c.rcn,'D','1') DName ,
      c.PHYSICAL_LEVEL - cv.physical_level "LEVEL" ,
      NVL(sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank),0) PGPV ,
      countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD) DistCnt ,
      countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt,
      c.logical_lbound,
      c.rank,
      (SELECT wr.abbreviation
        FROM wd_ranknames wr
        WHERE wr.rank = c.rank
          AND wr.status =c.status ) "rnk_abbrv" ,
      &P132_START_PERIOD,
      &P132_END_PERIOD ,
      &P132_RCN
    FROM cd_customer c,
        cd_customer cv
    WHERE cv.rcn = :P132_RCN
      AND :P132_END_PERIOD > (SELECT commission_closed FROM cd_parameters )
      AND c.logical_lbound > 0
      AND c.logical_lbound BETWEEN cv.logical_lbound AND cv.logical_rbound
      AND (c.logical_level - cv.logical_level) <= :P132_LEVELS
    UNION ALL
    SELECT c.rcn ,
      CASE
        WHEN c.logical_level - cv.logical_level = 1 THEN '. '
        WHEN c.logical_level - cv.logical_level = 2 THEN '. . '
        WHEN c.logical_level - cv.logical_level = 3 THEN '. . . '
        WHEN c.logical_level - cv.logical_level = 4 THEN '. . . . '
        WHEN c.logical_level - cv.logical_level = 5 THEN '. . . . . '
      END
      || (c.logical_level - cv.logical_level )
      || ' '
      || get_name(c.rcn,'D','1') DName ,
      c.PHYSICAL_LEVEL - cv.physical_level "LEVEL" ,
      sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank) PGPV ,
      countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD) DistCnt ,
      countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt ,
      c.logical_lbound,
      c.rank,
      (SELECT wr.abbreviation
      FROM wd_ranknames wr
      WHERE wr.rank = c.rank
      AND wr.status =c.status
      ) "rnk_abbrv" ,
      &P132_START_PERIOD,
      &P132_END_PERIOD ,
      &p132_RCN
    FROM wd_customer c,
      wd_customer cv
    WHERE cv.pvperiod    = :P132_END_PERIOD
    AND cv.rcn          = :P132_RCN
    AND c.pvperiod      = :P132_END_PERIOD
    AND c.logical_lbound > 0
    AND c.logical_lbound BETWEEN cv.logical_lbound AND cv.logical_rbound
    AND (c.logical_level - cv.logical_level ) <= :P132_LEVELS

  • No data query runs longer time

    I have a table with 50 million records, partitioned based on date.
    If I do the query select * from test where trade_date = '01-mar-2010', it brings back the records in less than a second. Works perfectly.
    But if there is no data for a given date in the table, the query takes 1 to 2 minutes to complete.
    Why does the query take that much longer to come back with NO DATA?
    comments are appreciated..
    note:
    i use 11g.
    statistics are collected.

    hello,
    trade_date is range partitioned, and the table has data every day except on weekends and holidays.
    PARTITION BY RANGE (transaction_DT)
    PARTITION P001 VALUES LESS THAN (TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P002 VALUES LESS THAN (TO_DATE(' 2003-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P003 VALUES LESS THAN (TO_DATE(' 2004-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P004 VALUES LESS THAN (TO_DATE(' 2005-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P005 VALUES LESS THAN (TO_DATE(' 2006-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P006 VALUES LESS THAN (TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P007 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P008 VALUES LESS THAN (TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P009 VALUES LESS THAN (TO_DATE(' 2010-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P010 VALUES LESS THAN (TO_DATE(' 2011-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P011 VALUES LESS THAN (TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P012 VALUES LESS THAN (TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P013 VALUES LESS THAN (TO_DATE(' 2014-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P014 VALUES LESS THAN (TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P015 VALUES LESS THAN (TO_DATE(' 2016-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P016 VALUES LESS THAN (TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P017 VALUES LESS THAN (TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P018 VALUES LESS THAN (TO_DATE(' 2019-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P019 VALUES LESS THAN (TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P020 VALUES LESS THAN (TO_DATE(' 2021-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P021 VALUES LESS THAN (TO_DATE(' 2022-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P022 VALUES LESS THAN (TO_DATE(' 2023-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P023 VALUES LESS THAN (TO_DATE(' 2024-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P024 VALUES LESS THAN (TO_DATE(' 2025-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
    PARTITION P025 VALUES LESS THAN (TO_DATE(' 9999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    Edited by: user520824 on Sep 1, 2010 12:12 PM
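    With yearly range partitions like these, a single-day predicate only prunes down to the whole year's partition, so a date with no rows can still mean scanning a full year of data unless an index on the date column is used. A sketch of how to confirm what pruning actually happens (table name from the thread; the literal date is illustrative):

    ```sql
    -- Sketch: the Pstart/Pstop columns in the plan show which partitions
    -- the optimizer will touch for this predicate.
    EXPLAIN PLAN FOR
    SELECT * FROM test WHERE trade_date = DATE '2010-03-01';

    SELECT * FROM TABLE(dbms_xplan.display);
    ```

    If the plan shows a full scan of one partition, the slow "no data" case is consistent with reading the entire year's partition to find nothing.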

  • Query runs long

    here is the scenario,
    insert into xxxx
    select *
    from MView1 a, Table1 b, table2 C,
    Mview2 D
    where a.source_id= b.source_id and a.code = b.code and a.number = b.number
    AND C.SOURCE_ID= A.SOURCE_ID AND A.ID=C.ID
    AND A.SOURCE_ID = D.SOURCE_ID(+) AND A.ID = D.ID(+) AND
    A.IT_ID=D.IT_ID(+);
    The query usually takes 20 mins to complete, but now it runs forever. Here is the explain plan:
    PLAN_TABLE_OUTPUT
    Plan hash value: 2900817873
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 1 | 174 | 1385 (1)| 00:00:17 | | | |
    | 1 | PX COORDINATOR | | | | | | | | |
    | 2 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 174 | 1385 (1)| 00:00:17 | Q1,01 | P->S | QC (RAND) |
    | 3 | NESTED LOOPS OUTER | | 1 | 174 | 1385 (1)| 00:00:17 | Q1,01 | PCWP | |
    | 4 | NESTED LOOPS | | 1 | 140 | 1385 (2)| 00:00:17 | Q1,01 | PCWP | |
    | 5 | MERGE JOIN CARTESIAN | | 1 | 42 | 141 (0)| 00:00:02 | Q1,01 | PCWP | |
    | 6 | SORT JOIN | | | | | | Q1,01 | PCWP | |
    | 7 | PX RECEIVE | | 1 | 33 | 138 (0)| 00:00:02 | Q1,01 | PCWP | |
    | 8 | PX SEND BROADCAST | :TQ10000 | 1 | 33 | 138 (0)| 00:00:02 | Q1,00 | P->P | BROADCAST |
    | 9 | PX BLOCK ITERATOR | | 1 | 33 | 138 (0)| 00:00:02 | Q1,00 | PCWC | |
    | 10 | TABLE ACCESS FULL | TABLE1                | 1 | 33 | 138 (0)| 00:00:02 | Q1,00 | PCWP | |
    | 11 | BUFFER SORT | | 364 | 3276 | 3 (0)| 00:00:01 | Q1,01 | PCWP | |
    | 12 | PX BLOCK ITERATOR | | 364 | 3276 | 2 (0)| 00:00:01 | Q1,01 | PCWC | |
    | 13 | TABLE ACCESS FULL | TABLE2           | 364 | 3276 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
    |* 14 | MAT_VIEW ACCESS BY INDEX ROWID | MVIEW1      | 1 | 98 | 1385 (2)| 00:00:17 | Q1,01 | PCWP | |
    | 15 | BITMAP CONVERSION TO ROWIDS | | | | | | Q1,01 | PCWP | |
    | 16 | BITMAP AND | | | | | | Q1,01 | PCWP | |
    |* 17 | BITMAP INDEX SINGLE VALUE | BM2_MVIEW1      | | | | | Q1,01 | PCWP | |
    | 18 | BITMAP CONVERSION FROM ROWIDS| | | | | | Q1,01 | PCWP | |
    | 19 | SORT ORDER BY | | | | | | Q1,01 | PCWP | |
    |* 20 | INDEX RANGE SCAN | N3_MVIEW1 | 24100 | | 214 (3)| 00:00:03 | Q1,01 | PCWP | |
    | 21 | MAT_VIEW ACCESS BY INDEX ROWID | MVIEW2 | 1 | 34 | 3 (0)| 00:00:01 | Q1,01 | PCWP | |
    |* 22 | INDEX RANGE SCAN | U1_MVIEW2 | 1 | | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$1
    10 - SEL$1 / B@SEL$1
    13 - SEL$1 / C@SEL$1
    14 - SEL$1 / A@SEL$1
    21 - SEL$1 / D@SEL$1
    22 - SEL$1 / D@SEL$1
    Predicate Information (identified by operation id):
    14 - filter("A"."NUMBER"="B"."NUMBER")
    17 - access("A"."CODE"="B"."CODE")
    20 - access("A"."SOURCE_ID"="B"."SOURCE_ID" AND "A"."ID"="C"."ID")
         filter("C"."SOURCE_ID"="A"."SOURCE_ID" AND "A"."ID"="C"."ID" AND "A"."SOURCE_ID"="B"."SOURCE_ID")
    22 - access("A"."SOURCE_ID"="D"."SOURCE_ID"(+) AND "A"."ID"="D"."ID"(+) AND "A"."IT_ID"="D"."IT_ID"(+))
    Please help how to get back to the original completion timing.
    -thanks

    Here is the original execution plan from when it completed in 20 mins. The indexes have not been dropped...
    PLAN_TABLE_OUTPUT
    Plan hash value: 464730497
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 226 | 34578 | 100K (5)| 00:20:04 | | | |
    | 1 | PX COORDINATOR | | | | | | | | |
    | 2 | PX SEND QC (RANDOM) | :TQ10003 | 226 | 34578 | 100K (5)| 00:20:04 | Q1,03 | P->S | QC (RAND) |
    | 3 | BUFFER SORT | | 226 | 34578 | | | Q1,03 | PCWP | |
    | 4 | NESTED LOOPS OUTER | | 226 | 34578 | 100K (5)| 00:20:04 | Q1,03 | PCWP | |
    |* 5 | HASH JOIN | | 226 | 26894 | 100K (5)| 00:20:02 | Q1,03 | PCWP | |
    | 6 | PX RECEIVE | | 3491K| 69M| 534 (6)| 00:00:07 | Q1,03 | PCWP | |
    | 7 | PX SEND HASH | :TQ10001 | 3491K| 69M| 534 (6)| 00:00:07 | Q1,01 | P->P | HASH |
    | 8 | MERGE JOIN CARTESIAN | | 3491K| 69M| 534 (6)| 00:00:07 | Q1,01 | PCWP | |
    | 9 | SORT JOIN | | | | | | Q1,01 | PCWP | |
    | 10 | PX RECEIVE | | 364 | 3276 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
    | 11 | PX SEND BROADCAST | :TQ10000 | 364 | 3276 | 2 (0)| 00:00:01 | Q1,00 | P->P | BROADCAST |
    | 12 | PX BLOCK ITERATOR | | 364 | 3276 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
    | 13 | TABLE ACCESS FULL | TABLE2      | 364 | 3276 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
    | 14 | BUFFER SORT | | 9592 | 112K| 532 (6)| 00:00:07 | Q1,01 | PCWP | |
    | 15 | PX BLOCK ITERATOR | | 9592 | 112K| 138 (0)| 00:00:02 | Q1,01 | PCWC | |
    | 16 | TABLE ACCESS FULL | TABLE1           | 9592 | 112K| 138 (0)| 00:00:02 | Q1,01 | PCWP | |
    | 17 | PX RECEIVE | | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,03 | PCWP | |
    | 18 | PX SEND HASH | :TQ10002 | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,02 | P->P | HASH |
    | 19 | PX BLOCK ITERATOR | | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,02 | PCWC | |
    | 20 | MAT_VIEW ACCESS FULL | MVIEW1     | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,02 | PCWP | |
    | 21 | MAT_VIEW ACCESS BY INDEX ROWID| MVIEW2 | 1 | 34 | 3 (0)| 00:00:01 | Q1,03 | PCWP | |
    |* 22 | INDEX RANGE SCAN | U1_MVIEW2 | 1 | | 2 (0)| 00:00:01 | Q1,03 | PCWP | |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$1
    13 - SEL$1 / C@SEL$1
    16 - SEL$1 / B@SEL$1
    20 - SEL$1 / A@SEL$1
    21 - SEL$1 / D@SEL$1
    22 - SEL$1 / D@SEL$1
    Predicate Information (identified by operation id):
    5 - access("A"."SOURCE_ID"="B"."SOURCE_ID" AND "A"."CODE"="B"."CODE" AND "A"."NUMBER"="B"."NUMBER" AND
        "C"."SOURCE_ID"="A"."SOURCE_ID" AND "A"."ID"="C"."ID")
    22 - access("A"."SOURCE_ID"="D"."SOURCE_ID"(+) AND "A"."ID"="D"."ID"(+) AND "A"."IT_ID"="D"."IT_ID"(+))

  • Form query too long running

    I am dealing with an issue that I believe I have boiled down to being a Forms issue. One of my developers has a form that takes 40+ minutes to run a pretty complicated query. At first I believed it was a query or development issue; however, the same query runs from Toad or from SQL*Plus in under a few seconds. I have even run the query from SQL*Plus on the Forms server with the same speedy performance. The only environment in which this query takes almost an hour is when it is run from her .FMX. I am so at a loss right now as to what I could do to fix this. Has anyone experienced something of this nature?
    Additionally, the query returns ZERO results, and this is the expected outcome, so I don't believe it has to do with Toad buffering or SQL*Plus returning rows as they are fetched. Anyway, I'm at a loss, and any help whatsoever will be greatly appreciated.

    To show what can go wrong look at this simple example.
    HR@> CREATE TABLE a (ID VARCHAR2(10) PRIMARY KEY);
    Table created.
    HR@>
    HR@> insert into a select rownum from dual connect by rownum <= 1e6;
    1000000 rows created.
    HR@>
    HR@> set timing on
    HR@>
    HR@> select * from a where id = 100;
    ID
    100
    Elapsed: 00:00:00.34
    HR@>
    HR@> select * from a where id = '100';
    ID
    100
    Elapsed: 00:00:00.00
    HR@> explain plan for
      2* select * from a where id = 100
    HR@>
    HR@> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2248738933
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |     7 |   522  (12)| 00:00:07 |
    |*  1 |  TABLE ACCESS FULL| A    |     1 |     7 |   522  (12)| 00:00:07 |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       1 - filter(TO_NUMBER("ID")=100)

    Because of the implicit conversion (as the plan's filter shows), select * from a where id = 100 takes longer than select * from a where id = '100'.

  • BEX WAD 7.0:  Chart Takes Long Time to Display in Portal - Query runs FAST

    I have a BEx WAD 7.0 template which contains 3 column charts (each with its own separate DataProvider/query).  When the page loads, two of the charts show up right away, but one of them takes almost a minute to display on the screen (I thought it was missing at first).
    I ran all three queries in the BEx Query Analyzer (including the one for the chart that takes forever to load) and they all complete within 3 seconds of hitting "Execute."  So I don't believe it is the query causing this issue.
    The chart that doesn't show up right away does have more data to display than the other two...but I have queries/charts on other web templates that contain 3-times the data of this one and show up fine when executed in the portal.
    Anyone else having this issue or have an idea on how I can optimize the WAD charts and/or find out what is causing this issue?  Again...the query that fuels this chart completes its execution in about 3-4 seconds.
    Thank you for your time and of course points will be assigned accordingly.
    Kevin

    Hi,
    have you already checked how much time the IGS consumes when creating the charts?
    Run TA SIGS and check the statistics values.
    Regards, Kai

  • Query takes longer to run with indexes.

    Here is my situation. I have a query which I used to run in the production (Oracle 9.2.0.5) and reporting (9.2.0.3) databases. The time taken in both databases was almost the same, about 2 minutes, until 2 months ago. Now in production the query does not finish at all, whereas in reporting it continues to run in about 2 minutes.
    Some things I observed in production: the optimizer_index_cost_adj parameter was changed from 100 to 20 about 3 months ago, to improve the performance of a paycalc program. Even with this parameter set to 20, the query used to run in 2 minutes until 2 months ago. In the last two months the GL table grew from 25 million rows to 27 million. With optimizer_index_cost_adj = 20 and 25 million rows it runs fine, but with 27 million rows it does not finish. If I change optimizer_index_cost_adj to 100, the query runs with 27 million rows in 2 minutes, and I found that it uses a full table scan. In the reporting database it always used a full table scan, as found through explain plan. The CBO determines which scan is best and uses it.
    So my question is: by setting optimizer_index_cost_adj = 20, does Oracle force an index scan when the table has 27 million rows? Isn't an index scan faster than a full table scan? In what situations is a full table scan faster than an index scan? If I drop all the indexes on the GL table, the query runs faster in production, as it uses a full table scan. What is the real benefit of changing optimizer_index_cost_adj values? Any input is most welcome.

    Isn't an index scan faster than a full table scan? In what situations is a full table scan faster than an index scan?
    No. It is not about which one is the "fastest", as that concept is flawed. How can an index be "faster" than a table, for example? Does it have better tires and a shinier paint job? ;-)
    It is about the amount of I/O that the database needs to perform in order to use that object's contents for resolving/executing that applicable SQL statement.
    If the CBO determines that it needs 100 widgets' worth of I/O to scan the index, and then another 100 widgets of I/O to scan the table, it may decide not to use the index at all, as a full table scan will cost only 180 I/O widgets - 20 less than the combined scanning of index and table.
    Also, a full scan can make use of multi-block reads - and this, on most storage/file systems, is faster than single block reads.
    So no - a full table scan is NOT a Bad Thing (tm) and not an indicator of a problem. The thing that is of concern is the amount of I/O. The more I/O, the slower the operation. So obviously, we want to make sure that we design SQL that requires the minimal amount of I/O, design a database that support minimal I/O to find the required data (using clusters/partitions/IOTs/indexes/etc), and then check that the CBO also follows suit (which can be the complex bit).
    But before questioning the CBO, first question your code and design - and whether or not they provide the optimal (smallest) I/O footprint for the job at hand.
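    A sketch of how the parameter's effect could be compared at session level before touching anything instance-wide (the GL query, table name, and bind are placeholders standing in for the poster's statement; optimizer_index_cost_adj is the real Oracle parameter):

    ```sql
    -- Sketch: with 20, index access paths are costed at 20% of their real
    -- cost and look artificially cheap, which can flip the plan to an
    -- index scan even when a full scan does less total I/O.
    ALTER SESSION SET optimizer_index_cost_adj = 20;
    EXPLAIN PLAN FOR SELECT /* GL query (placeholder) */ * FROM gl WHERE period = :p;
    SELECT * FROM TABLE(dbms_xplan.display);

    ALTER SESSION SET optimizer_index_cost_adj = 100;  -- default costing
    EXPLAIN PLAN FOR SELECT /* GL query (placeholder) */ * FROM gl WHERE period = :p;
    SELECT * FROM TABLE(dbms_xplan.display);
    ```

    Comparing the two plans and costs side by side shows whether the parameter, rather than the table growth itself, is what flipped the access path.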

  • SQL query slow when issued by app, fast when issued mnaually

    Hi there,
    I have a more general question about a specific Oracle behaviour.
    I update a feature from within an application. The application doesn't respond and I finally have to terminate it. I checked in Oracle whether a query was running long, using the following statement:
    select s.username,s.sid,s.serial#,s.last_call_et/60 mins_running,q.sql_text from v$session s
    join v$sqltext_with_newlines q
    on s.sql_address = q.address
    where status='ACTIVE'
    and type <>'BACKGROUND'
    and last_call_et> 60
    order by sid,serial#,q.piece
    The result of the above query is:
    WITH CONNECTION AS (
      SELECT * FROM WW_CONN C
      WHERE (C.FID_FROM IN (SELECT FID FROM WW_LINE WHERE FID_ATTR = :B1) AND C.F_CLASS_ID_FROM = 22)
         OR (C.FID_TO   IN (SELECT FID FROM WW_LINE WHERE FID_ATTR = :B1) AND C.F_CLASS_ID_TO   = 22)
    )
    SELECT MIN(P.FID_ATTR) AS FID_FROM
    FROM CONNECTION C, WW_POINT P
    WHERE (P.FID = C.FID_FROM AND C.F_CLASS_ID_FROM = 32 AND C.FLOW = 1)
       OR (P.FID = C.FID_TO   AND C.F_CLASS_ID_TO   = 32 AND C.FLOW = 2)
    I have a different tool which shows me the binding parameter values. So I know that the value for :B1 is 5011 - the id of the feature being updated. This query runs for 20 mins and longer before it eventually stops. The update process involves multiple sql statements - so this one is not doing the update but is part of the process.
    Here is the bit I do not understand: when I run the query in SQL Developer with value 5011 for :B1 it takes 0.5 secs to return a result.
    Why is it, that the sql statement takes so long when issued by the application but takes less than a second when I run it manually?
    I sent a dump of the data to the application vendor who is not able to reproduce the issue in their environment. Could someone explain to me what happens here or give me some keywords for further research?
    We are using 11gR2, 64bit.
    Many thanks,
    Rob

    Hi Rob,
    at least you should see some differences in the statistics for the different child cursor (the one for the execution in the application should show at least a higher value for ELAPSED_TIME). I would use something like the following query to check the information for the child cursors:
    select sql_id
         , PLAN_HASH_VALUE
         , CHILD_NUMBER
         , EXECUTIONS
         , ELAPSED_TIME
         , USER_IO_WAIT_TIME
         , CONCURRENCY_WAIT_TIME
         , DISK_READS
         , BUFFER_GETS
         , ROWS_PROCESSED
      from v$sql
    where sql_id = your_sql_id

    Regards
    Martin
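    To compare the actual plans behind those child cursors, something like the following could be used ('your_sql_id' is the same placeholder as above; NULL dumps all child cursors of the statement):

    ```sql
    -- Sketch: show the plan of every child cursor for the statement, so the
    -- application's plan can be compared with SQL Developer's.
    SELECT * FROM TABLE(
      dbms_xplan.display_cursor('your_sql_id', NULL, 'TYPICAL')
    );
    ```

    Different PLAN_HASH_VALUEs between the children usually point at bind peeking or differing optimizer environments (e.g. session settings) between the application's session and the manual one.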

  • Query running sometimes slow and sometimes fast on both prod and dev. Help

    We are running a job that is behaving so inconsistently that I am ready to jump off the 19th floor. :-)
    This query, which goes against one table, was grinding to a halt in production. After 4 days of investigation we thought it was resources on the production box. Then we had the same issue on a dev box, which gets updated with production data every day. There is a 3rd box. The DBA ran update statistics on the 3rd box and the job was never slow there. When we updated the 2nd (dev) box with statistics from the 3rd box, the job also ran fine. So we thought we knew for sure that it was the statistics we needed to update. Then, for business testing, the 2nd and 3rd boxes were updated with data and statistics from the production box (the troubled one). We thought surely we would see issues on the 2nd and 3rd boxes, but the job ran just fine on them. As I said, the 2nd box gets updated with production data every day; after last night's refresh this job is running long on the 2nd box again. We are really puzzled. Has anyone experienced anything like this before?
    thanks in advance.
    Reaz.
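
    One way to make the statistics-copying experiment described above repeatable is the DBMS_STATS export/import routines. A sketch, with a hypothetical schema APP and using the ORD_FX table from the trace in this thread (adjust both names to your environment):

    ```sql
    -- On the source box: stage the optimizer stats in a regular table.
    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_COPY');
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'APP', tabname => 'ORD_FX',
                                    stattab => 'STATS_COPY');
    END;
    /
    -- Copy the STATS_COPY table to the target box (export/import, db link, ...), then:
    BEGIN
      DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'APP', tabname => 'ORD_FX',
                                    stattab => 'STATS_COPY');
    END;
    /
    ```

    That lets you swap the "good" and "bad" statistics back and forth on one box and confirm whether the statistics alone explain the plan change.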

    Our DBA checks the execution plan whenever we run the job.
    He is running a trace right now; here is the result:
    SELECT STATUS_FLAG, FSI_TID, FSI_REC_TID, AREA, VALUE_DATE, CANCEL_DATE,
    TO_DATE(ENTRY_DATE_TIME), PRODUCT, CUST_TID
    FROM
    ORD_FX WHERE WSS_GDP_SITE = :B3 AND DEAL_NUMBER = :B2 AND TICKET_AREA = :B1
    call      count       cpu   elapsed      disk      query   current      rows
    -------  ------  --------  --------  --------  ---------  --------  --------
    Parse         0      0.00      0.00         0          0         0         0
    Execute     514      0.23      0.27         0          0         0         0
    Fetch       514    253.40    247.44         0   16932188         0       514
    -------  ------  --------  --------  --------  ---------  --------  --------
    total      1028    253.63    247.71         0   16932188         0       514

    Misses in library cache during parse: 0
    Optimizer mode: CHOOSE
    Parsing user id: 26 (recursive depth: 1)

    Elapsed times include waiting on following events:
      Event waited on                       Times Waited   Max. Wait   Total Waited
      ------------------------------------  ------------  ----------  ------------
      latch: cache buffers chains                      2        0.00          0.00
    There is no I/O issue in any database at any time. We saw slightly high I/O in production yesterday, so we thought we were looking at a resource issue, but today the system is fine and we still had no luck with the job.
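
    For what it's worth, the trace above shows about 16.9 million buffer gets to fetch 514 rows, roughly 33,000 gets per row, which is the classic signature of a full table scan repeated once per execution. A composite index on the three equality predicates is the usual fix; a sketch (the index name is made up, and you should first confirm that no suitable index already exists):

    ```sql
    -- Covers the predicates WSS_GDP_SITE, DEAL_NUMBER and TICKET_AREA
    -- from the traced statement against ORD_FX.
    CREATE INDEX ord_fx_site_deal_area_ix
      ON ord_fx (wss_gdp_site, deal_number, ticket_area);
    ```

    With such an index in place, each execution should need only a handful of buffer gets instead of ~33,000.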

  • Query taking long

    Hi, I have a query which is running long.
    I have checked every tuning option. I have attached the explain plan below; it seems to be doing a Cartesian product.
    SELECT   analytic_source_cd,
               SUM (CASE WHEN pricing_dt = '24jan2014' THEN cnt ELSE 0 END)
                  AS Prev_Count,
               SUM (CASE WHEN pricing_dt = '27jan2014' THEN cnt ELSE 0 END)
                  AS Current_Count
        FROM   (SELECT   af.analytic_source_cd,
                           af.pricing_dt,
                           COUNT (DISTINCT af.fi_instrument_id) cnt
                    FROM   analytics_fact af,
                           fund f,
                           instrument_alternate_id iai,
                           (SELECT   pricing_dt, vendor_instrument_id, index_cd
                              FROM   fi_idx_benchmark_holdings
                             WHERE   pricing_dt IN
                                           ('24jan2014', '27jan2014')
                            UNION
                            SELECT   pricing_dt, vendor_instrument_id, index_cd
                              FROM   fi_idx_forward_holdings
                             WHERE   pricing_dt IN
                                           ('24jan2014', '27jan2014')) bh
                   WHERE     
                            af.pricing_dt = bh.pricing_dt
                           AND f.official_index = bh.index_cd
                           AND af.fi_instrument_id = iai.fi_instrument_id
                           AND bh.vendor_instrument_id = iai.alternate_id
                           AND iai.alternate_id_type_code IN ('FMR_CUSIP', 'CUSIP')
                           and  af.pricing_dt IN ('24jan2014', '27jan2014')
                           AND f.official_index IS NOT NULL
                           AND af.oad IS NOT NULL
                 GROUP BY   af.analytic_source_cd, af.pricing_dt)
    GROUP BY   analytic_source_cd
    ORDER BY   1;
    Please check the below .
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 210,133  Bytes: 27  Cardinality: 1              
    27 SORT GROUP BY  Cost: 210,133  Bytes: 27  Cardinality: 1             
      26 VIEW A519350. Cost: 210,133  Bytes: 27  Cardinality: 1            
       25 HASH GROUP BY  Cost: 210,133  Bytes: 26  Cardinality: 1           
        24 VIEW VIEW SYS.VM_NWVW_1 Cost: 210,133  Bytes: 26  Cardinality: 1          
         23 HASH GROUP BY  Cost: 210,133  Bytes: 87  Cardinality: 1         
          22 HASH JOIN  Cost: 210,132  Bytes: 87  Cardinality: 1        
           10 MERGE JOIN CARTESIAN  Cost: 130,054  Bytes: 63  Cardinality: 1       
            7 NESTED LOOPS  Cost: 129,831  Bytes: 61  Cardinality: 1      
             4 INLIST ITERATOR     
              3 PARTITION RANGE ITERATOR  Cost: 129,827  Bytes: 30  Cardinality: 1  Partition #: 10  Partitions accessed #KEY(INLIST)  
               2 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_PORTFOLIO_DM.ANALYTICS_FACT Cost: 129,827  Bytes: 30  Cardinality: 1  Partition #: 10  Partitions accessed #KEY(INLIST) 
                1 INDEX RANGE SCAN INDEX (UNIQUE) FI_PORTFOLIO_DM.ANALYTICS_FACT_PK Cost: 667  Cardinality: 206,474  Partition #: 10  Partitions accessed #KEY(INLIST)
             6 PARTITION LIST INLIST  Cost: 4  Bytes: 31  Cardinality: 1  Partition #: 13  Partitions accessed #KEY(INLIST)   
              5 INDEX RANGE SCAN INDEX (UNIQUE) FI_REFERENCE.INSTRUMENT_ALTERNATE_ID_PPK Cost: 4  Bytes: 31  Cardinality: 1  Partition #: 13  Partitions accessed #KEY(INLIST)  
            9 BUFFER SORT  Cost: 130,050  Bytes: 1,642  Cardinality: 821      
             8 TABLE ACCESS FULL TABLE FI_REFERENCE.FUND Cost: 224  Bytes: 1,642  Cardinality: 821     
           21 VIEW A519350. Cost: 80,049  Bytes: 63,861,216  Cardinality: 2,660,884       
            20 SORT UNIQUE  Cost: 80,049  Bytes: 66,522,100  Cardinality: 2,660,884      
             19 UNION-ALL     
              14 INLIST ITERATOR    
               13 PARTITION RANGE ITERATOR  Cost: 24,599  Bytes: 25,284,850  Cardinality: 1,011,394  Partition #: 21  Partitions accessed #KEY(INLIST) 
                12 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_BENCHMARK.FI_IDX_BENCHMARK_HOLDINGS Cost: 24,599  Bytes: 25,284,850  Cardinality: 1,011,394  Partition #: 21  Partitions accessed #KEY(INLIST)
                 11 INDEX RANGE SCAN INDEX FI_BENCHMARK.FI_IDX_BENCHMARK_HOLDINGS_I2 Cost: 1,973  Cardinality: 1,011,394  Partition #: 21  Partitions accessed #KEY(INLIST)
              18 INLIST ITERATOR    
               17 PARTITION RANGE ITERATOR  Cost: 36,066  Bytes: 41,237,250  Cardinality: 1,649,490  Partition #: 25  Partitions accessed #KEY(INLIST) 
                16 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_BENCHMARK.FI_IDX_FORWARD_HOLDINGS Cost: 36,066  Bytes: 41,237,250  Cardinality: 1,649,490  Partition #: 25  Partitions accessed #KEY(INLIST)
                 15 INDEX RANGE SCAN INDEX FI_BENCHMARK.FI_IDX_FORWARD_HOLDINGS_I2 Cost: 3,499  Cardinality: 1,649,490  Partition #: 25  Partitions accessed #KEY(INLIST)
    Could you please point out anything I have missed?

    One nice best practice: do not hard-code date literals and rely on implicit conversion; use the TO_DATE function with an explicit format mask instead.
    For the performance issue, check the order in which the tables are joined. For example, you only need the af.pricing_dt IN ('24jan2014', '27jan2014') date range on the af table, but you first join all the matching columns with the bh table and only then apply the date condition to af, so the number of intermediate rows processed will be higher.
    Another nice best practice: use JOIN keywords when joining tables. Putting everything in the WHERE clause makes the code complicated. Simplicity is not easy, but it is impressive.
    Regards,
    Dilek
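
    To illustrate the first two points, here is a sketch of how the fact-table fragment of the original query could be written with explicit TO_DATE conversions and ANSI join syntax (only a fragment, using the original table and column names; the 'DD-MON-YYYY' format mask is an assumption about the intended literals):

    ```sql
    -- Filter af on the date range early, before joining to the holdings union,
    -- so the intermediate row counts stay small.
    SELECT af.analytic_source_cd,
           af.pricing_dt,
           COUNT(DISTINCT af.fi_instrument_id) AS cnt
      FROM analytics_fact af
      JOIN instrument_alternate_id iai
        ON iai.fi_instrument_id = af.fi_instrument_id
     WHERE af.pricing_dt IN (TO_DATE('24-JAN-2014', 'DD-MON-YYYY'),
                             TO_DATE('27-JAN-2014', 'DD-MON-YYYY'))
       AND iai.alternate_id_type_code IN ('FMR_CUSIP', 'CUSIP')
     GROUP BY af.analytic_source_cd, af.pricing_dt;
    ```

    The same TO_DATE treatment would also apply to the pricing_dt predicates inside the fi_idx_benchmark_holdings / fi_idx_forward_holdings union.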

  • Query runs forever when using selection on line item dim

    We have a cube ZCUBE which has PO number as a line item dimension.
    Every month users run queries on this cube using PO number as a selection
    criterion with a wildcard search on this field. Every time
    the queries have run fine. However, this month when we try to do so,
    the query runs forever and no results are returned. I also tried
    LISTCUBE with a similar selection, but it did not return any results either.
    Our production system has a lot of data; in the test system it worked fine.
    I checked the cube for compression and indexes in production, and they look fine.
    Can anyone think of anything that could have gone wrong? I also ran RSRV tests,
    but they all come back green.
    We have not done any development on this cube, but we have moved to
    new hardware in the past month. Can anyone think of any reasons, or
    anything that could help me catch the issue? All suggestions welcome.

    Not sure,
    Are you saying that you need both counts separately, or a combined count?
    How are you joining the alpha and beta tables? What is the join condition?
    you need to do something like this,
    SELECT COUNT(CASE WHEN
                        (A.col1 = 'Pete'  AND SUBSTR(A.col2, 1, 12) = SUBSTR(B.col2, 1, 13))
                     OR (A.col1 != 'Pete' AND SUBSTR(A.col2, 1, 15) = SUBSTR(B.col2, 1, 15))
                      THEN 1
                   END)   -- no ELSE branch: COUNT ignores NULLs, so only matching rows are counted
      FROM alpha A, beta B
     WHERE A.join_column = B.join_column;
    G.
