To_Date Slow in Where Clause

Hi,
I have a simple Oracle query.
Table: testTable
Field: timestamp
Data Type: Date
The timestamp field holds dates in a format like '2/2/2012 10:15:11 AM'.
Index: there is no index on this table.
I want to fetch the records between two dates.
The query takes a long time to run when there is no data in the table between the dates.
If there is data between the dates, I get the result quickly.
The query looks like:
select * from TestTable
where timestamp between TO_Date('Feb 3 2012 10:00AM', 'MM/dd/yyyy HH12:MI AM') and sysdate
As per the above query, I don't have any data after 3rd Feb, and it took a long time to get a result.
How can I decrease the execution time?
Edited by: 919325 on Mar 7, 2012 3:47 AM

Hi,
I did a test with this data
SQL> with data as (
select 'a' as a, systimestamp-30 as timestamp from dual
union all
select 'b', systimestamp-20 from dual
union all
select 'c', systimestamp-10 from dual
union all
select 'd', systimestamp-5 from dual
union all
select 'e', systimestamp from dual)
select * from data;
A TIMESTAMP
a 06-FEB-12 12.44.34.000000 PM +01:00
b 16-FEB-12 12.44.34.000000 PM +01:00
c 26-FEB-12 12.44.34.000000 PM +01:00
d 02-MAR-12 12.44.34.000000 PM +01:00
e 07-MAR-12 12.44.34.529572 PM +01:00
SQL> with data as (
select 'a' as a, systimestamp-30 as timestamp from dual
union all
select 'b', systimestamp-20 from dual
union all
select 'c', systimestamp-10 from dual
union all
select 'd', systimestamp-5 from dual
union all
select 'e', systimestamp from dual)
select * from data where timestamp > to_date('2012-03-01 10:00', 'YYYY-MM-DD HH24:MI');
A TIMESTAMP
d 02-MAR-12 12.45.23.000000 PM +01:00
e 07-MAR-12 12.45.23.074227 PM +01:00
and the execution was very fast. Even if I remove the two latest records, so that the query returns no rows, the execution is still fast.
How many rows does your table contain? What Oracle version are you on?
//Johan
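For reference, a minimal sketch of the usual fix, assuming an index can be added and using the table and column names from the question (the index name below is made up). Keeping the date column bare on one side of the comparison lets the range predicate use the index:
-- hypothetical index name; the column is assumed to really be of type DATE
create index testtable_ts_ix on testTable (timestamp);

select *
from   testTable
where  timestamp between to_date('03.02.2012 10:00', 'DD.MM.YYYY HH24:MI')
                     and sysdate;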

Similar Messages

  • The to_date() error in where clause of proc code

    In a proc code, firstly I use "SELECT to_char(f_date_1, 'DDMONYY') into :char_date FROM TBL_1;" to get the value of f_date_1.
    Then I use "SELECT ID FROM TBL_2 WHERE f_date_2 = :char_date;" to get the value of ID according char_date.
    In table, the type of f_date_1 and f_date_2 is DATE.
    char_date is "char char_date[10]".
    My question is:
    1. If I use WHERE f_date_2 = to_date(:char_date, 'DDMONYY'), it will prompt: "-1861, ORA-01861: literal does not match format string".
    2. If I use WHERE f_date_2 = :char_date, it will work well.
    Since the type of f_date_2 is DATE and the type of char_date is CHAR, why does to_date(:char_date, 'DDMONYY') have problems?
    Thanks.

    I was not able to duplicate the error you got, but I did spot something that may be affecting your results:
    *Set up test table using HR.EMPLOYEES*
    create table TEST_DATE as
    select employee_id, last_name, hire_date
    from employees;
    var char_date char(10)
    begin
    select to_char(hire_date,'DDMONYY') into :char_date
    from test_date
    where rownum = 1;
    end;
    /
    *case 1*
    select *
    from test_date
    where hire_date = to_date(:char_date,'DDMONYY');
    *case 2*
    select *
    from test_date
    where hire_date = :char_date; --> this assumes an implicit char-to-date conversion
    While I got case 2 to work, it assumes that the database is configured to recognize the character mask as a valid string format. If you tried running that query on a different database, this query may fail as well.
    To verify why case 1 may not have worked, do a quick check:
    select :char_date from dual
    It is possible that you may have applied a different character mask between your first query:
    >
    SELECT to_char(f_date_1, 'DDMONYY') into :char_date FROM TBL_1;
    >
    and the WHERE clause of your second query:
    >
    WHERE f_date_2 = to_date(:char_date, 'DDMONYY')
    >
    RP
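    A quick way to act on that check (a sketch, not part of RP's original reply) is to print exactly what the bind variable holds and then retry the explicit conversion with the same mask, reusing the table and column names from the question; the ROWNUM restriction is only added here so the block runs against a multi-row table. If the string was built with a different mask upstream, ORA-01861 is the expected result:
    var char_date char(10)
    begin
      -- the mask here must match the one used later in the WHERE clause
      select to_char(f_date_1, 'DDMONYY') into :char_date
      from TBL_1
      where rownum = 1;
    end;
    /
    print char_date

    select ID
    from TBL_2
    where f_date_2 = to_date(:char_date, 'DDMONYY');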

  • To_Date function in the Where Clause

    Hello All,
    I'm having an issue using the to_date function that has me quite perplexed.
    I have two varchar2 fields, one with a date value in the format Mon, DD YYYY, the other has a time value in the format HH:MI PM.
    When I run my query one of the columns I retrieve looks like this TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM'). The two fields are concatenated together and converted to a date. This works fine.
    My problem occurs when I attempt to apply the same logic to the where clause of the aforementioned query. e.g. when I add the following criteria to my query and TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM') <= sysdate I get an ORA-01843: not a valid month error.
    To further illustrate my problem here are the two queries:
    Select d4.adate, e4.atime, TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM')
    from ....
    where ....
    The above query works.
    Select d4.adate, e4.atime, TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM')
    from ....
    where ....
    and TO_DATE (d4.adate || e4.atime, 'Mon DD, YYYYHH:MI PM') <= sysdate
    The second query does not work.
    The tables used and the limiting criteria are identical, except for the last one.
    Does anyone have any ideas why this could be happening?
    er

    Hello,
    Check this out. It does work. Do cut and paste sample data from your tables.
    data from your tables.
    SQL> desc test
    Name Null? Type
    ID NUMBER
    DDATE VARCHAR2(20)
    DTIME VARCHAR2(20)
    SQL> select * from test;
    ID DDATE DTIME
    1 Jan, 10 2006 12:32 PM
    2 Mar, 11 2005 07:10 AM
    3 Apr, 13 2006 03:12 AM
    4 Nov, 15 2003 11:22 PM
    5 Dec, 20 2005 09:12 AM
    6 Oct, 30 2006 10:00 AM
    7 Jan, 10 2006 12:32 PM
    8 Apr, 11 2005 07:10 AM
    9 May, 13 2006 03:12 AM
    10 Sep, 15 2003 11:22 PM
    11 Oct, 20 2005 09:12 AM
    12 Dec, 30 2006 10:00 AM
    12 rows selected.
    SQL> select id, ddate, dtime,
      2  to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM') AA,
      3  to_char(to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM'),'Mon, DD YYYYHH:MI PM') BB
      4  from test;
    ID  DDATE         DTIME     AA         BB
     1  Jan, 10 2006  12:32 PM  10-JAN-06  Jan, 10 200612:32 PM
     2  Mar, 11 2005  07:10 AM  11-MAR-05  Mar, 11 200507:10 AM
     3  Apr, 13 2006  03:12 AM  13-APR-06  Apr, 13 200603:12 AM
     4  Nov, 15 2003  11:22 PM  15-NOV-03  Nov, 15 200311:22 PM
     5  Dec, 20 2005  09:12 AM  20-DEC-05  Dec, 20 200509:12 AM
     6  Oct, 30 2006  10:00 AM  30-OCT-06  Oct, 30 200610:00 AM
     7  Jan, 10 2006  12:32 PM  10-JAN-06  Jan, 10 200612:32 PM
     8  Apr, 11 2005  07:10 AM  11-APR-05  Apr, 11 200507:10 AM
     9  May, 13 2006  03:12 AM  13-MAY-06  May, 13 200603:12 AM
    10  Sep, 15 2003  11:22 PM  15-SEP-03  Sep, 15 200311:22 PM
    11  Oct, 20 2005  09:12 AM  20-OCT-05  Oct, 20 200509:12 AM
    12  Dec, 30 2006  10:00 AM  30-DEC-06  Dec, 30 200610:00 AM
    12 rows selected.
    SQL> select id, ddate, dtime, to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM')
      2  from test
      3  where id > 3
      4  and to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM') <= trunc(sysdate);
    ID  DDATE         DTIME     TO_DATE(D
     4  Nov, 15 2003  11:22 PM  15-NOV-03
     5  Dec, 20 2005  09:12 AM  20-DEC-05
     7  Jan, 10 2006  12:32 PM  10-JAN-06
     8  Apr, 11 2005  07:10 AM  11-APR-05
    10  Sep, 15 2003  11:22 PM  15-SEP-03
    11  Oct, 20 2005  09:12 AM  20-OCT-05
    6 rows selected.
    SQL> select id, ddate, dtime, to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM')
      2  from test
      3  where id > 3
      4  and to_date(ddate||dtime,'Mon, DD YYYYHH:MI PM') <= sysdate;
    ID  DDATE         DTIME     TO_DATE(D
     4  Nov, 15 2003  11:22 PM  15-NOV-03
     5  Dec, 20 2005  09:12 AM  20-DEC-05
     7  Jan, 10 2006  12:32 PM  10-JAN-06
     8  Apr, 11 2005  07:10 AM  11-APR-05
    10  Sep, 15 2003  11:22 PM  15-SEP-03
    11  Oct, 20 2005  09:12 AM  20-OCT-05
    6 rows selected.
    -Sri
    Sorry Sri, but I fail to see what you mean. How is what you're doing any different from what I'm doing?
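    A likely cause of this symptom (a sketch, not from the replies above): Oracle does not guarantee the order in which WHERE predicates are evaluated, so moving the TO_DATE into the WHERE clause can force the conversion to run against rows the select list alone never touched, and a single malformed adate/atime string then raises ORA-01843. One way to hunt for such rows, reusing Sri's test table and a hypothetical safe_to_date wrapper:
    create or replace function safe_to_date (p_str in varchar2, p_fmt in varchar2)
      return date
    is
    begin
      return to_date(p_str, p_fmt);
    exception
      when others then
        return null;   -- unparsable input: flag it with NULL instead of raising
    end;
    /
    -- rows whose concatenated string does not match the expected mask
    select id, ddate, dtime
    from   test
    where  safe_to_date(ddate||dtime, 'Mon, DD YYYYHH:MI PM') is null;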

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom coded WHERE clauses. (Basically adding additional columns to the R3ta default column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down, in some cases, tables export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after using the new WHERE clause, it looked like performance gains were dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
    However, after 2 hours, the remaining CE1OC01 split packages have shown no improvement. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slowly.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated excessive sorting.
    I checked the access path of all the CE1OC01 packages, through EXPLAIN, and they all access the same index to return results. The execution time in EXPLAIN returns similar times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
          0 SELECT STATEMENT ( Estimated Costs =  8.448E+06 [timerons] )
      |
      ---      1 RETURN
          |
          ---      2 FETCH CE1OC01
              |
              ------   3 IXSCAN CE1OC01~4 #key columns:  2
    query execution time [millisec]            |       333
    uow elapsed time [microsec]                |   429,907
    total user CPU time [microsec]             |         0
    total system cpu time [microsec]           |         0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access path side of things or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but those are not as dramatic as this particular table.
    Another idea to test is to try and export only 5 parts of the table at a time, perhaps there is a throughput or logical limitation when all 20 of the exports are running at the same time. Or create a single column index on BELNR (default R3ta column) and see if that shows any improvement.
    Does anyone have any ideas on why some of the table moves fast but the rest of it moves slowly?
    We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if perhaps the index is less fragmented in that range, a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a Cluster Ratio of 54%, so, perhaps for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary key index has a Cluster Ratio of 86%.
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding the SORTHEAP, we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. Looks like for this run, and the previous run, SORTHEAP has remained available and has not overrun. That's what was so confusing, because this looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan (a sketch of one way to force such a scan follows after the throughput figures below). Having always been trained to eliminate full tablescans, we found it counterintuitive at first, but given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right: in this case the index adds too much additional overhead, especially since our Cluster Ratio was terrible (in the 50% range), so the index was definitely working against us, bouncing all over the place to pull the data out.
    We're going to look at some of our other long running tables and see if this technique improves runtimes on them as well.
    Thanks so much, that helped us out tremendously. We will verify the data from source to target matches up 1 for 1 by running a consistency check.
    Look at the throughput difference between the previous run and the current run:
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91
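    Ralph's exact WHERE technique is not quoted in this thread. One commonly used way to get the same effect on DB2, shown purely as an illustration and not necessarily what was used here, is to wrap the leading indexed column in an expression so it can no longer act as an index start/stop key, which pushes the optimizer toward a table scan for each split:
    -- concatenating an empty string makes the MANDT predicate non-indexable,
    -- so the split is read with a scan instead of hopping through CE1OC01~4
    SELECT * FROM CE1OC01
    WHERE  MANDT || '' = '212'
    AND    "BELNR" > '0124727994' AND "BELNR" <= '0131810250'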

  • Very slow query when specifying where clause

    Hi folks:
    I am using SAS with SQL Passthrough for Oracle, however that may be irrelevant to the problem.
    I have a query that I am subsetting based on a series of where clauses, and when I put the clauses into the query, it slows it down tremendously.
    I can pull the entire table and subset it later on in SAS and it is much faster.
    Can someone explain why the SQL subsetting logic runs slower than pulling 18M rows and subsetting later?
    Here is my query - hope this is enough information
    select
    a.loan_number
    , a.CFLRC_CD
    , a.cservno
    , b.cprj_typ
    , a.occ_stat
    , c.state2
    , c.city2
    , e.hstatus
    , e.hcattype
    , e.dqind
    , e.mthsdel
    from
    VEW_LL_ln_1 a
    , vew_ll_ln_2 b
    , VEW_LL_PROP_GEO_CD c
    , vew_ll_ln_actvy_1 e
    where
    a.loan_number = b.loan_number
    and a.loan_number = c.loan_number
    and a.loan_number = e.loan_number
    and e.act_dte = '01-feb-2011'
    and not (occ_stat in ('2','3'))
    and hcattype <> '800'
    and dqind not in ('E','F','G','H')
    and cprj_typ not in ('1','2','3','4','5','16','17','18','19','20','21','22')
    and cflrc_cd <> '2'
    order by
    a.loan_number
    Thanks

    You're in the wrong forum. This is the forum for the SQL Developer tool. You will get better answers in the SQL and PL/SQL forum.

  • Problem with to_date('date_char') in a where clause

    I'm a little confused by this PL/SQL error.
    This code works...
    select to_date(value,'dd/mm/yy') from kip_lov where domain='MODULE START1'
    If I try to put the "to_date(value,'dd/mm/yy')" expression in the where clause I get an invalid year error,
    select to_date(value,'dd/mm/yy') from kip_lov where domain='MODULE START1'
    AND to_date(value,'dd/mm/yy') = SYSDATE
    "Error report:
    SQL Error: ORA-01841: (full) year must be between -4713 and +9999, and not be 0"
    The variable "value" is a varchar2 in the format dd/mm/yy.
    Note: the following code also works fine,
    select to_date(value,'dd/mm/yy') - sysdate from kip_lov where domain='MODULE START1'
    I have modified my lov table so that it uses the date type now and it works fine. Just curious as to why the above example does not work.
    Sam.

    Check your table description and values first: is there any dd/mm/yy vs mm/dd/yy conflict?
    A scenario which runs.
    SQL> desc trya
    Name Null? Type
    UPDATED_ON DATE
    UPD VARCHAR2(8)
    SQL> select * from trya
    2 ;
    UPDATED_O UPD
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    UPDATED_O UPD
    31-JUL-06 31/07/06
    31-JUL-06 31/07/06
    13 rows selected.
    SQL> SELECT * FROM trya WHERE TO_CHAR(TO_DATE(upd,'dd/mm/yy'))=SYSDATE;
    no rows selected
    SQL> SELECT * FROM trya WHERE TO_DATE(upd,'dd/mm/yy')=SYSDATE;
    no rows selected
    SQL>
    No error with either syntax.
    rgds,
    Rup
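    As a sketch (not from Rup's reply): Oracle does not guarantee the order in which WHERE predicates are evaluated, so TO_DATE can be applied to VALUE strings from other domains that are not dates at all, which typically produces ORA-01841 only once the conversion moves into the WHERE clause. Wrapping the conversion in a CASE limits it to the intended rows, and a day-range comparison replaces = SYSDATE, which would otherwise also compare the time of day:
    -- only rows from the right domain are ever converted
    select to_date(value, 'dd/mm/yy')
    from   kip_lov
    where  domain = 'MODULE START1'
    and    to_date(case when domain = 'MODULE START1' then value end, 'dd/mm/yy') >= trunc(sysdate)
    and    to_date(case when domain = 'MODULE START1' then value end, 'dd/mm/yy') <  trunc(sysdate) + 1;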

  • Query slow down when added a where clause

    I have a procedure that has a performance issue, so I copied some of the query and ran it in SQL*Plus to try to spot which join causes the problem, but I got a result that I can't figure out. I have a query like the one below:
    Select Count(a.ID) From TableA a
    -- INNER JOIN other tables
    WHERE a.TypeID = 2;
    TableA has 140000 records. When the where clause is not added, the count returns quite quickly, but if I add the where clause, the query slows down and seems to never return, so I have to kill my SQL*Plus session. TableA has an index on TypeID, and TypeID is a NUMBER type. When TableA had 3000 records, the procedure returned very quickly, but it slows down and hangs when TableA contains 140000 records. Any idea why this slows down the query?
    Also, TypeID is a foreign key to another table (TableAType), so the query above can be written as:
    Select Count(a.ID) From TableA a
    -- INNER JOIN other tables
    INNER JOIN TableAType atype ON a.TypeID = atype.ID
    WHERE atype.Name = 'typename';
    TableAType is a small table that contains fewer than 100 records. In this case, would the second query be more efficient than the first query?
    Any suggestions are welcome, thanks in advance...
    Message was edited by:
    user500168

    TableA now has 230000 records and 28000 of them have TypeID 2.
    I haven't used the hint yet, but thank you for your reply, which led me to run a query to check how many records in TableA have TypeID 2. When doing this, it seemed pretty fast. So I began with the select count for TableA only and gradually added tables to the join, and the query seems pretty fast as long as TableA is the first table selected from.
    Before, in my query, TableA was the second table in the join; there is another table (which is large as well, but not as large as TableA) before TableA. So I think this is why it ran slowly before. I was not at work yesterday, so the query given in my post is based on my rough memory and I forgot to mention that another table is joined before TableA; really sorry about that.
    I think I learned a lesson here: the largest table needs to be at the beginning of the select statement...
    Thank you very much everyone.
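    For what it's worth, with the cost-based optimizer the join order is generally chosen from statistics rather than from the order of tables in the FROM clause, so a sketch like the following (using the names from the question) is usually a more reliable way to see what the database actually did than reordering the query:
    explain plan for
    select count(a.ID)
    from   TableA a
    where  a.TypeID = 2;

    -- show the chosen plan, including join order and index usage
    select * from table(dbms_xplan.display);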

  • Where clause causing a query to slow down in a cursor

    I have a table "the_table" with about 10,000 rows and four columns (id, description, inventory, and category).
    This query returns all rows immediately:
        select id
             , description
             , inventory
        from the_table
        where category =  nvl(null, category);
    However, when it is put into a cursor, like this:
    (p_user_category is a user-defined variable which can be null if the user wants all of the rows)
        c_results  sys_refcursor;
        open c_results for
            select id
                 , description
                 , inventory
            from the_table
            where category =  nvl(p_user_category, category);
        fetch c_results into v_id, v_description, v_inventory;
        close c_results;
    then it takes five minutes to return even just one row when p_user_category is null.
    However, if I change the where clause to:
        where (p_user_category is null or category = p_user_category)
    then it returns all rows immediately.
    It started being a problem right around the time of the most recent update - I am running 11.2.0.2.0 (64-bit production).

    The optimizer is smart enough to recognise that null is null, so "nvl(null,category)" collapses to "category", and your predicate "category = nvl(null, category)" is transformed to "category is not null" (if it's allowed to be null) or simply disappears - so the SQL test is not the same as the PL/SQL run.
    In PL/SQL your manual rewrite is not logically the same as the original unless you have declared category to be non-null, because your first disjunct will allow rows with a null category to be reported, while the original query would lose them.
    Check the execution plans for the SQL and the PL/SQL versions.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Now on Twitter: @jloracle
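    If category really can be null, here is a sketch (not from the reply above) of a rewrite that keeps the fast OR form while matching the original nvl() predicate exactly, including its treatment of null category values:
    open c_results for
        select id
             , description
             , inventory
        from the_table
        -- true when the user passed no category (and the row has one),
        -- or when the row matches the category that was passed
        where (p_user_category is null and category is not null)
           or category = p_user_category;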

  • Simple finder results in huge slow sql query -- why WHERE clause?

    Hi all,
    wondering about the poor performance of my finder methods I took a look into the SQL that went to the database.
    "SELECT DISTINCT OBJECT(p) FROM Person p"
    results in
    SELECT PersonID, FirstName, LastName, [etc.] FROM person WHERE (PersonID='5354B71CC0A80113008BD3BB11A57FA1') OR (PersonID='5354B893C0A80113008BD3BB3C6918FF') OR (PersonID='5354B910C0A80113008BD3BBC83093BE') OR [etc.]
    where all 2000 primary keys are listed. This query costs ~ 4 seconds on my box.
    A query without WHERE costs ~ 0 seconds. So why this "WHERE" clause?
    Is this a JBoss specific problem? I am using version 3.2.3.
    thanx!
    Marcus

    It's not a bug - it's how JBoss implements its own "findAll" methodology. Whether it's a good implementation or not is open to debate.
    Really strange. Anyway, all my performance problems disappeared after setting the read-ahead strategy to "on-find", although I don't fully understand why.
    Marcus

  • Date functions in WHERE clause? HELP

    The following two queries are identical except for how I supply the date values in the where clause, yet the first query, using a custom my_date function, runs 30x slower than the one using the TO_DATE() function. Both return DATE types... any reason for the difference?
    SELECT * from outcomes
    WHERE start_time >=fn_my_date('START_MONTH')
    AND start_time < fn_my_date('END_MONTH')+1
    -- This runs 30x faster--
    SELECT * from outcomes
    WHERE start_time >=TO_DATE('08/01/2001','MM/DD/YYYY')
    AND start_time < TO_DATE('08/31/2001','MM/DD/YYYY')+1
    On the flip side, I've also experienced queries running slower using the TO_DATE() function vs LAST_DAY(sysdate) for equivalent dates.

    I haven't seen the message coming up when using LENGTH or SUBSTR, but every time I connect to a database or attempt to change my preferences (including the "NLS Parameters: Comp" preference), this appears.
    When you attempt to set the preference, you should be able to switch this from ANSI to something else, but SQL Developer is ignoring the preference setting (which I think is a bug).
    A way to set the NLS_COMP to something else is to use something like "alter session set nls_comp = BINARY;". Note that changing any preference after that appears to overwrite this and you need to do this for each new connection you start.
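    Back to the original question: a frequent reason a custom function in the WHERE clause is much slower than a TO_DATE literal is that the function may be re-executed for every row and its result is opaque to the optimizer. A sketch of the scalar-subquery workaround, using the names from the question (whether it helps also depends on how fn_my_date is declared):
    -- the scalar subqueries are evaluated once and their results cached,
    -- instead of calling fn_my_date for each row scanned
    SELECT *
    FROM   outcomes
    WHERE  start_time >= (SELECT fn_my_date('START_MONTH') FROM dual)
    AND    start_time <  (SELECT fn_my_date('END_MONTH') FROM dual) + 1;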

  • Date in where clause without Trunc

    Hi,
    How can I handle a date comparison in a where clause without TRUNC? Right now I am using TRUNC and it is very slow.
    Select ColumnA,  ColumnB
    From Sample
    Where trunc(created_date,'DD-MON-YY') = '15-NOV-2011';
    CREATE TABLE SAMPLE
    (
         ColumnA NUMBER(9,0)           NOT NULL,
         ColumnB VARCHAR2(255)      NOT NULL,
         created_date                DATE
       );

    Hi,
    Here's one way:
    SELECT  ColumnA,  ColumnB
    FROM      Sample
    WHERE      created_date     >= TO_DATE ('15-NOV-2011', 'DD-MON-YYYY')
    AND     created_date     <  TO_DATE ('15-NOV-2011', 'DD-MON-YYYY') + 1
    ;
    If you have an index on created_date, then this query will be able to use that index, because created_date is alone on one side of the operator.
    Another reason why this will be faster is that it only has to call TO_DATE twice, regardless of how many rows are in the table. The original query had to call TRUNC for every row.
    Don't compare a DATE (such as the result of TRUNC) to a string (such as '15-NOV-2011'). You're liable to get a run-time error, or, if you're not so lucky, unexpected results.
    Edited by: Frank Kulash on Nov 15, 2011 5:22 PM
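    If rewriting the predicate is not an option, another route (a sketch, assuming an index may be added; the index name below is made up) is a function-based index, so that a TRUNC-style comparison can also use an index:
    -- hypothetical function-based index on the truncated date
    CREATE INDEX sample_created_trunc_ix ON Sample (TRUNC(created_date));

    SELECT ColumnA, ColumnB
    FROM   Sample
    WHERE  TRUNC(created_date) = TO_DATE('15-NOV-2011', 'DD-MON-YYYY');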

  • Response time of query utterly upside down because of small where clause change

    Hello,
    I'm wondering why a small change on a where clause in a query has a dramatic impact on its response time.
    Here is the query, with its plan and a few details:
    select * from (
    SELECT xyz_id, time_oper, ...
         FROM (SELECT 
                        d.xyz_id xyz_id,
                        TO_CHAR (di.time_operation, 'DD/MM/YYYY') time_oper,
                        di.time_operation time_operation,
                        UPPER (d.delivery_name || ' ' || d.delivery_firstname) custname,
                        d.ticket_language ticket_language, d.payed,
                        dsum.delivery_mode delivery_mode,
                        d.station_delivery station_delivery,
                        d.total_price total_price, d.crm_cust_id custid,
                        d.bene_cust_id person_id, d.xyz_num, dpe.ers_pnr ers_pnr,
                        d.delivery_name,
                        TO_CHAR (dsum.first_travel_date, 'DD/MM/YYYY') first_traveldate,
                        d.crm_company custtype, UPPER (d.client_name) partyname,
                        getremark(d.xyz_num) remark,
                        d.client_app, di.work_unit, di.account_unit,
                        di.distrib_code,
                        UPPER (d.crm_name || ' ' || d.crm_firstname) crm_custname,
                       getspecialproduct(di.xyz_id) specialproduct
                   FROM xyz d, xyz_info di, xyz_pnr_ers dpe, xyz_summary dsum
                  WHERE d.cancel_state = 'N'
                 -- AND d.payed = 'N'
                    AND dsum.delivery_mode NOT IN ('DD')
                    AND dsum.payment_method NOT IN ('AC', 'AG')
                    AND d.xyz_blocked IS NULL
                    AND di.xyz_id = d.xyz_id
                    AND di.operation = 'CREATE'
                    AND dpe.xyz_id(+) = d.xyz_id
                    AND EXISTS (SELECT 1
                                  FROM xyz_ticket dt
                                 WHERE dt.xyz_id = d.xyz_id)
                    AND dsum.xyz_id = di.xyz_id
               ORDER BY di.time_operation DESC)
        WHERE ROWNUM < 1002
    ) view
    WHERE view.DISTRIB_CODE in ('NS') AND view.TIME_OPERATION > TO_DATE('20/5/2013', 'dd/MM/yyyy')
    plan with "d.payed = 'N'" (no rows, *extremely* slow):
    | Id  | Operation                          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                  |  1001 |  4166K| 39354   (1)| 00:02:59 |
    |*  1 |  VIEW                              |                  |  1001 |  4166K| 39354   (1)| 00:02:59 |
    |*  2 |   COUNT STOPKEY                    |                  |       |       |            |          |
    |   3 |    VIEW                            |                  |  1001 |  4166K| 39354   (1)| 00:02:59 |
    |   4 |     NESTED LOOPS OUTER             |                  |  1001 |   130K| 39354   (1)| 00:02:59 |
    |   5 |      NESTED LOOPS SEMI             |                  |   970 |   111K| 36747   (1)| 00:02:47 |
    |   6 |       NESTED LOOPS                 |                  |   970 |   104K| 34803   (1)| 00:02:39 |
    |   7 |        NESTED LOOPS                |                  |   970 | 54320 | 32857   (1)| 00:02:30 |
    |*  8 |         TABLE ACCESS BY INDEX ROWID| XYZ_INFO         |    19M|   704M| 28886   (1)| 00:02:12 |
    |   9 |          INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5     | 36967 |       |   296   (2)| 00:00:02 |
    |* 10 |         TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY      |     1 |    19 |     2   (0)| 00:00:01 |
    |* 11 |          INDEX UNIQUE SCAN         | SB11_DSMM_XYZ_UK |     1 |       |     1   (0)| 00:00:01 |
    |* 12 |        TABLE ACCESS BY INDEX ROWID | XYZ              |     1 |    54 |     2   (0)| 00:00:01 |
    |* 13 |         INDEX UNIQUE SCAN          | XYZ_PK           |     1 |       |     1   (0)| 00:00:01 |
    |* 14 |       INDEX RANGE SCAN             | DNTI_NI1         |    32M|   249M|     2   (0)| 00:00:01 |
    |  15 |      TABLE ACCESS BY INDEX ROWID   | XYZ_PNR_ERS      |     1 |    15 |     4   (0)| 00:00:01 |
    |* 16 |       INDEX RANGE SCAN             | DNPE_XYZ         |     1 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
      1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
      2 - filter(ROWNUM<1002)
      8 - filter("DI"."OPERATION"='CREATE')
    10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
    11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
    12 - filter("D"."PAYED"='N' AND "D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
                  ^^^^^^^^^^^^^^
    13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
    14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
    16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
    plan with "d.payed = 'N'" (+/- 450 rows, less than two minutes):
    | Id  | Operation                          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                  |  1001 |  4166K| 58604   (1)| 00:04:27 |
    |*  1 |  VIEW                              |                  |  1001 |  4166K| 58604   (1)| 00:04:27 |
    |*  2 |   COUNT STOPKEY                    |                  |       |       |            |          |
    |   3 |    VIEW                            |                  |  1002 |  4170K| 58604   (1)| 00:04:27 |
    |   4 |     NESTED LOOPS OUTER             |                  |  1002 |   130K| 58604   (1)| 00:04:27 |
    |   5 |      NESTED LOOPS SEMI             |                  |  1002 |   115K| 55911   (1)| 00:04:14 |
    |   6 |       NESTED LOOPS                 |                  |  1476 |   158K| 52952   (1)| 00:04:01 |
    |   7 |        NESTED LOOPS                |                  |  1476 | 82656 | 49992   (1)| 00:03:48 |
    |*  8 |         TABLE ACCESS BY INDEX ROWID| XYZ_INFO         |    19M|   704M| 43948   (1)| 00:03:20 |
    |   9 |          INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5     | 56244 |       |   449   (1)| 00:00:03 |
    |* 10 |         TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY      |     1 |    19 |     2   (0)| 00:00:01 |
    |* 11 |          INDEX UNIQUE SCAN         | AAAA_DSMM_XYZ_UK |     1 |       |     1   (0)| 00:00:01 |
    |* 12 |        TABLE ACCESS BY INDEX ROWID | XYZ              |     1 |    54 |     2   (0)| 00:00:01 |
    |* 13 |         INDEX UNIQUE SCAN          | XYZ_PK           |     1 |       |     1   (0)| 00:00:01 |
    |* 14 |       INDEX RANGE SCAN             | DNTI_NI1         |    22M|   168M|     2   (0)| 00:00:01 |
    |  15 |      TABLE ACCESS BY INDEX ROWID   | XYZ_PNR_ERS      |     1 |    15 |     4   (0)| 00:00:01 |
    |* 16 |       INDEX RANGE SCAN             | DNPE_XYZ         |     1 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
       2 - filter(ROWNUM<1002)
       8 - filter("DI"."OPERATION"='CREATE')
      10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
      11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
      12 - filter("D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
      13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
      14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
      16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
    XYZ.PAYED values breakdown:
    P   COUNT(1)
    Y   12202716
    N    9430207
    tables nb of records:
    TABLE_NAME           NUM_ROWS
    XYZ                  21606776
    XYZ_INFO            186301951
    XYZ_PNR_ERS           9716471
    XYZ_SUMMARY          21616607
    Everything that comes inside the "select * from(...) view" parentheses is defined in a view. We've noticed that the line "AND d.payed = 'N'" (commented out above) is the guilty clause: the query takes one or two seconds to return between 400 and 500 rows if this line is removed; when it is included in the query, the response time switches to *hours* (sic!) and the result set is then empty (no rows returned). The plan is exactly the same whether "d.payed = 'N'" is added or removed, I mean the number of steps, access paths, join order etc.; only the rows/bytes/cost column values change, as you can see.
    We've found no other way of solving this perf issue but by taking out this "d.payed = 'N'" condition and setting it outside the view along with view.DISTRIB_CODE and view.TIME_OPERATION.
    But we would like to understand why such a small change on the XYZ.PAYED column turns everything upside down that much, and we'd like to be able to tell the optimizer to perform this check on payed = 'N' by itself in the end, just like we did, through the use of a hint if possible...
    Anybody ever encountered such a behaviour before ? Do you have any advice regarding the use of a hint to reach the same response time as that we've got by setting the payed = N condition outside of the view definition ??
    Thanks a lot in advance.
    Regards,
    Seb

    I am really sorry I couldn't get back earlier to this forum...
    Thanks to you all for your answers.
    First I'd just like to correct a small mistake I made, when writing
    "the query takes one or two seconds": I meant one or 2 *minutes*. Sorry.
    > What table/columns are indexed by "DNTI_NI1"?
    aaaa.dnti_ni1 is an index ON aaaa.xyz_ticket(xyz_id, ticket_status)
    > And what are the indexes on xyz table?
    Too many:
    XYZ_ARCHIV_STATE_IND           ARCHIVE_STATE
    XYZ_BENE_CUST_ID_IND           BENE_CUST_ID
    XYZ_BENE_TTL_IND               BENE_TTL
    XYZ_CANCEL_STATE_IND           CANCEL_STATE
    XYZ_CLIENT_APP_NI              CLIENT_APP
    XYZ_CRM_CUST_ID_IND            CRM_CUST_ID
    XYZ_DELIVE_MODE_IND            DELIVERY_MODE
    XYZ_DELIV_BLOCK_IND            DELIVERY_BLOCKED
    XYZ_DELIV_STATE_IND            DELIVERY_STATE
    XYZ_XYZ_BLOCKED                XYZ_BLOCKED
    XYZ_FIRST_TRAVELDATE_IND       FIRST_TRAVELDATE
    XYZ_MASTER_XYZ_IND             MASTER_XYZ_ID
    XYZ_ORG_ID_NI                  ORG_ID
    XYZ_PAYMT_STATE_IND            PAYMENT_STATE
    XYZ_PK                         XYZ_ID
    XYZ_TO_PO_IDX                  TO_PO
    XYZ_UK                         XYZ_NUM
    For ex. XYZ_CANCEL_STATE_IND on CANCEL_STATE seems superfluous to me, as the column may only contain Y or N (or be null)...
    > Have you traced both cases to compare statistics? What differences did it reveal?
    Yes, but it only shows more of *everything* (more table blocks accessed, the same for index blocks, for almost all objects involved) for the slowest query!
    Grepping WAIT in the two trace files made for each statement and counting the object ID accesses shows that the quicker query requires far fewer I/Os; the slowest one overall needs many more blocks to be read (except for the indexes DNSG_NI1 or DNPE_XYZ, for example). Below I replaced obj# with the table/index name; the first column is the figure showing how many times the object was accessed in the trace file (I ctrl-C'ed my second execution of course, so the figures should be much higher!):
    [login.hostname] ? grep WAIT OM-quick.trc|...|sort|uniq -c
        335 XYZ_SUMMARY
      20816 AAAA_DSMM_XYZ_UK (index on xyz_summary.xyz_id)
        192 XYZ
       4804 XYZ_INFO
        246 XYZ_SEGMENT
          6 XYZ_REMARKS
         63 XYZ_PNR_ERS
        719 XYZ_PK           (index on xyz.xyz_id)
       2182 DNIN_IDX_NI5     (index on xyz.xyz_id)
        877 DNSG_NI1         (index on xyz_segment.xyz_id, segment_status)
        980 DNTI_NI1         (index on xyz_ticket.xyz_id, ticket_status)
        850 DNPE_XYZ         (index on xyz_pnr_ers.xyz_id)
    [login.hostname] ? grep WAIT OM-slow.trc|...|sort|uniq -c
       1733 XYZ_SUMMARY
      38225 AAAA_DSMM_XYZ_UK  (index on xyz_summary.xyz_id)
       4359 XYZ
      12536 XYZ_INFO
         65 XYZ_SEGMENT
         17 XYZ_REMARKS
         20 XYZ_PNR_ERS
       8598 XYZ_PK
       7406 DNIN_IDX_NI5
         29 DNSG_NI1
       2475 DNTI_NI1
         27 DNPE_XYZ
    The overwhelmingly dominant wait event is by far 'db file sequential read':
    [login.hostname] ? grep WAIT OM-*elect.txt|cut -d"'" -f2|sort |uniq -c
         36 SQL*Net message from client
         38 SQL*Net message to client
    107647 db file sequential read
          1 latch free
          1 latch: object queue header operation
          3 latch: session allocation
    > It will be worth knowing the estimations...
    It shows the same plan with a higher cost when PAYED = 'N' is added:
    SQL> select * from sb11.dnr d
      2* where d.dnr_blocked IS NULL and d.cancel_state = 'N'
    SQL> /
    | Id  | Operation                   | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                      |  1002 |   166K|    40   (3)| 00:00:01 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| XYZ                  |  1002 |   166K|    40   (3)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | XYZ_CANCEL_STATE_IND |       |       |     8   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("D"."XYZ_BLOCKED" IS NULL)
       2 - access("D"."CANCEL_STATE"='N')
    SQL> select * from sb11.dnr d
      2  where d.dnr_blocked IS NULL and d.cancel_state = 'N'
      3* and d.payed = 'N'
    SQL> /
    Execution Plan
    Plan hash value: 1292668880
    | Id  | Operation                   | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                      |  1001 |   166K|    89   (3)| 00:00:01 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| XYZ                  |  1001 |   166K|    89   (3)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | XYZ_CANCEL_STATE_IND |       |       |    15   (0)| 00:00:01 |
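    Since the plan shape is identical and only the estimates differ, one way to see where the estimates break down (a sketch; the simplified query below only illustrates the technique against the thread's table names, and the full view query can be run the same way) is to execute each variant once with rowsource statistics enabled and compare E-Rows against A-Rows line by line:
    select /*+ gather_plan_statistics */ count(*)
    from   xyz d, xyz_info di
    where  di.xyz_id = d.xyz_id
    and    d.cancel_state = 'N'
    and    d.xyz_blocked is null
    and    d.payed = 'N'
    and    di.operation = 'CREATE';

    -- E-Rows vs A-Rows shows which step the optimizer misestimates
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));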

  • Urgent: Performance problem with where clause using IN and an OR condition

    Select statement is:
    select fl.feed_line_id
    from ap_expense_feed_lines_all fl
    where ((:1 is not null and
    fl.feed_line_id in (select distinct r2.object_id
    from xxdl_pcard_wf_routing_lists r2,
         per_people_f hr2
    where upper(hr2.full_name) like upper(:1||'%')
              and hr2.person_id = r2.person_id
    and r2.fyi_list is null
              and r2.sequence_number <> 0))
    or
    (:1 is null))
    If I modify the statement to remove the "or (:1 is null))" part at the bottom of the where clause, it returns in .16 seconds. If I modify the statement to only contain the "(:1 is null))" part of the where clause, it returns in .02 seconds. With the whole statement above, it returns in 477 seconds. Anyone have any suggestions?
    Explain plan for the whole statement is:
    (1) SELECT STATEMENT CHOOSE
    Est. Rows: 10,960 Cost: 212
    FILTER
    (2) TABLE ACCESS FULL AP.AP_EXPENSE_FEED_LINES_ALL [Analyzed]
    (2) Blocks: 8,610 Est. Rows: 10,960 of 209,260 Cost: 212
    Tablespace: APD
    (6) TABLE ACCESS BY INDEX ROWID HR.PER_ALL_PEOPLE_F [Analyzed]
    (6) Blocks: 4,580 Est. Rows: 1 of 85,500 Cost: 2
    Tablespace: HRD
    (5) NESTED LOOPS
    Est. Rows: 1 Cost: 4
    (3) TABLE ACCESS FULL XXDL.XXDL_PCARD_WF_ROUTING_LISTS [Analyzed]
    (3) Blocks: 19 Est. Rows: 1 of 1,303 Cost: 2
    Tablespace: XXDLD
    (4) UNIQUE INDEX RANGE SCAN HR.PER_PEOPLE_F_PK [Analyzed]
    Est. Rows: 1 Cost: 1
    Thanks in advance,
    Peter

    Thanks for the reply, but I have already checked what you are suggesting and I am pretty sure those are not causing the problem. The hr2.full_name column has an upper index and the (4) line of the explain plan shows that index being used. In addition, that part of the query executes on its own quickly.
    Because the sql is not displayed in an indented format on this page it is a little hard to understand the structure so I am going to restate what is happening.
    My sql is:
    select a_column
    from a_table
    where ((:1 is not null) and a_column in (sub-select statement)
    or
    (:1 is null))
    The :1 bind variable is set to a varchar2 entered on the screen of an application.
    If I execute either part of the sql without the OR condition, performance is good.
    If the :1 bind variable is null with the whole sql statement (so all rows or a_table are returned), performance is still good.
    If the :1 bind variable is a not-null value with the whole sql statement, performance stinks.
    As an example:
    where (('wa' is not null) and a_column in (sub-select statement)) -- fast
    where (('wa' is null)) -- fast
    where (('' is not null) and a_column in (sub-select statement) -- fast
    or
    ('' is null))
    where (('wa' is not null) and a_column in (sub-select statement) -- slow
    or
    ('wa' is null))
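    A common rewrite for this bind-or-null pattern (a sketch built from the statement at the top of this post, not from the thread's replies) is to split the two cases into UNION ALL branches, so each branch gets its own plan and only one of them can return rows for a given bind value:
    select fl.feed_line_id
    from   ap_expense_feed_lines_all fl
    where  :1 is null
    union all
    select fl.feed_line_id
    from   ap_expense_feed_lines_all fl
    where  :1 is not null
    and    fl.feed_line_id in (select distinct r2.object_id
                               from   xxdl_pcard_wf_routing_lists r2,
                                      per_people_f hr2
                               where  upper(hr2.full_name) like upper(:1 || '%')
                               and    hr2.person_id = r2.person_id
                               and    r2.fyi_list is null
                               and    r2.sequence_number <> 0);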

  • Performance - composite index with 'OR' in 'WHERE' clause

    I have a problem with the performance of the following query:
    select /*+ index_asc(omschact oma_index1) */ knr, projnr, actnr from omschact where ((knr = 100 and actnr > 30) or knr > 100)
    and rownum = 1;
    (rownum used only for test purpose)
    index:
    create index oma_index1 on omschact (knr, projnr);
    Execution plan:
    Id Operation
    0 SELECT STATEMENT
    1 COUNT STOPKEY
    2 TABLE ACCESS BY INDEX ROWID
    3 INDEX FULL SCAN
    If I'm correct, the 'OR' in the 'WHERE' clause is responsible for the INDEX FULL SCAN, which makes the query slow.
    A solution would then be to separate the 'WHERE' clause into two separate selects (one with 'knr = 100 and actnr > 30' and one with 'knr > 100') and combine the results with a UNION ALL.
    Since it's necessary to have all rows in ascending order (oma_index1), I still have to use an ORDER BY to make sure the order of the rows is correct. This again results in (too) low performance.
    Another solution that does the trick is to create an index with the 2 fields (knr, projnr) concatenated and to use the same in the 'WHERE' clause:
    create index oma_index2 on omschact (knr || projnr);
    select /*+ index_asc(omschact oma_index2) */ knr, projnr, actnr from omschact where (knr || projnr) > 10030;
    I just can't believe this work-around is the only solution, so I was hoping that someone here knows of a better way to solve this.

    padders,
    I'll give the real data instead of the example. The index I really use consists of 4 fields. In this table the fields are just numbers, but in other tables I need to use char fields in indexes, so that's why I concatenate instead of using formulas (although I would prefer the latter).
    SQL> desc omschact
    Name Null? Type
    KNR NOT NULL NUMBER(8)
    PROJNR NOT NULL NUMBER(8)
    ACTNR NOT NULL NUMBER(8)
    REGELNR NOT NULL NUMBER(3)
    REGEL CHAR(60)
    first methode:
    SQL> create index oma_key_001 on omschact(knr,projnr,actnr,regelnr);
    Index created.
    SQL> select /*+ index_asc(omschact oma_key_001) */ * from omschact where
    2 (knr > 100 or
    3 (knr = 100 and projnr > 30) or
    4 (knr = 100 and projnr = 30 and actnr > 100000) or
    5 (knr = 100 and projnr = 30 and actnr = 100000 and regelnr >= 0));
    Execution Plan
    Plan hash value: 1117430516
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 11M| 822M| 192K (1)| 00:38:26 |
    | 1 | TABLE ACCESS BY INDEX ROWID| OMSCHACT | 11M| 822M| 192K (1)| 00:38:26 |
    |* 2 | INDEX FULL SCAN | OMA_KEY_001 | 11M| | 34030 (1)| 00:06:49 |
    Predicate Information (identified by operation id):
    2 - filter("KNR">100 OR "KNR"=100 AND "PROJNR">30 OR "KNR"=100 AND "PROJNR"=30
    AND "ACTNR">100000 OR "ACTNR"=100000 AND "KNR"=100 AND "PROJNR"=30 AND
    "REGELNR">=0)
    second method (same index):
    SQL> select * from (
    2 select /*+ index_asc(omschact oma_key_001) */ * from omschact where knr > 100
    3 union all
    4 select /*+ index_asc(omschact oma_key_001) */ * from omschact where knr = 100 and projnr > 30
    5 union all
    6 select /*+ index_asc(omschact oma_key_001) */ * from omschact where knr = 100 and projnr = 30 and actnr > 100000
    7 union all
    8 select /*+ index_asc(omschact oma_key_001) */ * from omschact where knr = 100 and projnr = 30 and actnr = 100000 and regelnr > 0)
    9 order by knr, projnr, actnr, regelnr;
    Execution Plan
    Plan hash value: 292918786
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 11M| 1203M| | 477K (1)| 01:35:31 |
    | 1 | SORT ORDER BY | | 11M| 1203M| 2745M| 477K (1)| 01:35:31 |
    | 2 | VIEW | | 11M| 1203M| | 192K (1)| 00:38:29 |
    | 3 | UNION-ALL | | | | | | |
    | 4 | TABLE ACCESS BY INDEX ROWID| OMSCHACT | 11M| 822M| | 192K (1)| 00:38:26 |
    |* 5 | INDEX RANGE SCAN | OMA_KEY_001 | 11M| | | 33966 (1)| 00:06:48 |
    | 6 | TABLE ACCESS BY INDEX ROWID| OMSCHACT | 16705 | 1272K| | 294 (1)| 00:00:04 |
    |* 7 | INDEX RANGE SCAN | OMA_KEY_001 | 16705 | | | 54 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID| OMSCHACT | 47 | 3666 | | 4 (0)| 00:00:01 |
    |* 9 | INDEX RANGE SCAN | OMA_KEY_001 | 47 | | | 3 (0)| 00:00:01 |
    | 10 | TABLE ACCESS BY INDEX ROWID| OMSCHACT | 1 | 78 | | 4 (0)| 00:00:01 |
    |* 11 | INDEX RANGE SCAN | OMA_KEY_001 | 1 | | | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    5 - access("KNR">100)
    7 - access("KNR"=100 AND "PROJNR">30)
    9 - access("KNR"=100 AND "PROJNR"=30 AND "ACTNR">100000)
    11 - access("KNR"=100 AND "PROJNR"=30 AND "ACTNR"=100000 AND "REGELNR">0)
    third method:
    SQL> create index oma_test on omschact(to_char(knr,'00000000')||to_char(projnr,'00000000')||to_char(actnr,'00000000')||to_char(regelnr,'000'));
    Index created.
    SQL> select /*+ index_asc(omschact oma_test) */ * from omschact where
    2 (to_char(knr,'00000000')||to_char(projnr,'00000000')||
    3 to_char(actnr,'00000000')||to_char(regelnr,'000')) >=
    4 (to_char(100,'00000000')||to_char(30,'00000000')||
    5* to_char(100000,'00000000')||to_char(0,'000'))
    Execution Plan
    Plan hash value: 424961364
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 553K| 55M| 1712 (1)| 00:00:21 |
    | 1 | TABLE ACCESS BY INDEX ROWID| OMSCHACT | 553K| 55M| 1712 (1)| 00:00:21 |
    |* 2 | INDEX RANGE SCAN | OMA_TEST | 99543 | | 605 (1)| 00:00:08 |
    Predicate Information (identified by operation id):
    2 - access(TO_CHAR("KNR",'00000000')||TO_CHAR("PROJNR",'00000000')||TO_CHAR("
    ACTNR",'00000000')||TO_CHAR("REGELNR",'000')>=TO_CHAR(100,'00000000')||TO_CHAR(3
    0,'00000000')||TO_CHAR(100000,'00000000')||TO_CHAR(0,'000'))
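    Another option to test (a sketch, not something suggested above) is to ask explicitly for OR-expansion with the USE_CONCAT hint, which lets each disjunct use its own range scan on the composite index without hand-writing the UNION ALL; whether the optimizer honours it, and whether the resulting row order still needs an ORDER BY, has to be verified on the real data:
    select /*+ use_concat index_asc(omschact oma_key_001) */ *
    from   omschact
    where  knr > 100
       or  (knr = 100 and projnr > 30)
       or  (knr = 100 and projnr = 30 and actnr > 100000)
       or  (knr = 100 and projnr = 30 and actnr = 100000 and regelnr >= 0);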

  • Exclude duplicate values on SQL where clause statement

    Hi!
    Are there any possibilities to exclude duplicate values without using SQL aggregate functions in the main select statement?
    Preview SQL statement:
    SELECT * FROM
    (
      select id,hin_id,name,code,valid_date_from,valid_date_to
      from diaries
    ) QRSLT
    WHERE (hin_id = (SELECT NVL(historic_id,id) FROM tutions where id = /*???*/ 59615))
    AND NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy')) <= (SELECT NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy')) FROM tutions where id = /*???*/ 59615)
    AND trunc(valid_date_from) >= (SELECT trunc(valid_date_from) FROM tutions where id = /*???*/ 59615)
    The result
    ID     HIN_ID  NAME    CODE  VALID_DATE_FROM      VALID_DATE_TO
    50512  59564   RE TU   01    07.06.2013 16:32:15  07.06.2013 16:33:28
    50513  59564   TT2     02    07.06.2013 16:33:23  07.06.2013 16:33:28
    50515  59564   TT2     02    07.06.2013 16:33:28  07.06.2013 16:34:42
    50516  59564   ROD     03    07.06.2013 16:34:37  07.06.2013 16:34:42
    VALID_DATE_TO and VALID_DATE_FROM in tutions:
    07.06.2013 16:34:42
    15.07.2013 10:33:23
    In this case I get a duplicate of entry TT2 (id 50513). In the main select statement I can't use aggregate functions. Is it even possible to exclude this value from the result by modifying only the QRSLT WHERE clause? (The TRUNC needs to stay here.)
    Thanks for any tip!
    ID.

    Hi, OK, this is working in this case:
    SELECT * FROM
    (
      select id,hin_id,name,code,valid_date_from,valid_date_to
      from diaries ahs
    ) QRSLT
    WHERE (hin_id = (SELECT NVL(historic_id,id) FROM aip_healthcare_tutions where id = /*???*/ 59615))
    AND NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy')) <= (SELECT NVL(valid_date_to,to_date('22.12.2999','dd.mm.yyyy')) FROM tutions where id = /*???*/ 59615)
    AND trunc(valid_date_from) >= (SELECT trunc(valid_date_from) FROM tutions where id = /*???*/ 59615)
    AND NOT EXISTS
        (SELECT null FROM diaries ahs WHERE ahs.valid_date_from < QRSLT.valid_date_from
            AND QRSLT.hin_id = ahs.hin_id
            AND QRSLT.code = ahs.code);
    Result
    ID     HIN_ID  NAME    CODE  VALID_DATE_FROM      VALID_DATE_TO
    50512  59564   RE TU   01    07.06.2013 16:32:15  07.06.2013 16:33:28
    50513  59564   TT2     02    07.06.2013 16:33:23  07.06.2013 16:33:28
    50516  59564   ROD     03    07.06.2013 16:34:37  07.06.2013 16:34:42
    But if the data in the tutions row is the following (valid_date_to is null), then NO ROWS are returned, which is logical, because in the full result list the valid_date_from column is then logically incorrect:
    valid_date_from      valid_date_to
    15.07.2013 10:33:23  NULL
    ID     HIN_ID  NAME    CODE  VALID_DATE_FROM      VALID_DATE_TO
    50510  59564   RE TU   01    07.06.2013 16:33:28
    50511  59564   TT2     02    07.06.2013 16:34:41
    50514  59564   ROD     03    07.06.2013 16:34:41
    50520  59564   Params  04    03.07.2013 21:01:30
    50512  59564   RE TU   01    07.06.2013 16:32:15  07.06.2013 16:33:28
    50513  59564   TT2     02    07.06.2013 16:33:23  07.06.2013 16:33:28
    50515  59564   TT2     02    07.06.2013 16:33:28  07.06.2013 16:34:42
    50516  59564   ROD     03    07.06.2013 16:34:37  07.06.2013 16:34:42
    Is it possible, by modifying the where statement, to also return the diary records whose valid_date_to is null when valid_date_to in tutions is null, while keeping the previous logic? The desired result would be:
    ID     HIN_ID  NAME    CODE  VALID_DATE_FROM      VALID_DATE_TO
    50510  59564   RE TU   01    07.06.2013 16:33:28  null
    50511  59564   TT2     02    07.06.2013 16:34:41  null
    50514  59564   ROD     03    07.06.2013 16:34:41  null
    50520  59564   Params  04    03.07.2013 21:01:30  null
    Thanks!
    ID.
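    As a side note (a sketch, not part of the replies above): analytic functions are not aggregates, so if only GROUP BY style aggregation is off limits, ROW_NUMBER() can remove the duplicate code per hin_id while the original tutions conditions stay in the outer WHERE clause:
    SELECT id, hin_id, name, code, valid_date_from, valid_date_to
    FROM  (
            select d.*,
                   -- number the rows per (hin_id, code), earliest valid_date_from first
                   row_number() over (partition by hin_id, code
                                      order by valid_date_from) as rn
            from   diaries d
          ) QRSLT
    WHERE rn = 1;
    -- the original hin_id / valid_date_from / valid_date_to conditions against
    -- tutions would be appended to this WHERE clause exactly as before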
