ROWNUM optimization?

I'm not sure if I'm completely missing something in the process below of using ROWNUM to perform a top-N query. I have created indexes that I would have expected to keep a sort from being performed, but I can't seem to get the query to avoid the "SORT ORDER BY STOPKEY" operation.
The database is 11gR2 (11.2.0.2) running on Oracle Linux 5.7.
create table projrecord (
recid          number,
loaddate     date,
comments     varchar2(20));
create table customer (
recid          number,
loaddate     date,
custval          varchar2(20));
insert into projrecord values (1, to_date('03-MAY-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'TEST');
insert into projrecord values (2, to_date('10-JUN-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'TEST');
insert into projrecord values (3, to_date('20-AUG-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'TEST');
insert into projrecord values (4, to_date('25-SEP-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'TEST');
insert into projrecord values (5, to_date('12-OCT-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'TEST');
insert into customer values (1, to_date('03-MAY-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'customer1');
insert into customer values (1, to_date('03-MAY-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'customer1');
insert into customer values (1, to_date('03-MAY-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'customer2');
insert into customer values (3, to_date('20-AUG-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'customer3');
insert into customer values (4, to_date('25-SEP-12 02:10:00', 'DD-MON-YY HH24:MI:SS'), 'customer4');
create unique index idx_projrecord_recid on projrecord (recid);
alter table projrecord add constraint projrecord_recid_pk primary key (recid)
using index idx_projrecord_recid;
alter table customer add constraint customer_recid_fk foreign key (recid)
references projrecord;
create unique index idx_projrecord_reciddate on projrecord (recid, loaddate);
create index idx_cust_lcustvalreciddate on customer (lower(custval), recid, loaddate);
exec dbms_stats.gather_table_stats(user, 'projrecord');
exec dbms_stats.gather_table_stats(user, 'customer');
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL> sho parameter optimizer
NAME                                 TYPE        VALUE
optimizer_capture_sql_plan_baselines boolean     FALSE
optimizer_dynamic_sampling           integer     2
optimizer_features_enable            string      11.2.0.2
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     100
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE
optimizer_use_invisible_indexes      boolean     FALSE
optimizer_use_pending_statistics     boolean     FALSE
optimizer_use_sql_plan_baselines     boolean     TRUE
SQL> select /*+ gather_plan_statistics */ *
  2  from (
  3     select pr.recid, pr.loaddate, pr.comments
  4     from projrecord pr, customer c
  5     where pr.recid = c.recid
  6     and pr.loaddate = c.loaddate
  7     and lower(c.custval) = 'customer1'
  8     order by pr.loaddate desc)
  9  where rownum < 2;
     RECID LOADDATE  COMMENTS
         1 03-MAY-12 TEST
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
PLAN_TABLE_OUTPUT
SQL_ID  4rnatds0f7wr0, child number 0
select /*+ gather_plan_statistics */ * from (  select pr.recid,
pr.loaddate, pr.comments  from projrecord pr, customer c  where
pr.recid = c.recid  and pr.loaddate = c.loaddate  and lower(c.custval)
= 'customer1'  order by pr.loaddate desc) where rownum < 2
Plan hash value: 706850593
| Id  | Operation                       | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers|  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT                |                            |      1 |        |      1 |00:00:00.01 |       5|       |       |          |
|*  1 |  COUNT STOPKEY                  |                            |      1 |        |      1 |00:00:00.01 |       5|       |       |          |
|   2 |   VIEW                          |                            |      1 |      1 |      1 |00:00:00.01 |       5|       |       |          |
|*  3 |    SORT ORDER BY STOPKEY        |                            |      1 |      1 |      1 |00:00:00.01 |       5|  2048 |  2048 | 2048  (0)|
|   4 |     NESTED LOOPS                |                            |      1 |        |      2 |00:00:00.01 |       5|       |       |          |
|   5 |      NESTED LOOPS               |                            |      1 |      1 |      2 |00:00:00.01 |       3|       |       |          |
|*  6 |       INDEX RANGE SCAN          | IDX_CUST_LCUSTVALRECIDDATE |      1 |      1 |      2 |00:00:00.01 |       1|       |       |          |
|*  7 |       INDEX UNIQUE SCAN         | IDX_PROJRECORD_RECIDDATE   |      2 |      1 |      2 |00:00:00.01 |       2|       |       |          |
|   8 |      TABLE ACCESS BY INDEX ROWID| PROJRECORD                 |      2 |      1 |      2 |00:00:00.01 |       2|       |       |          |
Predicate Information (identified by operation id):
   1 - filter(ROWNUM<2)
   3 - filter(ROWNUM<2)
   6 - access("C"."SYS_NC00004$"='customer1')
   7 - access("PR"."RECID"="C"."RECID" AND "PR"."LOADDATE"="C"."LOADDATE")
31 rows selected.

I saw a note on Metalink (833286.1) that addresses slow ROWNUM queries; however, the parameter it suggested ("_optimizer_rownum_pred_based_fkr" = FALSE) did not change the query plan.
I have also tried to use ROW_NUMBER, but I still get a SORT with "WINDOW SORT PUSHED RANK".
Can anyone explain why I can't seem to avoid the SORT?
Thanks!

user109389 wrote:
Can anyone explain why I can't seem to avoid the SORT?
And how do you envision this without a sort? Look at the WHERE clause. The condition lower(c.custval) = 'customer1' allows the optimizer to use the FBI idx_cust_lcustvalreciddate, so it does an INDEX RANGE SCAN. That scan can return MULTIPLE rows. For each of them the FBI provides both recid and loaddate, so there is enough information to use an INDEX UNIQUE SCAN on idx_projrecord_reciddate. But we are still dealing with possibly MULTIPLE rows, so how could you avoid a SORT? And the SORT is optimized: SORT ORDER BY STOPKEY stops as soon as the first row is found. It does not continue to sort the rest.
SY.
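That said, for this specific query shape it is sometimes possible to feed the STOPKEY from an index that already delivers rows in the required order, so no sort is needed. A sketch under that assumption (the index name and the INDEX_DESC hint are mine, the optimizer is free to ignore the hint, and ordering by the indexed column c.loaddate rather than pr.loaddate is what lets the index order be recognized at all; the two are equal by the join predicate):

```sql
-- Hypothetical index: the filter column first, then the ORDER BY column,
-- so a descending range scan returns 'customer1' rows newest-first.
create index idx_cust_lcustval_date on customer (lower(custval), loaddate, recid);

select *
from (select /*+ index_desc(c idx_cust_lcustval_date) */
             pr.recid, pr.loaddate, pr.comments
      from customer c, projrecord pr
      where pr.recid         = c.recid
      and   pr.loaddate      = c.loaddate
      and   lower(c.custval) = 'customer1'
      order by c.loaddate desc)   -- order by the indexed column, not pr.loaddate
where rownum < 2;
```

Even then, whether the SORT ORDER BY STOPKEY disappears depends on the optimizer recognizing that the index order matches the ORDER BY; with pr.loaddate in the ORDER BY, as in the original query, it generally cannot.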

Similar Messages

  • Optimizer choosing different plans with a ROWNUM filter [UPDATED: 11.2.0.1]

    I'm having a couple of issues with a query, and I can't figure out the best way to reach a solution.
    Platform Information
    Windows Server 2003 R2
    Oracle 10.2.0.4
    Optimizer Settings
    SQL > show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     90
    optimizer_index_cost_adj             integer     30
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE

    The query below is a simple "Top N" query, where the top result is returned. Here it is, with bind variables in the same locations as in the application code:
    SELECT  PRODUCT_DESC
    FROM   (SELECT  PRODUCT_DESC,
                    COUNT(*) AS CNT
            FROM    USER_VISITS
            JOIN    PRODUCT ON PRODUCT.PRODUCT_OID = USER_VISITS.PRODUCT_OID
            WHERE   PRODUCT.PRODUCT_DESC != 'Home'
            AND     VISIT_DATE
                    BETWEEN ADD_MONTHS(TRUNC(TO_DATE(:vCurrentYear, 'YYYY'), 'YEAR'),
                                       3 * (:vCurrentQuarter - 1))
                    AND     ADD_MONTHS(TRUNC(TO_DATE(:vCurrentYear, 'YYYY'), 'YEAR'),
                                       3 * :vCurrentQuarter) - INTERVAL '1' DAY
            GROUP BY PRODUCT_DESC
            ORDER BY CNT DESC)
    WHERE   ROWNUM <= 1;
    Explain Plan
    The explain plan I receive when running the query above.
    | Id  | Operation                         | Name                          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    |*  1 |  COUNT STOPKEY                    |                               |      1 |        |      1 |00:00:34.92 |   66343 |       |       |          |
    |   2 |   VIEW                            |                               |      1 |      1 |      1 |00:00:34.92 |   66343 |       |       |          |
    |*  3 |    FILTER                         |                               |      1 |        |      1 |00:00:34.92 |   66343 |       |       |          |
    |   4 |     SORT ORDER BY                 |                               |      1 |      1 |      1 |00:00:34.92 |   66343 |  2048 |  2048 | 2048  (0)|
    |   5 |      SORT GROUP BY NOSORT         |                               |      1 |      1 |     27 |00:00:34.92 |   66343 |       |       |          |
    |   6 |       NESTED LOOPS                |                               |      1 |      2 |  12711 |00:00:34.90 |   66343 |       |       |          |
    |   7 |        TABLE ACCESS BY INDEX ROWID| PRODUCT                       |      1 |     74 |     77 |00:00:00.01 |      44 |       |       |          |
    |*  8 |         INDEX FULL SCAN           | PRODUCT_PRODDESCHAND_UNQ      |      1 |      1 |     77 |00:00:00.01 |       1 |       |       |          |
    |*  9 |        INDEX FULL SCAN            | USER_VISITS#PK                |     77 |      2 |  12711 |00:00:34.88 |   66299 |       |       |          |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=1)
       3 - filter(ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1))<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURR
                  ENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
       8 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
       9 - access("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
                  "USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID" AND "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY')
                  ,'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
           filter(("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
                  "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2)
                  TO SECOND(0) AND "USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID"))
    Row Source Generation
    TKPROF Row Source Generation
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          0          0           0
    Fetch        2     35.10      35.13          0      66343          0           1
    total        4     35.10      35.14          0      66343          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62 
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=66343 pr=0 pw=0 time=35132008 us)
          1   VIEW  (cr=66343 pr=0 pw=0 time=35131996 us)
          1    FILTER  (cr=66343 pr=0 pw=0 time=35131991 us)
          1     SORT ORDER BY (cr=66343 pr=0 pw=0 time=35131936 us)
         27      SORT GROUP BY NOSORT (cr=66343 pr=0 pw=0 time=14476309 us)
      12711       NESTED LOOPS  (cr=66343 pr=0 pw=0 time=22921810 us)
         77        TABLE ACCESS BY INDEX ROWID PRODUCT (cr=44 pr=0 pw=0 time=3674 us)
         77         INDEX FULL SCAN PRODUCT_PRODDESCHAND_UNQ (cr=1 pr=0 pw=0 time=827 us)(object id 52355)
      12711        INDEX FULL SCAN USER_VISITS#PK (cr=66299 pr=0 pw=0 time=44083746 us)(object id 52949)

    However, when I run the query with an ALL_ROWS hint I receive the explain plan below (the reasoning for this can be found in Jonathan Lewis's response: http://www.freelists.org/post/oracle-l/ORDER-BY-and-first-rows-10-madness,4):
    | Id  | Operation                  | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT           |                |     1 |    39 |   223  (25)| 00:00:03 |
    |*  1 |  COUNT STOPKEY             |                |       |       |            |          |
    |   2 |   VIEW                     |                |     1 |    39 |   223  (25)| 00:00:03 |
    |*  3 |    FILTER                  |                |       |       |            |          |
    |   4 |     SORT ORDER BY          |                |     1 |    49 |   223  (25)| 00:00:03 |
    |   5 |      HASH GROUP BY         |                |     1 |    49 |   223  (25)| 00:00:03 |
    |*  6 |       HASH JOIN            |                |   490 | 24010 |   222  (24)| 00:00:03 |
    |*  7 |        TABLE ACCESS FULL   | PRODUCT   |    77 |  2849 |     2   (0)| 00:00:01 |
    |*  8 |        INDEX FAST FULL SCAN| USER_VISITS#PK |   490 |  5880 |   219  (24)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=1)
       3 - filter(ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*(TO_NUMBER(:
                  VCURRENTQUARTER)-1))<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*TO_N
                  UMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
       6 - access("USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID")
       7 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
       8 - filter("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYY
                  Y'),'fmyear'),3*(TO_NUMBER(:VCURRENTQUARTER)-1)) AND
                  "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),
               3*TO_NUMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))

    And the TKPROF Row Source Generation:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        3      0.51       0.51          0        907          0          27
    total        5      0.51       0.51          0        907          0          27
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62 
    Rows     Row Source Operation
         27  FILTER  (cr=907 pr=0 pw=0 time=513472 us)
         27   SORT ORDER BY (cr=907 pr=0 pw=0 time=513414 us)
         27    HASH GROUP BY (cr=907 pr=0 pw=0 time=512919 us)
      12711     HASH JOIN  (cr=907 pr=0 pw=0 time=641130 us)
         77      TABLE ACCESS FULL PRODUCT (cr=5 pr=0 pw=0 time=249 us)
       22844      INDEX FAST FULL SCAN USER_VISITS#PK (cr=902 pr=0 pw=0 time=300356 us)(object id 52949)

    The query with the ALL_ROWS hint returns data instantly, while the other one takes about 70 times as long.
    Interestingly enough BOTH queries generate plans with estimates that are WAY off. The first plan is estimating 2 rows, while the second plan is estimating 490 rows. However the real number of rows is correctly reported in the Row Source Generation as 12711 (after the join operation).
    TABLE_NAME                       NUM_ROWS     BLOCKS
    USER_VISITS                        196044       1049
    INDEX_NAME                         BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR LAST_ANALYZED
    USER_VISITS#PK                          2         860        196002          57761 07/24/2009 13:17:59
    COLUMN_NAME                    NUM_DISTINCT LOW_VALUE            HIGH_VALUE                                 DENSITY     NUM_NULLS HISTOGRAM
    VISIT_DATE                           195900 786809010E0910       786D0609111328                      .0000051046452272          0 NONE

    I don't know how the first plan is estimating 2 rows, but I can compute the second's cardinality estimate by assuming a 5% selectivity for each TO_DATE() predicate:
    SQL > SELECT ROUND(0.05*0.05*196044) FROM DUAL;
    ROUND(0.05*0.05*196044)
                        490

    However, removing the bind variables (and clearing the shared pool) does not change the cardinality estimates at all.
    I would like to avoid hinting this plan if possible and that is why I'm looking for advice. I also have a followup question.
    Edited by: Centinul on Sep 20, 2009 4:10 PM
    See my last post for 11.2.0.1 update.

    Centinul wrote:
    You could potentially perform testing with either a CARDINALITY or OPT_ESTIMATE hint to see if the execution plan changes dramatically to improve performance. The question then becomes whether this would be sufficient to overrule the first-rows optimizer so that it does not use an index access which avoids a sort.
    I tried doing that this morning by increasing the cardinality from the USER_VISITS table to a value such that the estimate was about the real amount of data. However, the plan did not change.
    Could you use the ROW_NUMBER analytic function instead of ROWNUM?
    Interestingly enough, when I tried this it generated the same plan as was used with the ALL_ROWS hint, so I may implement this query for now.
    I do have two more followup questions:
    1. Even though a better plan is picked, the optimizer estimates are still off by a large margin because of the bind variables and the 5% * 5% * NUM_ROWS calculation. How do I get the estimates in line with the actual values? Should I really fudge the statistics?
    2. Should I raise a bug report with Oracle over the behavior of the original query?
    It is great that the ROW_NUMBER analytic function worked. You may want to perform some testing with this before implementing it in production to see whether Oracle performs significantly more logical or physical I/Os with the ROW_NUMBER analytic function compared to the ROWNUM solution with the ALL_ROWS hint.
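For reference, the ROW_NUMBER rewrite discussed here would look something like this (a sketch using the table and column names from the original post; the date-range predicates are the same as in the original query and are abbreviated):

```sql
SELECT PRODUCT_DESC
FROM  (SELECT PRODUCT_DESC,
              ROW_NUMBER() OVER (ORDER BY COUNT(*) DESC) AS RN
       FROM   USER_VISITS
       JOIN   PRODUCT ON PRODUCT.PRODUCT_OID = USER_VISITS.PRODUCT_OID
       WHERE  PRODUCT.PRODUCT_DESC != 'Home'
       AND    VISIT_DATE BETWEEN ... AND ...   -- same ADD_MONTHS/TRUNC bounds as above
       GROUP BY PRODUCT_DESC)
WHERE RN = 1;
```

As noted in the thread, on this system the rewrite happened to produce the same hash-join plan as the ALL_ROWS-hinted ROWNUM query, but that is a property of this optimizer and data, not a guarantee.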
    As Timur suggests, seeing a 10053 trace during a hard parse of both queries (with and without the ALL_ROWS hint) would help determine what is happening. It could be that a histogram exists which is feeding bad information to the optimizer, causing distorted cardinality in the plan. If bind peeking is used, the 5% * 5% rule might not apply, especially if a histogram is involved. Also, the WHERE clause includes "PRODUCT.PRODUCT_DESC != 'Home'" which might affect the cardinality in the plan.
    Your question may have prompted the starting of a thread in the SQL forum yesterday on the topic of ROWNUM, but it appears that thread was removed from the forum within the last couple hours.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • ROWNUM is indexed in the Fact table - How to optimize performance with this?

    Hi,
    I have a scenario where there is an index on the Rownum.
    The main Fact table is partitioned based on the job number (Daily and monthly). As there can be multiple entries for a single jobID, the primary key is made up of the Job ID and the Rownum
    This fact table in turn is joined with another fact table based on this job number and rownum. This second fact table is also partitioned on job ID.
    I have few reference tables that are joined with the first fact table with btree index.
    Though in a normal DW scenario we should use bitmap, here we can't do that as lot of other applications are accessing data (DML queries) where bitmap will be slow. So I am using STAR_TRANSFORMATION hint to use the normal index as bitmap index.
    Up to this point it is fine. The problem is that when I simply do a count for a specific partition, joining a reference table and the fact table, it uses all the required indexes as bitmaps with a very low cost, but it also uses the ROWNUM index, which has a very, very high cost.
    I am relatively new to Oracle tuning and cannot work out exactly what it is doing. Could you please suggest how I can get rid of this ROWNUM index usage to make the query faster? The index cannot be dropped. Is there a way to instruct the optimizer, via a hint, not to use this primary-key index?
    Or, even if it is used, is there a way to make the performance faster?
    I will highly appreciate any help in this regard.
    Regards
    ...

    I am just sending the portion with the partition and primary-index info, as the entire script is too big.
    CREATE TABLE FACT_TABLE (
      JOBID     VARCHAR2(10 BYTE) DEFAULT '00000000' NOT NULL,
      RECID     VARCHAR2(18 BYTE) DEFAULT '000000000000000000' NOT NULL,
      REP_DATE  VARCHAR2(8 BYTE)  DEFAULT '00000000' NOT NULL,
      LOCATION  VARCHAR2(4 BYTE)  DEFAULT ' ' NOT NULL,
      FUNCTION  VARCHAR2(6 BYTE)  DEFAULT ' ' NOT NULL,
      AMT.....................................................................................
    )
    TABLESPACE PSAPPOD
    PCTUSED 0
    PCTFREE 10
    INITRANS 11
    MAXTRANS 255
    STORAGE (
      INITIAL 32248K
    )
    LOGGING
    PARTITION BY RANGE (JOBID) (
      PARTITION FACT_TABLE_1110500 VALUES LESS THAN ('01110600')
        LOGGING
        NOCOMPRESS
        TABLESPACE PSAPFACTTABLED
        PCTFREE 10
        INITRANS 11
        MAXTRANS 255
        STORAGE (
          INITIAL 32248K
          MINEXTENTS 1
          MAXEXTENTS 2147483645
          BUFFER_POOL DEFAULT
        ),
      PARTITION FACT_TABLE_1191800 VALUES LESS THAN ('0119190000')
        LOGGING
        NOCOMPRESS
        TABLESPACE PSAPFACTTABLED
        PCTFREE 10
        INITRANS 11
        MAXTRANS 255
    );
    CREATE UNIQUE INDEX "FACT_TABLE~0" ON FACT_TABLE
    (JOBID, RECID)
    TABLESPACE PSAPFACT_TABLEI
    INITRANS 2
    MAXTRANS 255
    LOCAL (
      PARTITION FACT_TABLE_11105
        LOGGING
        NOCOMPRESS
        TABLESPACE PSAPFACT_TABLEI
        PCTFREE 10
        INITRANS 2
        MAXTRANS 255
        STORAGE (
          INITIAL 64K
          MINEXTENTS 1
          MAXEXTENTS 2147483645
          BUFFER_POOL DEFAULT
        )
      ......................................................
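To the question of keeping the optimizer off that primary-key index: the usual tool is the NO_INDEX hint. A sketch only; the second table and its join column are hypothetical, the hint may need the index name without quotes depending on version, and whether the resulting plan is actually faster needs testing:

```sql
SELECT /*+ STAR_TRANSFORMATION NO_INDEX(f "FACT_TABLE~0") */
       COUNT(*)
FROM   FACT_TABLE f
JOIN   REF_TABLE r ON r.ref_key = f.LOCATION   -- REF_TABLE and ref_key are hypothetical
WHERE  f.JOBID = '01110500';
```

Note that the optimizer treats hints as directives only when they are valid and applicable; if it still chooses the index, a 10053 trace of the parse would show why.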

  • Issue regarding rownum in sql query

    Hi All,
    When I'm running the query below
    select 'OP',
           'ORG_CODE_PROVIDER',
           rownum as ranking,
           x.ORG_CODE_PROVIDER,
           z.description,
           x.value_count,
           round(x.value_count / 200432, 4) * 100 as value_pct,
           NULL as BATCH_KEY,
           '9BED55A4328EFD71E040D20A143245E3' as BATCH_SET_KEY,
           'OVERALL',
           'OVERALL'
    from  (select ORG_CODE_PROVIDER, count(*) as value_count
           from STAGING_TST.OP t
           group by ORG_CODE_PROVIDER
           order by count(*) desc, 1 asc) x,
          (select code, description from ref_hd.MV_ORG_CODE_PROVIDER) z
    where z.code(+) = x.ORG_CODE_PROVIDER
    and   rownum <= 10
    it is showing me results based on the rownum of block x.
    But when I try to insert these records in a table like
    insert into QA_TST.OP_STAGE_COL_VAL_FREQ
    select 'OP',
           'ORG_CODE_PROVIDER',
           rownum as ranking,
           x.ORG_CODE_PROVIDER,
           z.description,
           x.value_count,
           round(x.value_count / 200432, 4) * 100 as value_pct,
           NULL as BATCH_KEY,
           '9BED55A4328EFD71E040D20A143245E3' as BATCH_SET_KEY,
           'OVERALL',
           'OVERALL'
    from  (select ORG_CODE_PROVIDER, count(*) as value_count
           from STAGING_TST.OP t
           group by ORG_CODE_PROVIDER
           order by count(*) desc, 1 asc) x,
          (select code, description from ref_hd.MV_ORG_CODE_PROVIDER) z
    where z.code(+) = x.ORG_CODE_PROVIDER
    and   rownum <= 10
    On querying the table I get a totally different result, because ROWNUM is assigned in a different row order during the insert.
    I cannot understand why this is happening. Why is Oracle not inserting the records that the plain SELECT shows?
    Moreover, how can I fix this issue and get the desired result?
    Thanks
    Tarun

    Hi,
    Whenever you post any code, indent it so that how it looks on the screen reflects what it is doing. In particular, make it easy to see what the sub-queries are. Whenever you post formatted text (such as query results, as well as code) on this site, type these 6 characters: {code} (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.
    I originally posted an inaccurate answer because I couldn't understand your unformatted code.
    How ROWNUM is assigned in a join depends on how the optimizer chooses to perform the join. If you want consistent results, then do the join first (in a sub-query), use an ORDER BY clause in that sub-query, and use ROWNUM only in the parent query, which should not include a join.
    The analytic ROW_NUMBER function is a lot more powerful and versatile than ROWNUM.  You might look into using it (though the extra power may not be needed in this particular problem).
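Applied to the original statement, that restructuring would look roughly like this (a sketch using the names from the original post; the join and ORDER BY move into a sub-query, and ROWNUM is assigned only in the join-free parent):

```sql
insert into QA_TST.OP_STAGE_COL_VAL_FREQ
select 'OP', 'ORG_CODE_PROVIDER', rownum as ranking,
       v.ORG_CODE_PROVIDER, v.description, v.value_count,
       round(v.value_count / 200432, 4) * 100 as value_pct,
       NULL, '9BED55A4328EFD71E040D20A143245E3', 'OVERALL', 'OVERALL'
from  (select x.ORG_CODE_PROVIDER, z.description, x.value_count
       from  (select ORG_CODE_PROVIDER, count(*) as value_count
              from STAGING_TST.OP
              group by ORG_CODE_PROVIDER) x,
             ref_hd.MV_ORG_CODE_PROVIDER z
       where z.code(+) = x.ORG_CODE_PROVIDER
       order by x.value_count desc, x.ORG_CODE_PROVIDER asc) v
where rownum <= 10;
```

Because the ordering is fixed inside the view v before ROWNUM is evaluated, the SELECT and the INSERT...SELECT now produce the same ten rows regardless of the join method chosen.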
    Edited by: Frank Kulash on Feb 10, 2011 11:29 AM

  • How to optimize the performance of a Crystal Report?

    Hi,
    -I have to design a Crystal Report with the best possible optimization. Optimization is the main concern since the report will run against a 1-2 million row data set. Though I am using a parameter to fetch only the required data, the required data can still reach 1 million records.
    -Based on the input passed by the user I have to group the data in the report, and the Detail section I print is different for each selected parameter. For example, if the user selects Store the detail section is different, and if the user selects Host the detail section is different again.
    -The report can be grouped by a Time field as well. To fulfil this requirement I had to create a sub-report, since the other parameters are of string type and can be handled in one formula to get parameter-based grouping, but if I try to return the Time field from the same formula I get the error "Return type should be of String type". This forces me to create a sub-report for time-based grouping. If the user selects the Time field to group on, all the information in the main report is suppressed and only the sub-report is printed.
    If user select store, Host and User in parameter to be grouped on, sub report gets suppressed.
    Now with the above mentioned points I tried to optimize the report in following way.
    -Printing 1 million records in the report does not make sense, hence we wanted to show a summary of all the records in the chart section but print just 5000 records in the detailed section. Suppressing the detailed section after 5000 records does not help much, since suppressing only saves printing time and does not limit the number of records fetched from the DB. I also have a sub-report, so the data is fetched twice from the DB, which makes the performance of the report even worse.
    To solve this problem I used Command objects and put the charts in the sub-report and the detail in the main report.
    In the main report's Command object I limited the number of records fetched from the DB to 5000 using rownum < 5000, but in the sub-report's Command object I did not set any limit in the query; instead I do all my aggregation in SQL, which means the summary operation happens in the DB and only summarized data comes back.
    -To solve the section problem I am using the Template object (a new feature added in CR 2008), in which I return the field based on the "Group By" parameter selected by the user.
    -For the Time field I have created two sub-reports, one for the chart and one for the details, in the same way described in the first point.
    After implementing these points my Crystal Report's performance improved drastically: the report that took 24 minutes to come back now takes only 2 minutes.
    However, I want the report to come back within one minute. It does if I remove the sub-reports for time-based grouping, but I cannot do that.
    My questions here are,
    -Can I stop a sub-report from fetching data from the DB if it is suppressed?
    -I believe using a conditional Template object is a better option than having multiple detailed sections to print the data for a selected group; however, any suggestion to improve performance here would be appreciated.
    -Since Crystal Reports does not provide any option to limit the number of records fetched from the DB, I am forced to use a Command object with rownum in the WHERE condition.
    Please let me know about other option(s) to get this done, if there are any.
    I am using Crystal Reports 2008, and we have developed our application to use the JRC to export the report to PDF.
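One caveat on the rownum < 5000 limit in the Command object: applied directly to a query, ROWNUM caps rows before any ORDER BY in the same query block is evaluated, so an arbitrary 5000 rows come back. If a specific 5000 (say, the most recent) is wanted, the ordering must go in a sub-query (the table and column names here are hypothetical, just to show the shape):

```sql
SELECT *
FROM  (SELECT t.*
       FROM   report_txns t          -- hypothetical source table
       ORDER BY t.txn_date DESC)     -- order first in the inner query...
WHERE ROWNUM <= 5000;                -- ...then cap in the outer query
```

This keeps the row limit in the database, which is the point of using a Command object, while still making the 5000 rows deterministic.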
    Regards,
    Amrita
    Edited by: Amrita Singh on May 12, 2009 11:36 AM

    1) I have to design a crystal report with best possible optimization. Optimization is main concern since report will run against 1-2 million data set. Though I am using parameter to fetch only the required data, required data can go till 1 million records.
    2) Based on the input passed by the user I have to group the data in the report, and for each selected parameter the Detail section I print is different. For example, if the user selects Store the detail section is different, and if the user selects Host the detail section is different again.
    3) The report can be grouped by a Time field also. To fulfill this requirement I have to create a subreport, since the other parameters are of string type and can be handled in one formula to get parameter-based grouping in the report. However, if I try to return the Time field from the same formula I get the error "Return type should be of String type". This forces me to create a subreport for Time-based grouping. If the user selects the Time field to group on, all the information in the main report gets suppressed and only the subreport gets printed.
    If the user selects Store, Host and User as parameters to group on, the subreport gets suppressed.
    Now with the above mentioned points I tried to optimize the report in following way.
    1) Printing 1 million records in the report does not make sense; hence we wanted to show a summary of all the records in the chart section but print just 5000 records in the detail section. Suppressing the detail section after 5000 records does not help much, since suppressing only saves printing time and does not limit the number of records fetched from the DB. I also have a subreport, so the data is fetched twice from the DB, which makes the performance of the report even worse.
    To solve this problem I used command object and put the charts in the subreport and detail in main report.
    In the main report's Command Object I limited the number of records fetched from the DB to 5000 using rownum<5000, but in the subreport's Command Object I did not set any limit; instead I do all the aggregation in SQL, i.e. the summary operations run in the DB and only summarized data comes back.
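    The split described above might look roughly like this. The table and column names below are invented placeholders, not from the actual report (note that rownum < 5000 strictly returns 4999 rows, so <= is used here):

    ```sql
    -- Main report Command Object: detail rows, capped in the database
    -- (sales_detail and its columns are hypothetical names).
    SELECT store, host, username, tx_time, amount
    FROM   sales_detail
    WHERE  tx_date >= {?FromDate}
    AND    ROWNUM <= 5000;

    -- Subreport Command Object: aggregate in the database, so only
    -- summarized rows are fetched for the charts.
    SELECT store, COUNT(*) AS tx_count, SUM(amount) AS total_amount
    FROM   sales_detail
    WHERE  tx_date >= {?FromDate}
    GROUP BY store;
    ```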
    2) To solve the section problem I am using a Template Object (a new feature in CR 2008), in which I return the field based on the "Group By" parameter selected by the user.
    Edited by: Amrita Singh on May 12, 2009 12:26 PM

  • How to optimize massive insert on a table with spatial index ?

    Hello,
    I need to implement a load process saving up to 20,000 points per minute in Oracle 10g R2.
    These points represent car locations tracked by GPS, and I need to store at least all positions from the past 12 hours.
    My problem is that the spatial index is very costly during insert (For the moment I do only insertion).
    My several attempts at the insertion:
    - Java and PreparedStatement.executeBatch
    - Java generating a SQL*Loader file
    - Java and insertion into a view with an "instead of" trigger
    all give me the same (not so good) results.
    For the moment, I work with DROP INDEX, INSERT, CREATE INDEX phases.
    But is there a way to only DISABLE the index and then REBUILD it just for the inserted rows?
    I used the APPEND option for insertion :
    INSERT /*+ APPEND */ INTO MY_TABLE (ID, LOCATION) VALUES (?, MDSYS.SDO_GEOMETRY(2001,NULL,MDSYS.SDO_POINT_TYPE(?, ?, NULL), NULL, NULL))
    My spatial index is created with the following options :
    'sdo_indx_dims=2,layer_gtype=point'
    Is there a way to optimize this heavy load?
    What about the PARALLEL option, and how does it work? (It's not so clear to me from the documentation... I am not a DBA.)
    Thanks in advance

    It is possible to insert + commit 20000 points in 16 seconds.
    select * from v$version;
    BANNER                                                                         
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod               
    PL/SQL Release 10.2.0.1.0 - Production                                         
    CORE     10.2.0.1.0     Production                                                     
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production                        
    NLSRTL Version 10.2.0.1.0 - Production                                         
    drop table testpoints;
    create table testpoints
    ( point mdsys.sdo_geometry);
    delete user_sdo_geom_metadata
    where table_name = 'TESTPOINTS'
    and   column_name = 'POINT';
    insert into user_sdo_geom_metadata values
    ('TESTPOINTS'
    ,'POINT'
    ,sdo_dim_array(sdo_dim_element('X',0,1000,0.01),sdo_dim_element('Y',0,1000,0.01))
    ,null);
    create index testpoints_i on testpoints (point)
    indextype is mdsys.spatial_index parameters ('sdo_indx_dims=2,layer_gtype=point');
    insert /*+ append */ into testpoints
    select (sdo_geometry(2001,null,sdo_point_type(1+ rownum / 20, 1 + rownum / 50, null),null,null))
    from all_objects where rownum < 20001;
    Duration: 00:00:10.68 seconds
    commit;
    Duration: 00:00:04.96 seconds
    select count(*) from testpoints;
      COUNT(*)                                                                     
         20000
    The insert of 20,000 rows takes 11 seconds, the commit takes 5 seconds.
    In this example there is no data traffic between the Oracle database and a client, but that leaves you 60 - 16 = 44 seconds to upload your points into a temporary table. After uploading into a temporary table you can do:
    insert /*+ append */ into testpoints
    select (sdo_geometry(2001,null,sdo_point_type(x,y, null),null,null))
    from temp_table;
    commit;
    Your INSERT ... VALUES approach is slow; do some bulk processing instead.
    I think it can be done, my XP computer that runs my database isn't state of the art.

  • Trying to optimize this simple query

    Hi,
    I am trying to optimize this simple query but the two methods I am trying actually make things worse.
    The original query is:
    SELECT customer_number, customer_name
    FROM bsc_pdt_account_mv
    where rownum <= 100
    AND Upper(customer_name) like '%SP%'
    AND customer_id IN (
    SELECT cust_id FROM bsc_pdt_assoc_sales_force_mv
    WHERE area_identifier IN (
    SELECT area_identifier FROM bsc_pdt_assoc_sales_force_mv
    WHERE ad_identifier = '90004918' or rm_identifier = '90004918' or tm_identifier = '90004918'))
    The result set of this query returns me the first 100 rows in 88 seconds and they are all distinct by default (don't know why they are distinct).
    My first attempt was to try to use table joins instead of the IN conditions:
    SELECT
    distinct -- A: I need to use distinct now
    customer_number, customer_name
    FROM bsc_pdt_account_mv pdt,
    bsc_pdt_assoc_sales_force_mv asf,
    ( SELECT distinct area_identifier FROM bsc_pdt_assoc_sales_force_mv
    WHERE ad_identifier = '90004918' or rm_identifier = '90004918' or tm_identifier = '90004918'
    ) area
    where
    area.area_identifier = asf.area_identifier
    AND asf.cust_id = pdt.customer_id
    AND Upper(customer_name) like '%SP%'
    AND rownum <= 100 -- B: strange when I comment this out
    order by 1
    I don't understand two things with this query. First issue: I now need to add DISTINCT because the result set is not distinct by default. Second issue (very strange): when I put in the rownum condition (<= 100) I get two rows in 1.5 seconds. If I remove the condition, I get 354 rows (the whole result set) in 326 seconds.
    My second attempt was to use EXISTS instead of IN:
    SELECT
    customer_number, customer_name
    FROM bsc_pdt_account_mv pdt
    where Upper(customer_name) like '%SP%'
    AND rownum <= 100
    AND EXISTS (
    select 1 from
    bsc_pdt_assoc_sales_force_mv asf,
    ( SELECT distinct area_identifier FROM bsc_pdt_assoc_sales_force_mv
    WHERE ad_identifier = '90004918' or rm_identifier = '90004918' or tm_identifier = '90004918'
    ) area
    where
    area.area_identifier = asf.area_identifier
    AND asf.cust_id = pdt.customer_id )
    This query returns a similar distinct result set as the original one but takes pretty much the same time (87 seconds).
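    A side note on the attempts above: ROWNUM is assigned before ORDER BY and DISTINCT are applied, so `rownum <= 100` caps the rows feeding those operations rather than picking the first 100 of the sorted, de-duplicated result. The usual top-N shape pushes the ordering into an inline view first; a simplified sketch using the thread's table, with the sales-force subquery omitted:

    ```sql
    -- Sketch only: the IN-subquery on bsc_pdt_assoc_sales_force_mv is left out.
    SELECT customer_number, customer_name
    FROM (
      SELECT DISTINCT customer_number, customer_name
      FROM   bsc_pdt_account_mv
      WHERE  UPPER(customer_name) LIKE '%SP%'
      ORDER  BY customer_name
    )
    WHERE ROWNUM <= 100;
    ```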

    The query below hangs when run in TOAD or PL/SQL Developer. I noticed there are no rows returned from the inner table for this condition.
    SELECT customer_number, customer_name
    FROM
    bsc_pdt_account_mv pdt_account
    where rownum <= 100
    AND exists (
    SELECT pdt_sales_force.cust_id
    FROM bsc_pdt_assoc_sales_force_mv pdt_sales_force
    WHERE pdt_account.customer_id = pdt_sales_force.cust_id
    AND (pdt_sales_force.rm_identifier = '90007761' or pdt_sales_force.tm_identifier = '90007761') )
    ORDER BY customer_name
    -- No rows returned by this query
    SELECT pdt_sales_force.cust_id
    FROM bsc_pdt_assoc_sales_force_mv pdt_sales_force
    WHERE pdt_sales_force.rm_identifier = '90007761' or pdt_sales_force.tm_identifier = '90007761'

  • How to optimize Database Calls to improve performance of an application

    Hi,
    I have a performance issue with my application. It takes a lot of time to load, as it makes several calls to the database. Moreover, the resultset returns more than 2000 records. I need to know the best way to improve the performance.
    1. What is the solution to optimize the database calls, so that I can improve the performance of my application and also improve the turnaround time to load the web pages?
    2. Stored procedures are a good way to get the data from the result set iteratively. How can I implement this solution in Java?
    This is very important, and any help is greatly appreciated.
    Thanks in Advance,
    Sailatha

    latha_kaps wrote:
    I have a performance issue with my application. It takes a lot of time to load, as it makes several calls to the database. Moreover, the resultset returns more than 2000 records. I need to know the best way to improve the performance.
    1. What is the solution to optimize the database calls, so that I can improve the performance of my application and also improve the turnaround time to load the web pages?
    2. Stored procedures are a good way to get the data from the result set iteratively. How can I implement this solution in Java?
    This is very important, and any help is greatly appreciated.
    1. 2000 records inside a resultset is not a big number.
    2. Which RDBMS you use?
    Concerning the answer to 2. you have different possibilities. The best thing is always to handle as many transactions as possible inside the database. Therefore a stored procedure is the best approach imho.
    Below there is an example for an Oracle RDBMS.
    Assumption #1 you have created an object (demo_obj) in your Oracle database:
    create type demo_obj as object( val1 number, val2 number, val3 number);
    create type demo_array as table of demo_obj;
    /
    Assumption #2: you've created a stored function to get the values of the array in your database:
    create or replace function f_demo ( p_num number )
    return demo_array
    as
        l_array demo_array := demo_array();
    begin
        select demo_obj(round(dbms_random.value(1,2000)),round(dbms_random.value(2000,3000)),round(dbms_random.value(3000,4000)))
        bulk collect into l_array
          from all_objects
         where rownum <= p_num;
        return l_array;
    end;
    /
    For getting the data out of the database, use the following Java program (please watch the comments):
    import java.sql.*;
    import java.io.*;
    import oracle.sql.*;
    import oracle.jdbc.*;
    public class VarrayDemo {
         public static void main(String args[]) throws IOException, SQLException {
              DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
              Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:oci:@TNS_ENTRY_OF_YOUR_DB", "scott", "tiger"); // I am using OCI driver here, but one can use thin driver as well
              conn.setAutoCommit(false);
              Integer numRows = new Integer(args[0]); // variable to accept the number of rows to return (passed at runtime)
              Object attributes[] = new Object[3]; // "attributes" of the "demo_obj" in the database
              // the object demo_obj in the db has 3 fields, all numeric
              // create an array of objects which has 3 attributes
              // we are building a template of that db object
              // the values i pass below are just generic numbers, 1,2,3 mean nothing really
              attributes[0] = new Integer(1);
              attributes[1] = new Integer(2);
              attributes[2] = new Integer(3);
              // this will represent the data type DEMO_OBJ in the database
              Object demo_obj[] = new Object[1];
              // make the connection between oracle <-> jdbc type
              demo_obj[0] = new oracle.sql.STRUCT(new oracle.sql.StructDescriptor(
                        "DEMO_OBJ", conn), conn, attributes);
              // the function returns an array (collection) of the demo_obj
              // make the connection between that array(demo_array) and a jdbc array
              oracle.sql.ARRAY demo_array = new oracle.sql.ARRAY(
                        new oracle.sql.ArrayDescriptor("DEMO_ARRAY", conn), conn,
                        demo_obj);
              // call the plsql function
              OracleCallableStatement cs =
                   (OracleCallableStatement) conn.prepareCall("BEGIN ? := F_DEMO(?);END;");
              // bind variables
              cs.registerOutParameter(1, OracleTypes.ARRAY, "DEMO_ARRAY");
              cs.setInt(2, numRows.intValue());
              cs.execute();
              // get the results of the oracle array into a local jdbc array
              oracle.sql.ARRAY results = (oracle.sql.ARRAY) cs.getArray(1);
              // flip it into a result set
              ResultSet rs = results.getResultSet();
              // process the result set
              while (rs.next()) {
                   // since it's an array of objects, get and display the value of the underlying object
                   oracle.sql.STRUCT obj = (STRUCT) rs.getObject(2);
                   Object vals[] = obj.getAttributes();
                   System.out.println(vals[0] + " " + vals[1] + " " + vals[2]);
              }
              // cleanup
              cs.close();
              conn.close();
         }
    }
    For selecting 20,000 records it takes only a few seconds.
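    For a quick sanity check of the stored function itself, the returned collection can also be queried directly from SQL with the TABLE operator, with no Java involved:

    ```sql
    -- Each demo_obj attribute comes back as a column of the result set.
    SELECT * FROM TABLE(f_demo(5));
    ```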
    Hth

  • Optimizer parameters different in 10053 trace

    Hello,
    The optimizer settings and the ones reported in the 10053 trace does not match. Is this a known issue ? Version is printed in the code snippet.
    Here, optimizer_mode is set to ALL_ROWS, but 10053 trace reports this as first_rows_100. Similarly, optimizer_index_cost_adj is 1. But, it is 25 in the trace.
    The query is not using hints.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production With the Partitioning, Real Application Clusters, OLAP and Data Mining options
    SQL> show parameter opti
    NAME                                 TYPE        VALUE
    filesystemio_options                 string      none
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.3
    optimizer_index_caching              integer     100
    optimizer_index_cost_adj             integer     1
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    SQL>
    Contents of the 10053 trace:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      sort_area_retained_size             = 65535
      optimizer_mode                      = first_rows_100
      optimizer_index_cost_adj            = 25
      optimizer_index_caching             = 100
      *********************************
    I can see the same values used here:
    Content of other_xml column
    ===========================
      db_version     : 10.2.0.3
      parse_schema   : COT_PLUS
      plan_hash      : 733167152
      Outline Data:
      /*+
        BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
          OPT_PARAM('optimizer_index_cost_adj' 25)
          OPT_PARAM('optimizer_index_caching' 100)
          FIRST_ROWS(100)
          OUTLINE_LEAF(@"SEL$5DA710D3")
          UNNEST(@"SEL$2")
          OUTLINE(@"SEL$1")
          OUTLINE(@"SEL$2")
          FULL(@"SEL$5DA710D3" "CDW"@"SEL$1")
          INDEX_RS_ASC(@"SEL$5DA710D3" "O"@"SEL$2" ("ORDERS"."STATUS_ID"))
          LEADING(@"SEL$5DA710D3" "CDW"@"SEL$1" "O"@"SEL$2")
          USE_NL(@"SEL$5DA710D3" "O"@"SEL$2")
        END_OUTLINE_DATA
      */
    Rgds,
    Gokul
    Edited by: Gokul Gopal on 13-Jun-2012 03:14

    Gokul,
    Please report the output of the following, which checks the V$SES_OPTIMIZER_ENV view for the current session:
    SELECT
      NAME,
      VALUE,
      ISDEFAULT
    FROM
      V$SES_OPTIMIZER_ENV
    WHERE
      SID=(SELECT SID FROM V$MYSTAT WHERE ROWNUM=1)
      AND NAME IN ('optimizer_mode','optimizer_index_cost_adj','optimizer_index_caching')
    ORDER BY
      NAME;
    In the same session, execute the following (your SQL statement with 1=1 added in the WHERE clause to force a hard parse):
    ALTER SESSION SET TRACEFILE_IDENTIFIER='OPTIMIZER_TEST';
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    select * from A where 1=1 AND col1 = (select to_char(col1) from B where status in (16,12,22));
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
    Take a look in the generated 10053 trace file. Are the values for optimizer_mode, optimizer_index_cost_adj, and optimizer_index_caching found in the OPTIMIZER_TEST 10053 trace file the same as those produced by the above select from V$SES_OPTIMIZER_ENV?
    Charles Hooper
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • USING ROWNUM in SQL

    Hi
    I have a SQL query which has a few nested queries. However, what I would like to do as each query is executed is count the number of rows returned in each query with rownum, and return this result up to the next query, and so forth. How can I do this? I have had a go, but I can't seem to get it to work.
    Thanks

    Ganesh Srivatsav wrote:
    >
    more readable construction is count(*). It is not recommended to substitute arbitrary literals.
    >
    oracle documentation clearly says you can use count(expression). As long as the expression is not null, it counts it.
    You can... but... count(1) doesn't make as much sense when reading the code as count(*) does. Count(*) sort of intimates "count everything", whereas count(1) intimates "count 1 thing" even though it really counts everything.
    Even Oracle itself rewrites count(1) as count(*) internally, so why make it have to do that step? Why not just provide it with what it wants in the first place?
    {message:id=9360008}
    Anyway, Oracle will take care of it ;-). And I also think that with all the new versions and features coming in, the optimizer is becoming more and more intelligent.
    I am sure the term standard will easily subside over the years.
    Erm... no. Standards are standards. The standard is to use count(*). Count(1) is left over from the days when people believed that count(1) was somehow faster than count(*), which it isn't.
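    For anyone who wants to convince themselves, a quick check against the standard SCOTT.EMP demo table (assumed to be installed) shows where the forms agree and where they differ:

    ```sql
    SELECT COUNT(*)    FROM emp;  -- counts all rows
    SELECT COUNT(1)    FROM emp;  -- same result; rewritten to COUNT(*) internally
    SELECT COUNT(comm) FROM emp;  -- differs: rows where COMM is NULL are not counted
    ```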

  • Why optimizer prefers nested loop over hash join?

    What do I look for if I want to find out why the server prefers a nested loop over hash join?
    The server is 10.2.0.4.0.
    The query is:
    SELECT p.*
        FROM t1 p, t2 d
        WHERE d.emplid = p.id_psoft
          AND p.flag_processed = 'N'
          AND p.desc_pool = :b1
          AND NOT d.name LIKE '%DUPLICATE%'
          AND ROWNUM < 2
    tkprof output is:
    Production
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          4           0
    Execute      1      0.00       0.01          0          4          0           0
    Fetch        1    228.83     223.48          0    4264533          0           1
    total        3    228.84     223.50          0    4264537          4           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 108  (SANJEEV)
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=4264533 pr=0 pw=0 time=223484076 us)
          1   NESTED LOOPS  (cr=4264533 pr=0 pw=0 time=223484031 us)
      10401    TABLE ACCESS FULL T1 (cr=192 pr=0 pw=0 time=228969 us)
          1    TABLE ACCESS FULL T2 (cr=4264341 pr=0 pw=0 time=223182508 us)
    Development
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          4          0           0
    Fetch        1      0.05       0.03          0        512          0           1
    total        3      0.06       0.06          0        516          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 113  (SANJEEV)
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=512 pr=0 pw=0 time=38876 us)
          1   HASH JOIN  (cr=512 pr=0 pw=0 time=38846 us)
         51    TABLE ACCESS FULL T2 (cr=492 pr=0 pw=0 time=30230 us)
        861    TABLE ACCESS FULL T1 (cr=20 pr=0 pw=0 time=2746 us)

    sanjeevchauhan wrote:
    What do I look for if I want to find out why the server prefers a nested loop over hash join?
    The server is 10.2.0.4.0.
    The query is:
    SELECT p.*
    FROM t1 p, t2 d
    WHERE d.emplid = p.id_psoft
    AND p.flag_processed = 'N'
    AND p.desc_pool = :b1
    AND NOT d.name LIKE '%DUPLICATE%'
    AND ROWNUM < 2
    You've got already some suggestions, but the most straightforward way is to run the unhinted statement in both environments and then force the join and access methods you would like to see using hints, in your case probably "USE_HASH(P D)" in your production environment and "FULL(P) FULL(D) USE_NL(P D)" in your development environment should be sufficient to see the costs and estimates returned by the optimizer when using the alternate access and join patterns.
    This gives you a first indication of why the optimizer thinks that the chosen access path is cheaper than the obviously less efficient plan selected in production.
    As already mentioned by Hemant, using bind variables complicates things a bit, since EXPLAIN PLAN is not reliable due to bind variable peeking being performed when executing the statement, but not when explaining.
    Since you're already on 10g you can get the actual execution plan used for all four variants using DBMS_XPLAN.DISPLAY_CURSOR which tells you more than the TKPROF output in the "Row Source Operation" section regarding the estimates and costs assigned.
    Of course the result of your whole exercise might be highly dependent on the actual bind variable value used.
    By the way, your statement is questionable in principle since you're querying for the first row of an indeterministic result set. It's not deterministic since you've defined no particular order so depending on the way Oracle executes the statement and the physical storage of your data this query might return different results on different runs.
    This is either an indication of a bad design (if the query is supposed to return exactly one row then you don't need the ROWNUM restriction) or an incorrect attempt at a Top 1 query, which requires you to specify an order somehow, either by adding an ORDER BY to the statement and wrapping it into an inline view, or e.g. by using analytic functions that allow you to specify a RANK by a defined ORDER.
    This is an example of how a deterministic Top N query could look like:
    SELECT *
    FROM (
    SELECT p.*
        FROM t1 p, t2 d
        WHERE d.emplid = p.id_psoft
          AND p.flag_processed = 'N'
          AND p.desc_pool = :b1
          AND NOT d.name LIKE '%DUPLICATE%'
    ORDER BY <order_criteria>
    )
    WHERE ROWNUM <= 1;
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Optimize query

    Hi All,
    Could you please tell me why this query takes more time when it is defined inside an item procedure in a package? The package is an outbound that takes data from Oracle to a staging table. When I do not use this query in the package, the outbound executes fast and in 40 minutes I get more than 3 lakh+ (300,000+) records in the staging table, but when I include the code below it takes more than 12 hours to fetch the records. Please help me optimize this query; I have created indexes on all the tables.
    SELECT distinct substr(TRIM(fds.short_text),instr(fds.short_text,'Rev:',1)+5,instr(fds.short_text,'UOM:',1)-instr(fds.short_text,'Rev:',1)-5)
    INTO v_Drawing_Rev
    FROM fnd_documents_tl fdt
    ,FND_DOCUMENTS_SHORT_TEXT fds
    ,fnd_attached_documents fad
    WHERE 1=1
    --AND (fdt.MEDIA_ID=14173223)
    AND fdt.language = 'US'
    AND fdt.MEDIA_ID = fds.MEDIA_ID
    AND fad.pk2_value = r_get_data(i).inventory_item_id --to_char(msi.inventory_item_id)
    AND fad.pk1_value = (SELECT to_char(organization_id)
    FROM org_organization_definitions
    WHERE organization_code = 'FLS')
    AND fad.document_id = fdt.document_id
    AND 'MTL_SYSTEM_ITEMS' = fad.ENTITY_NAME
    AND ROWNUM =1;
    EXCEPTION
    WHEN OTHERS THEN
    v_Drawing_Rev := NULL;
    END;
    Edited by: user605933 on Sep 23, 2010 7:12 AM
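    One detail visible in the snippet itself: the scalar subquery on org_organization_definitions is re-executed for every item the procedure processes, even though 'FLS' is a constant. It could be fetched once into a variable before the per-item loop; a sketch using the thread's own table names (v_org_id is an invented variable):

    ```sql
    -- Run once before the loop instead of inside the main query
    SELECT TO_CHAR(organization_id)
    INTO   v_org_id
    FROM   org_organization_definitions
    WHERE  organization_code = 'FLS';
    ```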

    Pl see these links on how to post a tuning request
    HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long ...
    HTH
    Srini

  • Optimizer not using indexes

    DBAs,
    I have a select query which uses an index scan when queried in the production database, where it executes in 20 seconds, but uses a full table scan in the non-production DB, where it takes 48 seconds. I rebuilt the indexes and gathered stats in the non-prod DB, but it still takes 47 seconds.
    Please advice......

    Here are the details
    EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS( -
      ownname => 'TCD_PRD_STG', -
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
      method_opt => 'for all columns size AUTO' -
    );
    SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('JOE','EMPLOYEE');
    EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('TCD_PRD_STG',DBMS_STATS.AUTO_SAMPLE_SIZE);
    1)Oracle versions are 10.2.0.2 in both prod & non-prod.
    2)Explain plan of prod. db
    SQL> SELECT ITEM_REFERENCE_ID FROM (SELECT DISTINCT * FROM ITEMS WHERE PUBLICATION_ID=20 AND ITEM_T
    YPE=16 AND ( ( ( SCHEMA_ID=31 ) ) AND ( ( (ITEM_REFERENCE_ID IN (SELECT ITEM_REFERENCE_ID FROM
    ( SELECT ITEM_REFERENCE_ID, COUNT(KEYWORD) AS tempkeywordcount FROM ITEM_CATEGORIES_AND_KEYWORDS WHE
    RE KEYWORD IN ('Africa') AND CATEGORY = 'Region' AND PUBLICATION_ID=20 GROUP BY ITEM_REFERENCE_ID) t
    empselectholder WHERE tempkeywordcount=1)) OR (ITEM_REFERENCE_ID IN (SELECT ITEM_REFERENCE_ID FROM (
    SELECT ITEM_REFERENCE_ID, COUNT(KEYWORD) AS tempkeywordcount FROM ITEM_CATEGORIES_AND_KEYWORDS WHER
    E KEYWORD IN ('Aig') AND CATEGORY = 'Region' AND PUBLICATION_ID=20 GROUP BY ITEM_REFERENCE_ID) temps
    electholder WHERE tempkeywordcount=1)) ) ) ) ORDER BY LAST_PUBLISHED_DATE DESC) WHERE ROWNUM<51;
    no rows selected
    Elapsed: 00:00:21.74
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=192 Card=50 Bytes=
    650)
    1 0 COUNT (STOPKEY)
    2 1 VIEW (Cost=192 Card=79 Bytes=1027)
    3 2 SORT (ORDER BY STOPKEY) (Cost=192 Card=79 Bytes=92272)
    4 3 HASH (UNIQUE) (Cost=191 Card=79 Bytes=92272)
    5 4 FILTER
    6 5 TABLE ACCESS (BY INDEX ROWID) OF 'ITEMS' (TABLE)
    (Cost=190 Card=808 Bytes=943744)
    7 6 INDEX (RANGE SCAN) OF 'IDX_ITEMS_PUB_URL' (IND
    EX) (Cost=107 Card=17024)
    8 5 FILTER
    9 8 HASH (GROUP BY) (Cost=42 Card=1 Bytes=540)
    10 9 TABLE ACCESS (BY INDEX ROWID) OF 'ITEM_CATEG
    ORIES_AND_KEYWORDS' (TABLE) (Cost=41 Card=1 Bytes=540)
    11 10 INDEX (RANGE SCAN) OF 'IX_ITEM_KEYWORDS' (
    INDEX) (Cost=35 Card=7403)
    12 5 FILTER
    13 12 HASH (GROUP BY) (Cost=3 Card=1 Bytes=540)
    14 13 TABLE ACCESS (BY INDEX ROWID) OF 'ITEM_CATEG
    ORIES_AND_KEYWORDS' (TABLE) (Cost=2 Card=1 Bytes=540)
    15 14 INDEX (RANGE SCAN) OF 'IX_ITEM_KEYWORDS' (
    INDEX) (Cost=1 Card=50)
    Statistics
    21 recursive calls
    0 db block gets
    4950582 consistent gets
    4060 physical reads
    13100 redo size
    240 bytes sent via SQL*Net to client
    333 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    0 rows processed
    explain plan of non-prod db
    1* SELECT ITEM_REFERENCE_ID FROM (SELECT DISTINCT * FROM ITEMS WHERE PUBLICATION_ID=20 AND ITEM_T
    SQL> /
    ITEM_REFERENCE_ID
    96672
    96680
    Elapsed: 00:00:47.74
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=502 Card=50 Bytes=
    650)
    1 0 COUNT (STOPKEY)
    2 1 VIEW (Cost=502 Card=255 Bytes=3315)
    3 2 SORT (ORDER BY STOPKEY) (Cost=502 Card=255 Bytes=40035
    4 3 HASH (UNIQUE) (Cost=501 Card=255 Bytes=40035)
    5 4 FILTER
    6 5 TABLE ACCESS (FULL) OF 'ITEMS' (TABLE) (Cost=500
    Card=2618 Bytes=411026)
    7 5 FILTER
    8 7 HASH (GROUP BY) (Cost=881 Card=1 Bytes=29)
    9 8 TABLE ACCESS (FULL) OF 'ITEM_CATEGORIES_AND_
    KEYWORDS' (TABLE) (Cost=880 Card=11 Bytes=319)
    10 5 FILTER
    11 10 HASH (GROUP BY) (Cost=881 Card=1 Bytes=29)
    12 11 TABLE ACCESS (FULL) OF 'ITEM_CATEGORIES_AND_
    KEYWORDS' (TABLE) (Cost=880 Card=1 Bytes=29)
    Statistics
    0 recursive calls
    0 db block gets
    5912606 consistent gets
    0 physical reads
    0 redo size
    387 bytes sent via SQL*Net to client
    435 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    2 rows processed
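    Before comparing plans any further, it may be worth confirming that both databases are working from comparable statistics. A quick check (schema name taken from the GATHER_SCHEMA_STATS call above, table names from the query) could look like:

    ```sql
    SELECT table_name, num_rows, last_analyzed
    FROM   dba_tables
    WHERE  owner = 'TCD_PRD_STG'
    AND    table_name IN ('ITEMS', 'ITEM_CATEGORIES_AND_KEYWORDS');
    ```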

  • Oracle rownum wrong explain plan

    SCOTT@oracle10g>create table t as select * from dba_objects;
    Table created.
    SCOTT@oracle10g>alter table t modify CREATED date not null;
    Table altered.
    SCOTT@oracle10g>insert into t select * from t;
    50416 rows created.
    SCOTT@oracle10g>insert into t select * from t;
    100832 rows created.
    SCOTT@oracle10g>insert into t select * from t;
    201664 rows created.
    SCOTT@oracle10g>commit;
    Commit complete.
    SCOTT@oracle10g>create index t_created on t(created) nologging;
    Index created.
    SCOTT@oracle10g>select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
    PL/SQL Release 10.2.0.3.0 - Production
    CORE    10.2.0.3.0      Production
    TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    SCOTT@oracle10g>set autot trace
    SCOTT@oracle10g>select t.owner,t.object_name   from
      2  (select rid from (
      3  select rownum rn,rid from
      4  (select rowid rid from t order by created)
      5  where rownum<100035)
      6  where rn>100000) h, t
      7  where t.rowid=h.rid;
    34 rows selected.
    Execution Plan
    Plan hash value: 3449471415
    | Id  | Operation           | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| T
    ime     |
    |   0 | SELECT STATEMENT    |           |   100K|    11M|       |  4776   (2)| 0
    0:00:58 |
    |*  1 |  HASH JOIN          |           |   100K|    11M|  3616K|  4776   (2)| 0
    0:00:58 |
    |*  2 |   VIEW              |           |   100K|  2442K|       |  1116   (2)| 0
    0:00:14 |
    |*  3 |    COUNT STOPKEY    |           |       |       |       |            |
            |
    |   4 |     VIEW            |           |   440K|  5157K|       |  1116   (2)| 0
    0:00:14 |
    |   5 |      INDEX FULL SCAN| T_CREATED |   440K|  9024K|       |  1116   (2)| 0
    0:00:14 |
    |   6 |   TABLE ACCESS FULL | T         |   440K|    39M|       |  1237   (2)| 0
    0:00:15 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="RID")
       2 - filter("RN">100000)
       3 - filter(ROWNUM<100035)
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
           5814  consistent gets
              0  physical reads
              0  redo size
           1588  bytes sent via SQL*Net to client
            422  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             34  rows processed   
    Here Oracle doesn't choose the best execution plan. I think it's because the optimizer computes a cardinality of 100K for the view, so it doesn't choose a nested loop. Why can't Oracle compute the cardinality as 34 here?
    |*  2 |   VIEW              |           |   100K|  2442K|       |  1116   (2)| 0
    SCOTT@oracle10g>select  t.owner,t.object_name   from t where rowid in
      2      (select rid from (
      3      select rownum rn,rid from
      4      (select rowid rid from t order by created)
      5      where rownum<100035)
      6      where rn>100000)
      7 
    SCOTT@oracle10g>/
    34 rows selected.
    Execution Plan
    Plan hash value: 1566335206
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |           |     1 |   107 |  1586   (2)| 00:00:20 |
    |   1 |  NESTED LOOPS               |           |     1 |   107 |  1586   (2)| 00:00:20 |
    |   2 |   VIEW                      | VW_NSO_1  |   100K|  1172K|  1116   (2)| 00:00:14 |
    |   3 |    HASH UNIQUE              |           |     1 |  2442K|            |          |
    |*  4 |     VIEW                    |           |   100K|  2442K|  1116   (2)| 00:00:14 |
    |*  5 |      COUNT STOPKEY          |           |       |       |            |          |
    |   6 |       VIEW                  |           |   440K|  5157K|  1116   (2)| 00:00:14 |
    |   7 |        INDEX FULL SCAN      | T_CREATED |   440K|  9024K|  1116   (2)| 00:00:14 |
    |   8 |   TABLE ACCESS BY USER ROWID| T         |     1 |    95 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("RN">100000)
       5 - filter(ROWNUM<100035)
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
            301  consistent gets
              0  physical reads
              0  redo size
           1896  bytes sent via SQL*Net to client
            422  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             34  rows processed
    SCOTT@oracle10g>select /*+ordered use_nl(t)*/ t.owner,t.object_name   from
      2  (select rid from (
      3  select rownum rn,rid from
      4  (select rowid rid from t order by created)
      5  where rownum<100035)
      6  where rn>100000) h, t
      7  where t.rowid=h.rid;
    34 rows selected.
    Execution Plan
    Plan hash value: 3976541160
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |           |   100K|    11M|   101K  (1)| 00:20:16 |
    |   1 |  NESTED LOOPS               |           |   100K|    11M|   101K  (1)| 00:20:16 |
    |*  2 |   VIEW                      |           |   100K|  2442K|  1116   (2)| 00:00:14 |
    |*  3 |    COUNT STOPKEY            |           |       |       |            |          |
    |   4 |     VIEW                    |           |   440K|  5157K|  1116   (2)| 00:00:14 |
    |   5 |      INDEX FULL SCAN        | T_CREATED |   440K|  9024K|  1116   (2)| 00:00:14 |
    |   6 |   TABLE ACCESS BY USER ROWID| T         |     1 |    95 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("RN">100000)
       3 - filter(ROWNUM<100035)
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
            304  consistent gets
              0  physical reads
              0  redo size
           1588  bytes sent via SQL*Net to client
            422  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             34  rows processed  
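    For what it's worth, the same page can also be written with the ROW_NUMBER() analytic function instead of nested ROWNUM views. This is only a sketch against the same hypothetical table T and index T_CREATED; it does not by itself fix the cardinality estimate, it is just the other common idiom for this kind of top-N paging:
    ```sql
    -- Analytic-function form of the same "rows 100001..100034" page.
    -- Oracle can still use a stopkey optimization (WINDOW NOSORT STOPKEY
    -- when the index supplies the order), and the join back to the table
    -- by rowid is expressed the same way as before.
    select t.owner, t.object_name
      from (select rid
              from (select rowid rid,
                           row_number() over (order by created) rn
                      from t)
             where rn >  100000
               and rn <= 100034) h,
           t
     where t.rowid = h.rid;
    ```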

    jinyu wrote:
    Thanks for your great reply and posting. Could you tell me why the subquery version has the least cost here?
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 107 | 1586 (2)| 00:00:20 |
    | 1 | NESTED LOOPS | | 1 | 107 | 1586 (2)| 00:00:20 |
    | 2 | VIEW | VW_NSO_1 | 100K| 1172K| 1116 (2)| 00:00:14 |
    | 3 | HASH UNIQUE | | 1 | 2442K| | |
    |* 4 | VIEW | | 100K| 2442K| 1116 (2)| 00:00:14 |
    |* 5 | COUNT STOPKEY | | | | | |
    | 6 | VIEW | | 440K| 5157K| 1116 (2)| 00:00:14 |
    | 7 | INDEX FULL SCAN | T_CREATED | 440K| 9024K| 1116 (2)| 00:00:14 |
    | 8 | TABLE ACCESS BY USER ROWID| T | 1 | 95 | 1 (0)| 00:00:01 |
    --------------------------------------------------------------------------------------
    You'll notice that as a result of a "driving" IN subquery Oracle has done a hash unique operation (line 3) on the rowids produced by the subquery. At this point the optimizer has lost all knowledge of the number of distinct values for that column in the subquery and has come back with a cardinality of one. The re-appearance of 100K as the cardinality in line 2 is an error, but I don't think the optimizer has used that value in later arithmetic.
    Given the cardinality of one, the obvious path into the T table is a nested loop.
    The same type of problem appears when you use the table() operator in joins - you can use the cardinality() hint to try and tell Oracle how many rows the table() will produce, but that doesn't tell it how many distinct values there are in the join columns - and that's an important detail when it works out the join cardinality and method.
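    To illustrate that table() point, here is a sketch (not from the original thread; the collection type NUM_TAB is assumed to exist, and T is assumed to have been created from ALL_OBJECTS so it has an OBJECT_ID column):
    ```sql
    -- Assumed setup:  create type num_tab as table of number;
    -- The (undocumented) cardinality hint sets the optimizer's row
    -- estimate for the collection C to 10 ...
    select /*+ cardinality(c 10) */ t.owner, t.object_name
      from table(num_tab(1, 2, 3)) c,
           t
     where t.object_id = c.column_value;
    -- ... but it says nothing about how many DISTINCT values of
    -- column_value there are, so the join cardinality (and therefore
    -- the join method) can still be estimated badly.
    ```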
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Optimizer mode confusion

    Hello experts,
    When we set optimizer mode to first_rows_100 or first_rows etc. The fetch rows doesn't change. I am trying to understand the differences between first_rows and all_rows. It gives preference to index scan against full table scan, even the index scan is no good. And also prefers nested loop over hash join. HOWEVER, except all these, please correct me if I am wrong, I do understand that first_nows_100 only fetch 100 rows regardless default fecth row, am I wrong? What do you think about the following example in terms of CONSISTENT GETS????
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> alter system flush shared_pool;
    System altered.
    SQL> set autotrace traceonly
    SQL> select * from my_test where id < 181000;
    31000 sat²rlar² seildi.
    Y³r³tme Plan²
    Plan hash value: 1615681525
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         | 31001 |   605K|    53   (2)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| MY_TEST | 31001 |   605K|    53   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("ID"<181000)
    Statistics
            454  recursive calls
              0  db block gets
           2323  consistent gets
             93  physical reads
            116  redo size
         843244  bytes sent via SQL*Net to client
          23245  bytes received via SQL*Net from client
           2068  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
          31000  rows processed
    SQL> select * from my_test where id < 181000;
    31000 sat²rlar² seildi.
    Y³r³tme Plan²
    Plan hash value: 1615681525
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         | 31001 |   605K|    53   (2)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| MY_TEST | 31001 |   605K|    53   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("ID"<181000)
    Statistics
              0  recursive calls
              0  db block gets
           2235  consistent gets
              0  physical reads
              0  redo size
         843244  bytes sent via SQL*Net to client
          23245  bytes received via SQL*Net from client
           2068  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          31000  rows processed
    SQL> alter session set optimizer_mode = FIRST_ROWS_100;
    Session altered.
    SQL> select * from my_test where id < 181000;
    31000 sat²rlar² seildi.
    Y³r³tme Plan²
    Plan hash value: 509756919
    | Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |              |   100 |  2000 |     4   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| MY_TEST      |   100 |  2000 |     4   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | SYS_C0011105 |       |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("ID"<181000)
    Statistics
              1  recursive calls
              0  db block gets
           4402  consistent gets
              0  physical reads
              0  redo size
        1159430  bytes sent via SQL*Net to client
          23245  bytes received via SQL*Net from client
           2068  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          31000  rows processed
    Thanks in advance.

    The first_rows(n) hint is only about costing: it instructs the optimizer to find a plan that returns the first n rows as fast as possible, even though this approach may increase the total execution time of the query. This can be useful for applications that need only the top results most of the time. Consider the following example:
    drop table t;
    create table t (
        col1 not null
      , col2
    )
    as
    select mod(rownum, 100) col1
         , lpad('*', 50, '*') col2
      from dual
    connect by level <= 100000;
    exec dbms_stats.gather_table_stats(user, 'T')
    create index t_idx on t(col1);
    explain plan for
    select *
      from t
    where col1 = 1
    order by col1;
    select * from table(dbms_xplan.display);
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |  1000 | 54000 |   423   (0)| 00:00:03 |
    |*  1 |  TABLE ACCESS FULL| T    |  1000 | 54000 |   423   (0)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter("COL1"=1)
    explain plan for
    select /*+ first_rows(10) */ *
      from t
    where col1 = 1
    order by col1;
    select * from table(dbms_xplan.display);
    | Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |       |    10 |   540 |    10   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T     |    10 |   540 |    10   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | T_IDX |       |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("COL1"=1)
    Obviously the hint changes the costing massively - and for the given query the top 10 results could indeed be returned faster by the index scan (though the difference would be small for this small example).
    The details of the costing are not that simple; they are described in Randolf Geist's presentation http://www.sqltools-plusplus.org:7676/media/FIRST_ROWS_n%20presentation%20UKOUG%202009.pdf (titled: Everything you wanted to know about first_rows_n but were afraid to ask).
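    In other words, FIRST_ROWS_100 never limits the result set; it only changes how plans are costed. If you actually want at most 100 rows back, you have to say so in the query. A sketch against the same my_test table from the question:
    ```sql
    -- first_rows_100 changes only the optimizer's costing; a ROWNUM
    -- predicate changes the result set itself. This returns at most
    -- 100 (arbitrary) matching rows:
    select *
      from my_test
     where id < 181000
       and rownum <= 100;
    ```
    (If the 100 rows must also be the "first" 100 by some order, the ORDER BY has to go in an inline view with ROWNUM applied outside it, as in the top-N examples earlier in this thread.)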
