Performance of this query...

Hi all,
Will this affect the performance of the program, and is there any alternative way to fetch the data?
I am very confused about why it was written this way.
LOOP AT t_prd_data INTO wa_prd_data.
  SELECT a~aplzl b~vgw03 b~vge03 b~bmsch
    INTO (wa_plpo-aplzl, wa_plpo-vgw03, wa_plpo-vge03, wa_plpo-bmsch)
    FROM afvc AS a INNER JOIN afvv AS b
      ON b~aufpl = a~aufpl AND
         b~aplzl = a~aplzl
    WHERE a~aufpl = wa_prd_data-aufpl AND
          a~vornr = wa_prd_data-vornr.
    IF sy-subrc EQ 0.
      wa_plpo-aufnr = wa_prd_data-aufnr.
      COLLECT wa_plpo INTO t_plpo.
    ENDIF.
  ENDSELECT.
ENDLOOP.
Can anybody please explain this?

Hi,
You are putting a SELECT statement inside the loop, and it is not even a simple SELECT: it is a SELECT ... ENDSELECT,
which itself acts like another loop.
So the performance will be bad.
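One alternative (a rough sketch only: the helper names ty_opdata, t_opdata and wa_opdata are assumed, everything else is taken from the posted code) is to read everything in one array fetch with the join plus FOR ALL ENTRIES, and then COLLECT from the internal table instead of going back to the database inside the loop:
* Sketch of an alternative to the nested SELECT ... ENDSELECT.
TYPES: BEGIN OF ty_opdata,
         aufpl TYPE afvc-aufpl,
         vornr TYPE afvc-vornr,
         aplzl TYPE afvc-aplzl,
         vgw03 TYPE afvv-vgw03,
         vge03 TYPE afvv-vge03,
         bmsch TYPE afvv-bmsch,
       END OF ty_opdata.
DATA: t_opdata  TYPE STANDARD TABLE OF ty_opdata,
      wa_opdata TYPE ty_opdata.

IF NOT t_prd_data[] IS INITIAL.          " FOR ALL ENTRIES needs a non-empty driver table
  SELECT a~aufpl a~vornr a~aplzl b~vgw03 b~vge03 b~bmsch
    INTO TABLE t_opdata
    FROM afvc AS a INNER JOIN afvv AS b
      ON b~aufpl = a~aufpl AND
         b~aplzl = a~aplzl
    FOR ALL ENTRIES IN t_prd_data
    WHERE a~aufpl = t_prd_data-aufpl
      AND a~vornr = t_prd_data-vornr.
ENDIF.

LOOP AT t_prd_data INTO wa_prd_data.
  LOOP AT t_opdata INTO wa_opdata
       WHERE aufpl = wa_prd_data-aufpl
         AND vornr = wa_prd_data-vornr.
    CLEAR wa_plpo.
    wa_plpo-aplzl = wa_opdata-aplzl.
    wa_plpo-vgw03 = wa_opdata-vgw03.
    wa_plpo-vge03 = wa_opdata-vge03.
    wa_plpo-bmsch = wa_opdata-bmsch.
    wa_plpo-aufnr = wa_prd_data-aufnr.
    COLLECT wa_plpo INTO t_plpo.
  ENDLOOP.
ENDLOOP.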
Beyond that, see these points on performance tuning:
Ways of Performance Tuning
1.     Selection Criteria
2.     Select Statements
•     Select Queries
•     SQL Interface
•     Aggregate Functions
•     For all Entries
•     Select over more than one internal table
Selection Criteria
1.     Restrict the data in the SELECT itself using the selection criteria, rather than fetching everything and filtering it out in the ABAP code with a CHECK statement.
2.     Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
  CHECK: SBOOK_WA-CARRID = 'LH' AND
         SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized considerably by the code written below, which avoids the CHECK and selects with a selection list.
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
Select Statements   Select Queries
1.     Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
  SELECT * FROM EKAN INTO EKAN_WA
      WHERE EBELN = EKKO_WA-EBELN.
  ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
2.     Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than appending them row by row with APPEND.
SELECT * FROM SBOOK INTO SBOOK_WA.
  CHECK: SBOOK_WA-CARRID = 'LH' AND
         SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized considerably by the code written below, which avoids the CHECK, selects with a selection list and fetches the data in one shot using INTO TABLE.
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
3.     When a base table has multiple indexes, the WHERE clause should specify the fields in the order of one index, either the primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields appear in the same order. In certain scenarios it is advisable to check whether a new index can speed up the program; this comes in handy for programs that access data from the finance tables.
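A hypothetical illustration (ZSALES and its secondary index Z01 are invented names for this sketch, not real objects):
* Hypothetical: ZSALES and its secondary index Z01 on (VKORG, KUNNR) are invented.
PARAMETERS: P_VKORG TYPE ZSALES-VKORG,
            P_KUNNR TYPE ZSALES-KUNNR.
DATA T_SALES TYPE STANDARD TABLE OF ZSALES.

* The WHERE clause lists the indexed fields in the same order as index Z01
* and with '=' conditions, so the optimizer can choose that index.
SELECT * FROM ZSALES INTO TABLE T_SALES
  WHERE VKORG = P_VKORG
    AND KUNNR = P_KUNNR.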
4.     For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
SELECT * FROM SBOOK INTO SBOOK_WA
  UP TO 1 ROWS
  WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
    WHERE CARRID = 'LH'.
  EXIT.
ENDSELECT.
5.     Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
SELECT SINGLE requires only one communication with the database system, whereas SELECT-ENDSELECT needs at least two.
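A short sketch against the demo table SBOOK (the literal key values below are placeholders only):
* SBOOK's full primary key, besides the client, is CARRID, CONNID, FLDATE, BOOKID.
DATA SBOOK_WA TYPE SBOOK.

SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
  WHERE CARRID = 'LH'
    AND CONNID = '0400'
    AND FLDATE = '20260101'
    AND BOOKID = '00000001'.
IF SY-SUBRC = 0.
  " exactly one row came back in a single database round trip
ENDIF.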
Select Statements SQL Interface
1.     Use column updates instead of single-row updates
to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
  SFLIGHT_WA-SEATSOCC =
    SFLIGHT_WA-SEATSOCC - 1.
  UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
       SET SEATSOCC = SEATSOCC - 1.
2.     For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
  WHERE CARRID = 'LH'
    AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
  WHERE MANDT IN ( SELECT MANDT FROM T000 )
    AND CARRID = 'LH'
    AND CONNID = '0400'.
ENDSELECT.
3.     Using buffered tables improves the performance considerably.
Bypassing the table buffer increases the database load and network traffic considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
  BYPASSING BUFFER
  WHERE     SPRSL = 'D'
        AND ARBGB = '00'
        AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100  INTO T100_WA
  WHERE     SPRSL = 'D'
        AND ARBGB = '00'
        AND MSGNR = '999'.
Select Statements  Aggregate Functions
•     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
maxno = 0.
SELECT * FROM zflight WHERE airln = 'LF' AND cntry = 'IN'.
  CHECK zflight-fligh > maxno.
  maxno = zflight-fligh.
ENDSELECT.
The above code can be optimized considerably by using the following code.
SELECT MAX( fligh ) FROM zflight INTO maxno WHERE airln = 'LF' AND cntry = 'IN'.
Select Statements  For All Entries
•     FOR ALL ENTRIES creates a WHERE clause in which all the entries of the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus:
•     Handles large amounts of data
•     Mixes processing and reading of data
•     Fast internal reprocessing of data
•     Fast
The minus:
•     Difficult to program/understand
•     Memory could be critical (use FREE or PACKAGE SIZE; see the sketch after this list)
Points that must be considered when using FOR ALL ENTRIES:
•     Check that data is present in the driver table
•     Sort the driver table
•     Remove duplicates from the driver table
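A minimal sketch of the PACKAGE SIZE / FREE idea mentioned above (ZFLIGH is reused from the example below; the 1000-row block size is an assumption, not from the original text):
DATA lt_fligh TYPE STANDARD TABLE OF zfligh.

SELECT * FROM zfligh
  INTO TABLE lt_fligh PACKAGE SIZE 1000
  WHERE cntry = 'IN'.
  " lt_fligh now holds at most 1000 rows; process them here.
  " INTO TABLE replaces the contents on the next pass, so memory stays flat.
ENDSELECT.
FREE lt_fligh.   " release the memory once the last block has been processed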
Consider the following extract.
LOOP AT int_cntry.
  SELECT SINGLE * FROM zfligh INTO int_fligh
    WHERE cntry = int_cntry-cntry.
  APPEND int_fligh.
ENDLOOP.
The above code can be optimized by using the following code.
SORT int_cntry BY cntry.
DELETE ADJACENT DUPLICATES FROM int_cntry.
IF NOT int_cntry[] IS INITIAL.
  SELECT * FROM zfligh APPENDING TABLE int_fligh
    FOR ALL ENTRIES IN int_cntry
    WHERE cntry = int_cntry-cntry.
ENDIF.
Select Statements Select Over more than one Internal table
1.     It's better to use a view instead of nested SELECT statements.
SELECT * FROM DD01L INTO DD01L_WA
  WHERE DOMNAME LIKE 'CHAR%'
        AND AS4LOCAL = 'A'.
  SELECT SINGLE * FROM DD01T INTO DD01T_WA
    WHERE   DOMNAME    = DD01L_WA-DOMNAME
        AND AS4LOCAL   = 'A'
        AND AS4VERS    = DD01L_WA-AS4VERS
        AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be optimized by extracting all the data from the view DD01V in one pass.
SELECT * FROM DD01V INTO  DD01V_WA
  WHERE DOMNAME LIKE 'CHAR%'
        AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
2.     To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary key fields of the joined tables are available in the WHERE or ON conditions; if the primary keys are not provided, the join itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
  SELECT * FROM EKAN INTO EKAN_WA
      WHERE EBELN = EKKO_WA-EBELN.
  ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
3.     Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
  INTO TABLE T_SPFLI
  WHERE CITYFROM = 'FRANKFURT'
    AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
    INTO SFLIGHT_WA
    FOR ALL ENTRIES IN T_SPFLI
    WHERE SEATSOCC < F~SEATSMAX
      AND CARRID = T_SPFLI-CARRID
      AND CONNID = T_SPFLI-CONNID
      AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
    WHERE SEATSOCC < F~SEATSMAX
      AND EXISTS ( SELECT * FROM SPFLI
                     WHERE CARRID = F~CARRID
                       AND CONNID = F~CONNID
                       AND CITYFROM = 'FRANKFURT'
                       AND CITYTO = 'NEW YORK' )
      AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
Internal Tables
1.     Table operations should be done using explicit work areas rather than via header lines.
2.     Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
4.     A binary search using a secondary index takes considerably less time.
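No example is given for point 4; a minimal sketch, assuming the running example table ITAB also has a non-key field CITY (an invented name): keep a copy sorted by that field and use it as a secondary index.
* Sketch only: CITY is an assumed non-key component of ITAB.
DATA ITAB_BY_CITY LIKE ITAB.

ITAB_BY_CITY[] = ITAB[].
SORT ITAB_BY_CITY BY CITY.

* Binary search on the secondary sort order instead of a linear scan of ITAB
READ TABLE ITAB_BY_CITY INTO WA WITH KEY CITY = 'BERLIN' BINARY SEARCH.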
5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
  CHECK WA-K = 'X'.
ENDLOOP.
6.     Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 ..." accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7.     Accessing the table entries directly with "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably, because no separate MODIFY is needed.
Modifying only selected components is faster than modifying complete lines.
e.g.
LOOP AT ITAB ASSIGNING <WA>.
  I = SY-TABIX MOD 2.
  IF I = 0.
    <WA>-FLAG = 'X'.
  ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
  I = SY-TABIX MOD 2.
  IF I = 0.
    WA-FLAG = 'X'.
    MODIFY ITAB FROM WA.
  ENDIF.
ENDLOOP.
8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
LOOP AT ITAB1 INTO WA1.
  READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
  IF SY-SUBRC = 0.
    ADD: WA1-VAL1 TO WA2-VAL1,
         WA1-VAL2 TO WA2-VAL2.
    MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
  ELSE.
    INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
  ENDIF.
ENDLOOP.
The above code uses READ ... BINARY SEARCH for collect semantics; each READ runs in O( log2( n ) ) time. The above piece of code can be optimized further:
LOOP AT ITAB1 INTO WA.
  COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O(1)).
9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
  APPEND WA TO ITAB2.
ENDLOOP.
10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
  IF WA = PREV_LINE.
    DELETE ITAB.
  ELSE.
    PREV_LINE = WA.
  ENDIF.
ENDLOOP.
11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
  DELETE ITAB INDEX 450.
ENDDO.
12.   Copying internal tables using "ITAB2[] = ITAB1[]" is considerably faster than copying with "LOOP-APPEND-ENDLOOP".
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
  APPEND WA TO ITAB2.
ENDLOOP.
13.   Specify the sort key as restrictively as possible to run the program faster.
"SORT ITAB BY K." makes the program run faster as compared to "SORT ITAB."
Hashed and Sorted Tables
1.     For single read access, hashed tables are faster than sorted tables.
2.     For partial sequential access, sorted tables are faster than hashed tables.
Point # 1
Consider the following example, where HTAB is a hashed table and STAB is a sorted table.
DO 250 TIMES.
  N = 4 * SY-INDEX.
  READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
  IF SY-SUBRC = 0.
  ENDIF.
ENDDO.
For single read access this runs faster than the same code on the sorted table:
DO 250 TIMES.
  N = 4 * SY-INDEX.
  READ TABLE STAB INTO WA WITH TABLE KEY K = N.
  IF SY-SUBRC = 0.
  ENDIF.
ENDDO.
Point # 2
Similarly, for partial sequential access STAB runs faster than HTAB.
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.

Similar Messages

  • Please help me how to improve the performance of this query further.

    Hi All,
    Please help me how to improve the performance of this query further.
    Thanks.

    Hi,
    this is not your first SQL tuning request in this community -- you really should learn how to obtain performance diagnostics.
    The information you posted is not nearly enough to even start troubleshooting the query -- you haven't specified elapsed time, I/O, or the actual number of rows the query returns.
    The only piece of information we have is saying that your query executes within a second. If we believe this, then your query doesn't need tuning. If we don't, then we throw it away
    and we're left with nothing.
    Start by reading this blog post: Kyle Hailey » Power of DISPLAY_CURSOR
    and applying this knowledge to your case.
    Best regards,
      Nikolay

  • How to performance tune this query

    I need some inputs on how to do performance tuning on this query to improve performance.
    It takes around 45 secs to run. Is it possible to make any improvements in this by putting hints or writing in another way?
    select count(*) as nCount
    from A, B, C
    where A.COL1 = B.COL1
      and A.COL2 <> 'COM'
      and B.COL2 = C.COL1
      and B.COL3 is null
      and B.COL4 = 'TEST'
    This is the query plan:
    Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
    SELECT STATEMENT Optimizer Mode=CHOOSE 1 51
    SORT AGGREGATE 1 37
    HASH JOIN 48 K 1 M 51
    TABLE ACCESS FULL A 68 K 998 K 32
    NESTED LOOPS 98 K 2 M 5
    TABLE ACCESS BY INDEX ROWID B 142 K 2 M 4
    INDEX SKIP SCAN XIF37B 142 K 6
    INDEX UNIQUE SCAN XPKC 1 5

    Mcka
    As well as EXPLAIN PLAN, let us know what proportion of rows are visited by this query. It may be that it is not using a full table scan when it should (or vice versa).
    And of course we'd need to know what indexes are available, and how selective they are for the predicates you have in this query ...
    Regards Nigel

  • Performance on this query

    I have 2 tables with identical structure:
    COD NUMBER PK
    F1 VARCHAR2
    F2 VARCHAR2
    F3 VARCHAR2
    INSERT INTO TABLE1
    SELECT * FROM TABLE2 A WHERE NOT EXISTS
    (SELECT '1' FROM TABLE1 B WHERE
    NVL(B.F1,'x') = NVL(A.F1,'x') AND
    NVL(B.F2,'x') = NVL(A.F2,'x') AND
    NVL(B.F3,'x') = NVL(A.F3,'x') )
    The first table has 30,000 records, the second has 25,000.
    I created one index on F1, F2 and F3.
    Why is this query so slow?
    Any ideas?
    Thanks
    Message was edited by:
    olivjoa

    Your query as posted does a full table scan of table1 for every row in table2 because the NVL function precludes the use of an index on table1. If you really do need the NVL, and I would think carefully about whether you do, then one of these may be a better alternative:
    INSERT INTO table1
    SELECT *
    FROM table2
    WHERE (NVL(F1,'x'), NVL(F2,'x'), NVL(F3,'x')) NOT IN
               (SELECT NVL(F1,'x'), NVL(F2,'x'), NVL(F3,'x')
                FROM table1);
    INSERT INTO table1
    SELECT *
    FROM table2
    WHERE (NVL(F1,'x'), NVL(F2,'x'), NVL(F3,'x')) IN
               (SELECT NVL(F1,'x'), NVL(F2,'x'), NVL(F3,'x')
                FROM table2
                MINUS
                SELECT NVL(F1,'x'), NVL(F2,'x'), NVL(F3,'x')
                FROM table1);
    And don't let other people who incorrectly believe that NOT IN is inherently evil put you off. NOT IN is a perfectly good construct and is often the best (i.e. fastest, most efficient) way to do a task.
    The only thing you need to be aware of is that if there is a chance that the result set from the select statement in the NOT IN clause may contain an entirely NULL record, then NOT IN will effectively return no rows. Because of the NVL in your query (and again, I question that) there is no chance of that.
    SQL> SELECT * FROM t;
            ID
             1
             2
             3
             4
             5
    SQL> SELECT * FROM t1;
            ID
             1
             3
             5
    SQL> SELECT * FROM t
      2  WHERE ID NOT IN (SELECT id FROM t1);
            ID
             2
             4
    SQL> INSERT INTO t1 VALUES(NULL);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> SELECT * FROM t
      2  WHERE id NOT IN (SELECT id FROM t1);
    no rows selected
    TTFN
    John

  • Improving the performance of this query

    Hi, do you see any change we could make to this to improve its performance? Please - I appreciate you taking a few minutes to help with analysing and tuning it.
    select hoc.hoc_id,
    hoc.mstr_key_id,
    address.DISP_NME,
    blck.st_blck_id,
    blck.mstr_key_id as blck_mstr_key,
    bldg.BLDG_ID,
    hoc.BLDG_KEY_ID,
    address.OCPD_IND,
    address.TP_TYPE_CDE,
    '' as cmbcNumber,
    '' as tpSiteNumber,
    case
    when address.cmbc_id is not null then amsowner.ams_038_direct_connect.GetMobilierForCMB(address.cmbc_id)
    when address.lbc_id is not null and address.cmbc_id is null then amsowner.ams_038_direct_connect.GetMobilierForLBC(address.lbc_id)
    END as Mobilier,
    choice.cnsmr_chce_ind,
    hoc.SRT_IND,
    hoc.DLVRY_IND,
    case valRES_STAT_CDE
    when 123444 then hoc.cse_sprtn_wdth
    when 123555 then hoc_pend.cse_sprtn_wdth
    when 123666 then hoc_pend.cse_sprtn_wdth
    end as cse_sprtn_wdth,
    case valRES_STAT_CDE
    when 123444 then hoc.admail_color_id
    when 123555 then hoc.admail_color_id
    when 123666 then hoc_pend.admail_color_id
    end as admail_color_id,
    case valRES_STAT_CDE
    when 123444 then hoc.tieout
    when 123555 then hoc_pend.tieout
    when 123666 then hoc_pend.tieout
    end as tieout,
    hoc.callr_Ind,
    hoc.PCKUP_ind,
    case
    when hldout.hld_out_id > 0 then 33
    else 34
    end as HLDOUT_IND,
    case valRES_STAT_CDE
    when 123444 then hoc.BREAKER_CARD_NUM
    when 123555 then hoc_pend.BREAKER_CARD_NUM
    when 123666 then hoc_pend.BREAKER_CARD_NUM
    end as BREAKER_CARD_IND,
    Case
    when valRES_STAT_CDE = 123444 and hoc.MP_ID is not null and hoc.MP_ID <> 0 then 33
    when valRES_STAT_CDE = 123555 and hoc_pend.MP_ID is not null and hoc_pend.MP_ID <> 0 then 33
    when valRES_STAT_CDE = 123666 and hoc_pend.MP_ID is not null and hoc_pend.MP_ID <> 0 then 33
    else 34 END as MP_IND,
    case
    when valRES_STAT_CDE = 123444 then hoc.LCRMS_SEQ_NUM
    when valRES_STAT_CDE = 123555 then hoc_pend.DEL_SEQ
    when valRES_STAT_CDE = 123666 then hoc_pend.DEL_SEQ
    END as DEL_SEQ,
    case
    when (hoc_type_cde = 154 and hoc.MP_ID is not null ) then (NVL(mp.SLCLRES_RATE , 0)+ NVL(mp.OSSDRES_RATE,0))
    when (hoc_type_cde = 154 and hoc.MP_ID is null ) then (NVL(asmt.SLCLRES_RATE , 0)+ NVL(asmt.OSSDRES_RATE,0))
    when (hoc_type_cde = 156 and hoc.MP_ID is not null )then (NVL(mp.SLCLCOM_RATE,0) + NVL(mp.OSSDCOM_RATE,0))
    when (hoc_type_cde = 156 and hoc.MP_ID is null ) then (NVL(asmt.SLCLCOM_RATE,0) + NVL(asmt.OSSDCOM_RATE,0))
    else 0
    END As AvgMail,
    Case
    when valRES_STAT_CDE = 123444 then hoc.drct_ind
    when valRES_STAT_CDE = 123555 then hoc_pend.drct_ind
    when valRES_STAT_CDE = 123666 then hoc_pend.drct_ind
    END as drct_ind,
    Case
    when pc.ldu_type_cde in (492, 564, 565, 566, 567 ) then 33
    else 34
    end as lvr_ind,
    hoc.A12_CARD_IND,
    hoc.DNC_CARD_IND,
    hoc.CARD_IND,
    hoc.FRCE_CARD_IND,
    hoc.EXTRA_CARD_NBR,
    hoc.TTL_HOC_CNT,
    --(select BSNS_NME_EN from occupant where occupant.ADDR_MAIL_ID = address.addr_id and occupant.prmry_ind = 33 and rownum <= 1 )as PRIMARY_BUS_NME_EN,
    --(select BSNS_NME_FR from occupant where occupant.ADDR_MAIL_ID = address.addr_id and occupant.prmry_ind = 33 and rownum <= 1 )as PRIMARY_BUS_NME_FR,
    blck.blck_seq,
    address.ADDR_NUM,
    address.ADDR_SFX_CDE,
    address.ADDR_STE_NUM,
    hoc.HOC_TYPE_CDE,
    hoc.CSE_SPRTN_GRP_ID,
    pc.pc_id As pc_id,
    pc.disp_nme as pc_disp_nme,
    hoc.bag_ind,
    hoc.CASETAG,
    occupant.BSNS_NME_EN as PRIMARY_BUS_NME_EN , occupant.BSNS_NME_FR as PRIMARY_BUS_NME_FR
    from amsowner.AMS_038_HOC hoc
    left join amsowner.ams_038_hoc_pndng hoc_pend
    on hoc.MSTR_KEY_ID = hoc_pend.MSTR_KEY_ID
    inner join amsowner.AMS_038_ST_BlCK blck
    on hoc.st_blck_key_id = blck.mstr_key_id
    inner join amsowner.postal_code pc
    on blck.pc_id = pc.pc_id
    left join amsowner.AMS_038_bldg bldg
    on hoc.BLDG_KEY_ID = bldg.MSTR_KEY_ID
    inner join amsowner.address address
    on address.addr_id = hoc.addr_mail_id
    inner join amsowner.addr_lctn_to_mail locToMail
    on locToMail.addr_mail_id = address.addr_id
    inner join amsowner.addr_chce choice
    on choice.addr_id = locToMail.addr_lctn_id
    left join occupant on (occupant.ADDR_MAIL_ID = address.addr_id and occupant.prmry_ind = 33)
    left join amsowner.ams_038_hld_out hldout
    on (hldout.mstr_key_id = hoc.mstr_key_id and hldout.end_dte is null)
    left join amsowner.ams_038_dm dm
    on dm.dm_id = blck.dm_id
    left join ams_038_assmt asmt
    on blck.pc_id = asmt.pc_id and asmt.dpt_cde_nme = valDpt_cde_nme and ((asmt.case_type_cde = 1220 and dm.a62_cse_ind = 33) or (asmt.case_type_cde = 1219 and dm.a62_cse_ind = 34))
    left join ams_038_mail_prfl mp
    on hoc.mp_id = mp.mp_id and ((mp.case_type_cde = 1220 and dm.a62_cse_ind = 33) or (mp.case_type_cde = 1219 and dm.a62_cse_ind = 34))
    where hoc.mstr_key_id = valMSTR_KEY_ID and blck.DPT_CDE_NME = valDpt_cde_nme and hoc.DPT_CDE_NME = valDpt_cde_nme and blck.RSTRCTR_STAT_CDE = restCode;
    Thanks a lot :)

    Hi and welcome to the forum.
    "I appreciate you taking a few minutes to help with analysing and tuning it" - unfortunately, it's not that simple.
    We would need some more input here, like:
    - database version
    - optimizer settings
    - execution plans
    - etc.
    Tuning is a complex matter, since many parameters come into play here.
    If you want some useful responses then see:
    [When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=3299435]
    [How to post a SQLStatement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
    to understand what information would also be very useful to us (and, if needed, have your DBA participating in this as well).

  • Utl_raw.bit_and - how can I improve performance of this query?

    Hi all
    Hugely grateful for any light anyone can shed.
    I need to do bit AND operations on 128 bit numbers and I'm stuck in Oracle 10g.
    10g provides a very nice bitand() function that can be applied to numbers but it only works up to 64 bit numbers (actually only 62 I think because it needs a couple of bits)
    So I've been looking at using the utl_raw.bit_and function. It works very well, except that I'm encountering a massive performance hit, which hopefully is in the way I'm using it, rather than intrinsic to the speed at which utl_raw.bit_and itself performs..
    With numbers, I do a query like this:
    select count(1) from offer_a where bitand(bit_column, 51432) = 51432
    With utl_raw.bit_and (and 16-byte RAW column), I am passing in a hex value (may not be the best thing? Would passing in binary be better?) and doing my query like this:
    select count(1) from offer_a where utl_raw.bit_and(bit_column,hextoraw('00000000000000000020008000002ca1')) = hextoraw('00000000000000000020008000002ca1');
    One million rows using bitand takes about a second, using utl_raw.bit_and like this takes 25 seconds or so!!! Hopefully it's something in the way I'm calling it, and there's a faster way?
    Thank you!
    Jake

    Hmm.. Actually it may not be that simple.
    I had created an index on the number column that oracle seems to consistently use if I do a query like:
    select count(1) from scott.offer_b where
    current_price >= 0 and
    bitand(current_price, 9008574719100165) = 9008574719100165
    (Without the >= Oracle seems to do a full table scan. The reason I created the index is that there are other columns in the data table, making larger blocks. I figured that an index, even though most of it needs to be scanned, would be faster because I can cram many more rows into each block...?)
    But if I drop the index on the number column, it takes about 12 secs/1m rows, in other words about half the time of the utl_raw.bit_and().
    So it's possible that bitand on 64 bits takes about half the time of utl_raw.bit_and on 128 bits, which is very reasonable..
    So maybe the real problem I have is why oracle is not using the equivalent index that I placed on the RAW column when I do:
    select count(1) from scott.offer_b where
    hex_bit_sig >= '00000000000000000020008000002ca1' and
    utl_raw.bit_and(hex_bit_sig,'00000000000000000020008000002ca1') = '00000000000000000020008000002ca1';
    (By the way I realized that I don't need to use the hextoraw - seems like just putting a hex number in '' is the right way to specify a raw anyway?)
    So yes, I think that's the real problem - how can I get oracle to use the index I created on the RAW value

  • How to Improve Performance of this query??

    Hi experts,
    Kindly suggest me some perfomance optimization on the below code.
    SELECT * FROM vtrdi AS v
                   INTO TABLE six
                   FOR ALL ENTRIES IN r_vbeln
                   WHERE vbeln EQ  r_vbeln-low
                   AND   trsta IN  s_trsta
                   AND   vstel IN  s_vstel
                   AND   tddat IN  s_tddat
                   AND   vbtyp IN  r_vbtyp
                   AND   lstel IN  s_lstel
                   AND   route IN  s_route
                   AND   tragr IN  s_tragr
                   AND   vsbed IN  s_vsbed
                   AND   land1 IN  s_land1
                   AND   lzone IN  s_lzone
                   AND   wadat IN  s_wadat
                   AND   wbstk IN  s_wbstk
                   AND   lddat IN  s_lddat
                   AND   lfdat IN  s_lfdat
                   AND   kodat IN  s_kodat
                   AND   kunnr IN  s_kunnr
                   AND   spdnr IN  s_spdnr
                   AND   inco1 IN  s_inco1
                   AND   inco2 IN  s_inco2
                   AND   lprio IN  s_lprio
                   AND EXISTS ( SELECT * FROM likp
                                  WHERE vbeln EQ v~vbeln
                                  AND lifnr IN s_lifnr    
                                  AND lgtor IN s_lgtor
                                  AND lgnum IN s_lgnum
                                  AND lfuhr IN s_lfuhr
                                  AND aulwe IN s_aulwe
                                  AND traty IN s_traty
                                  AND traid IN s_traid
                                  AND vsart IN s_vsart
                                  AND trmtyp IN s_trmtyp
                                  AND sdabw IN s_sdabw
                                  AND cont_dg IN r_cont_dg ).
    Thanks in Advance...
    Santosh.

    Try writing two SELECTs:
    SELECT * FROM vtrdi AS v
    INTO TABLE six
    FOR ALL ENTRIES IN r_vbeln
    WHERE vbeln EQ r_vbeln-low
    AND trsta IN s_trsta
    AND vstel IN s_vstel
    AND tddat IN s_tddat
    AND vbtyp IN r_vbtyp
    AND lstel IN s_lstel
    AND route IN s_route
    AND tragr IN s_tragr
    AND vsbed IN s_vsbed
    AND land1 IN s_land1
    AND lzone IN s_lzone
    AND wadat IN s_wadat
    AND wbstk IN s_wbstk
    AND lddat IN s_lddat
    AND lfdat IN s_lfdat
    AND kodat IN s_kodat
    AND kunnr IN s_kunnr
    AND spdnr IN s_spdnr
    AND inco1 IN s_inco1
    AND inco2 IN s_inco2
    AND lprio IN s_lprio.
    IF NOT six[] IS INITIAL.
      SELECT * FROM likp INTO TABLE itab
        FOR ALL ENTRIES IN six
        WHERE vbeln EQ six-vbeln
        AND lifnr IN s_lifnr
        AND lgtor IN s_lgtor
        AND lgnum IN s_lgnum
        AND lfuhr IN s_lfuhr
        AND aulwe IN s_aulwe
        AND traty IN s_traty
        AND traid IN s_traid
        AND vsart IN s_vsart
        AND trmtyp IN s_trmtyp
        AND sdabw IN s_sdabw
        AND cont_dg IN r_cont_dg.
    ENDIF.
    LOOP AT six.
      " check whether a matching LIKP entry exists; if not, remove the row from SIX
      READ TABLE itab WITH KEY vbeln = six-vbeln TRANSPORTING NO FIELDS.
      IF sy-subrc <> 0.
        DELETE six.
      ENDIF.
    ENDLOOP.
    Thanks
    Venkat

  • Performance issue with this query.

    Hi Experts,
    This query is fetching 500 records.
    SELECT
    RECIPIENT_ID ,FAX_STATUS
    FROM
    FAX_STAGE WHERE LOWER(FAX_STATUS) like 'moved to%'
    Execution Plan
    | Id  | Operation                   | Name                | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT            |                     |   159K|    10M|  2170   (1)|
    |   1 |  TABLE ACCESS BY INDEX ROWID| FAX_STAGE           |   159K|    10M|  2170   (1)|
    |   2 |   INDEX RANGE SCAN          | INDX_FAX_STATUS_RAM | 28786 |       |   123   (0)|
    Note
       - 'PLAN_TABLE' is old version
    Statistics
              1  recursive calls
              0  db block gets
             21  consistent gets
              0  physical reads
              0  redo size
            937  bytes sent via SQL*Net to client
            375  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             19  rows processed
    Total number of records in the table.
    SELECT COUNT(*) FROM FAX_STAGE--3679418
    Distinct records are low for this column.
    SELECT DISTINCT FAX_STATUS FROM FAX_STAGE;
    Completed
    BROKEN
    Broken - New
    moved to - America
    MOVED to - Australia
    Moved to Canada and australia
    There is a function-based index on FAX_STAGE(LOWER(FAX_STATUS)).
    Stats are up to date.
    Still the cost is high.
    How to improve the performance of this query.
    Please help me.
    Thanks in advance.

    With no heavy activity on your fax_stage table a bitmap index might do better - see CREATE INDEX.
    I would try a FTS (Full Table Scan) first, as 6 distinct values vs. 3,679,418 rows is low cardinality for sure, so using an index is not very helpful in this case (maybe too Exadata oriented).
    There are a lot of web pages where you can read that full table scans are not always evil and indexes are not always good, or vice versa: Ask Tom "How to avoid the full table scan".
    Regards
    Etbin

  • Help in Tuning this Query

    Hi,
    I have a query in my proj where the same table is looked up twice in the same query.
    Can anybody suggest how to improve the performance of this query?
    select * from table1 a1 where (a1.column1, a1.column2, a1.column3, a1.column4, a1.column5, a1.column6, a1.column7, a1.column8, a1.column9,
    a1.column10) in ( select a2.column1, a2.column2, a2.column3, a2.column4, a2.column5, a2.column6, a2.column7, a2.column8, a2.column9,
    a2.column10 from table1 a2 where column20 = '<condn>')
    The table1 used here is the same in the outer query as well as the subquery. This is an example of what we use here, and table1 contains 30 million rows. Though creating an index with 10 columns could be an option, we already have a unique index with 11 columns (which includes the 10 from this query). Will that be helpful in any way, or can the existing index be forced?
    Thanks a lot for your time

    Depending on the selectivity of column20 I am not sure Index is the best way to go. It might perform better with two full scans and a hash-join.
    Anyway, I do prefer the syntax:
    select /*+ leading(a2) */ a1.*
    from table1 a1, table1 a2
    where a2.column20 = '<condn>'
    and a1.column1 = a2.column1
    and a1.column2 = a2.column2
    and  a1.column3 = a2.column3
    and  a1.column4 = a2.column4
    and  a1.column5 = a2.column5
    and  a1.column6 = a2.column6
    and  a1.column7 = a2.column7
    and  a1.column8 = a2.column8
    and  a1.column9 = a2.column9
    and  a1.column10 = a2.column10;
    I've added a LEADING hint to tell Oracle that the start table is a2. It might be useless.
    Ensure your stats are up to date. You might need histograms here if your column20 is skewed.
    Hope this helps,
    François
    Edited by: Francois Berger on Oct 24, 2008 1:47 AM

  • How does this query work?

      UPDATE medfileinfo mf
          SET total_tap_count = total_tap_count-
                                   (SELECT   COUNT (1)
                                        FROM rating_temp rt
                                       WHERE ( ( ( (calledcallzone IS NULL) OR (callingcallzone IS NULL))  AND calltype = 0)   OR (charges_inr IS NULL))
                                    AND rt.fileid=mf.fileid)
        WHERE EXISTS (SELECT   COUNT (1)
                                        FROM rating_temp rt
                                        WHERE ( ( ( (calledcallzone IS NULL) OR (callingcallzone IS NULL))  AND calltype = 0)   OR (charges_inr IS NULL))
                                       AND rt.fileid=mf.fileid)
          AND chargedamount=0;

    After the question of HOW this works has been answered: I guess you have an issue with this? Performance?
    This query will tell you what it will do:
    SELECT mf.fileid,mf.total_tap_count,rt.cnt,
    mf.total_tap_count-rt.cnt new_total_tap_count
    FROM
         medfileinfo mf 
        INNER JOIN (SELECT fileid,count(*) cnt FROM rating_temp
                            WHERE  ( ( ( (calledcallzone IS NULL) OR (callingcallzone IS NULL))  AND calltype = 0)   OR (charges_inr IS NULL))
                                GROUP BY fileid ) rt
         ON rt.fileid=mf.fileid
    WHERE chargedamount=0;
    And if you want to get rid of the subselects, use MERGE like this:
    MERGE INTO medfileinfo target
    USING
    (SELECT mf.fileid,mf.total_tap_count,rt.cnt,
    mf.total_tap_count-rt.cnt new_total_tap_count
    FROM
         medfileinfo mf 
        INNER JOIN (SELECT fileid,count(*) cnt FROM rating_temp
                            WHERE  ( ( ( (calledcallzone IS NULL) OR (callingcallzone IS NULL))  AND calltype = 0)   OR (charges_inr IS NULL))
                                GROUP BY fileid ) rt
         ON rt.fileid=mf.fileid) source
    ON (target.fileid = source.fileid
       and target.chargedamount=0)
      WHEN MATCHED THEN UPDATE
         SET total_tap_count = source.new_total_tap_count;

  • Performance of sap query

    Hi All
    I have one SAP query which collects data from the VBRK, VBUK, VBPA, VBRP, VBFA and KONV tables.
    Sometimes it takes a lot of time.
    How can I improve the performance of this query?

    hi,
    You can improve it by using JOINs in the query. Please take care of these points:
    1. Primary index and secondary index.
    2. Don't join all the tables; split them into different queries using FOR ALL ENTRIES (see the sketch below).
    3. Give as many WHERE conditions as possible.
    4. Sort the result at the end.
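    A rough, hypothetical sketch of point 2 for two of the tables mentioned in the question (the select-option s_vbeln and the work tables are assumed names, not from the original post):
    DATA: lt_vbrk TYPE STANDARD TABLE OF vbrk,
          lt_vbrp TYPE STANDARD TABLE OF vbrp.

    SELECT * FROM vbrk INTO TABLE lt_vbrk
      WHERE vbeln IN s_vbeln.            " restrict as much as possible here

    IF NOT lt_vbrk[] IS INITIAL.         " never run FOR ALL ENTRIES on an empty table
      SELECT * FROM vbrp INTO TABLE lt_vbrp
        FOR ALL ENTRIES IN lt_vbrk
        WHERE vbeln = lt_vbrk-vbeln.
    ENDIF.
    SORT lt_vbrp BY vbeln posnr.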
    Hope this helps.
    Pradeep

  • Imporving the performance of a query

    Hi,
    I have the following query in Oracle:
    SELECT distinct VECTOR_ID FROM SUMMARY_VECTOR where CASE_NAME like 'BASECASE_112_ECLIPSE100'
    "SUMMARY_VECTOR" contains approximately 120 million records or tuples. So the total time for this query is about 62 seconds
    I want to improve the performance of this query. How can I achieve this?
    Any hint?
    Thanks

    PLAN_TABLE_OUTPUT
    Plan hash value: 3042243244
    | Id | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT   |                |     1 |    29 |   182K  (3)| 00:36:28 |
    |  1 |  SORT AGGREGATE    |                |     1 |    29 |            |          |
    |* 2 |   TABLE ACCESS FULL| SUMMARY_VECTOR | 4323K |  119M |   182K  (3)| 00:36:28 |
    Predicate Information (identified by operation id):
       2 - filter("CASE_NAME"='BASECASE_112_ECLIPSE100')
    14 rows selected.

  • Performance issues while query data from a table having large records

    Hi all,
    I have performance issues with the queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is as below.
    SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
    SELECT SUM (B.BASE_TRANSACTION_VALUE)
    FROM
    MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A  
    WHERE A.ORGANIZATION_ID =    B.ORGANIZATION_ID 
    AND A.ORGANIZATION_ID =  :b1 
    AND B.REFERENCE_ACCOUNT =    A.MATERIAL_ACCOUNT 
    AND B.TRANSACTION_DATE <=  LAST_DAY (TO_DATE (:b2 ,   'MON-YY' )  )  
    AND B.ACCOUNTING_LINE_TYPE !=  15  
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.02       0.05          0          0          0           0
    Fetch        3    134.74     722.82     847951    1003824          0           2
    total        7    134.76     722.87     847951    1003824          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 193  (APPS)
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
        788242     788242     788242   NESTED LOOPS  (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
             1          1          1    TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
             1          1          1     INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
        788242     788242     788242    TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
       8704356    8704356    8704356     INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
    788242    NESTED LOOPS
          1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_PARAMETERS' (TABLE)
          1      INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                     'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
    788242     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_TRANSACTION_ACCOUNTS' (TABLE)
    8704356      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                     'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      row cache lock                                 29        0.00          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                    847951        0.40        581.90
      latch: object queue header operation            3        0.00          0.00
      latch: gc element                              14        0.00          0.00
      gc cr grant 2-way                               3        0.00          0.00
      latch: gcs resource hash                        1        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      gc current block 3-way                          1        0.00          0.00
    ********************************************************************************
    On a 5 node RAC environment the program completes in 15 hours, whereas on a single node environment the program completes in 2 hours.
    Is there any way I can improve the performance of this query?
    Regards
    Edited by: mhosur on Dec 10, 2012 2:41 AM
    Edited by: mhosur on Dec 10, 2012 2:59 AM
    Edited by: mhosur on Dec 11, 2012 10:32 PM

    CREATE INDEX mtl_transaction_accounts_n0
      ON mtl_transaction_accounts (
                                   transaction_date
                                 , organization_id
                                 , reference_account
                                 , accounting_line_type
                                  )
    /

  • How to make this query go faster

    Hi ,
    I have the following query :
    select a.* from
    tbl1 a , tbl2 b
    where a.id = b.id
    and substr(b.id , 3, 2) <> 'XX'
    and b.date1 is not null
    and b.date1 >= sysdate - 90 ;
    tbl1 - 21 million rows
    tbl2 - 2 millions
    I specify these hints: /*+ INDEX(idxa_1, idxa_2) INDEX(idxb_1) */
    where idxa_1 --> b.lotid
    idxa_2 --> b.date1
    idxb_1 --> a.lotid
    IF i DO NOT include b.date1 is not null and b.date1 >= sysdate - 90 ,
    from explain plan i could see it using a FAST FULL SCAN which really returns very fast
    HOWEVER if i include b.date1 is not null and b.date1 >= sysdate - 90
    , from explain plain it uses row index and then there's a range scan and its slow
    how can i improve the performance of this query ?
    pls advise
    tks & rdgs

    Don't create the temporary table.
    Create a function based index on table two
    substr(id , 3, 2)
    Make sure you have gathered statistics on the tables.
    Remove the hint.
    select a.* from
    tbl1 a , tbl2 b
    where a.id = b.id
    and substr(b.id , 3, 2) <> 'XX'
    and b.date1 >= sysdate - 90;
    If you still have performance problems, post the formatted explain plan from sqlplus.
    Read the Performance Tuning Guide.
    Unfortunately this is not urs advice.

  • Is there anything that can be done to tune this query?

    DB version:10gR2
    From the AWR report, we have determined that the following SQL is taking the most CPU time. Is there anything that we can do to improve the performance of this query?
    We have an index on (stat_code,create_date_time) columns of ext_replenish table.
    SELECT EXT_REPLENISH.EXT_REPLENISH_ID, EXT_REPLENISH.EVENT_ID, EXT_REPLENISH.EVENT_KEY, EXT_REPLENISH.WHSE,
           EXT_REPLENISH.VALIDATE_KEY, EXT_REPLENISH.NBR_OF_RETRY, EXT_REPLENISH.STAT_CODE, EXT_REPLENISH.ERROR_SEQ_NBR,
           EXT_REPLENISH.CREATE_DATE_TIME, EXT_REPLENISH.MOD_DATE_TIME, EXT_REPLENISH.USER_ID, EXT_REPLENISH.CL_MESSAGE_ID,
           EXT_REPLENISH.SCHEMA_ID, EXT_REPLENISH.ELS_ACTVTY_CODE, EXT_REPLENISH.CD_MASTER_ID
    FROM EXT_REPLENISH
    WHERE ( ( ( ( ( EXT_REPLENISH.STAT_CODE = :1 ) OR ( EXT_REPLENISH.STAT_CODE = :2 ) )
              OR ( ( ( EXT_REPLENISH.STAT_CODE = :3 ) AND ( EXT_REPLENISH.ERROR_SEQ_NBR >= :4 ) ) AND ( EXT_REPLENISH.ERROR_SEQ_NBR < :5 ) ) )
            AND ( EXT_REPLENISH.MOD_DATE_TIME <= :6 ) )
          AND ( EXT_REPLENISH.NBR_OF_RETRY < :7 ) )
      AND ROWNUM <= 1
    ORDER BY EXT_REPLENISH.STAT_CODE ASC, EXT_REPLENISH.CREATE_DATE_TIME ASC
    FOR UPDATE
    Is there any way I could tune this query?
    Note: ignore the unnecessary brackets; they are system created (apparently by Hibernate).
    Message was edited by:
    Nichols
    Taking off the pre tags due to readability issue(all words appear in single line )
    Message was edited by:
    Nichols

    From just blindly looking at this particular query, there doesn't seem any obvious reason why an index couldn't be extended to cover all the columns specified in the where clause.
    It might not help too much -> explain plan is required first really before blindly guessing.
    Obviously, there's no point having an order by and a rownum (unless you wanted to do the order by before the rownum in an outer select) - I assume this is just a Hibernatism.
