Speed up "select" Query

Hello friends,
Please help me reduce the execution time of the query below.
I calculate some quantities in my function, and the query below is executed for each and every row. How can I minimize its execution time? I have already checked the explain plan and created all possible indexes, but it still takes too much time.
Please give me some tips or suggestions to speed up the query.
SELECT SUM(BALANCE_INST_TO_RECD) BALANCE_INST_TO_RECD,
SUM(Qtyissued) Qtyissued,
SUM(qtyconsumed) qtyconsumed,
SUM(nvl(Qtyissued,QTYRECD)-nvl(qtyconsumed,0)) balance_to_consume
into l_BALANCE_INST_TO_RECD,l_ISSUE_QTY,l_ISSUE_CONSUME_QTY,l_BALANCE_TO_CONSUME_QTY
FROM
(select sum(Qtyissued) Qtyissued,
sum(QtyRECD) QtyRECD,
sum(BALANCE_INST_TO_RECD) BALANCE_INST_TO_RECD,
(select NVL(sum(v.qtyissued), 0)+NVL(sum(v.qty1), 0)+NVL(sum(v.qty2), 0)
from view_itemtran v
where v.entity_code = r.entity_code
and v.tcode = '0'
and v.tnature = 'WJREC'
and v.ref1_vrno = r.vrno
and v.ref1_tcode = r.tcode
and v.item_code = r.item_code
and trunc(v.slno)<>v.slno
AND not exists (select 1 from item_mast im
where im.item_code=v.item_code
and im.item_nature in ('SC','WA'))) qtyconsumed
from
(select ib.entity_code,
ib.div_code,
ib.tcode,
ih.vrno,
ih.vrdate,
ib.item_code,
ib.other_item_code service_code,
IB.Qtyissued,
IB.QtyRECD,
to_number(lhs_prod.get_reallocate_instruction_no(ih.entity_code,ib.div_code,null,a_recd_item_code,ih.vrno,ih.acc_code,a_other_item_code,'B')) BALANCE_INST_TO_RECD
from itemtran_head ih, itemtran_body ib
where ih.entity_code = ib.entity_code
and ih.tcode = ib.tcode
and ih.vrno = ib.vrno
and ih.entity_code =a_entity_code
and ih.tcode = '0'
and ih.vrno=NVL(a_issue_vrno,ih.vrno)
and ib.div_code=a_div_code
and ih.acc_code =a_acc_code
and ib.item_code=nvl(l_inst_input_item_code,ib.item_code)
and ((l_item_nature not in ('SC','WA') and ib.item_code=l_inst_input_item_code)
OR (l_item_nature in ('SC','WA') AND IB.ITEM_CODE IN (SELECT item_code
FROM VIEW_SCRAP_LOSS_ITEM
WHERE SCRAP_ITEM_CODE=a_recd_item_code
and item_code in (select item_code from view_itemtran
where entity_code=a_entity_code
and div_code=a_div_code
and tcode='0'
and tnature='WJISS'
and vrno=NVL(a_issue_vrno,VRNO)))))
--and nvl(IB.Qtyissued,0)>0
) r
group by entity_code,div_code,vrno,tcode,vrdate,item_code);
DB Version: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Thanks

A few basic suggestions:
1) I'd take out the OR and replace it with 2 queries unioned together.
2) You say it's being called for every row. This implies another query is driving it. Can't you incorporate that driving query into this one so that most of the work is done in the database? If your SQL is being called in a post-query trigger on a form, perhaps make a view incorporating this query and your base table.
3) You've got 2 INs here:
                                     AND IB.ITEM_CODE IN
                                         (SELECT item_code
                                            FROM VIEW_SCRAP_LOSS_ITEM
                                           WHERE     SCRAP_ITEM_CODE =
                                                        a_recd_item_code
                                                 AND item_code IN
                                                        (SELECT item_code
                                                           FROM view_itemtran
                                                          WHERE     entity_code =
                                                                       a_entity_code
                                                                AND div_code =
                                                                       a_div_code
                                                                AND tcode = '0'
                                                                AND tnature =
                                                                       'WJISS'
                                                                AND vrno =
                                                                       NVL (
                                                                          a_issue_vrno,
                                                                          VRNO)))))
Instead I'd make it EXISTS and then take those 2 queries and turn them into 1 query with view_scrap_loss_item and view_itemtran joined together, along the lines of the sketch below.
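A rough sketch of what point 3 could look like, reusing the column names from the posted query (an untested starting point; the sli/vi aliases are arbitrary, so verify the join columns against your views):
AND (   (l_item_nature NOT IN ('SC','WA') AND ib.item_code = l_inst_input_item_code)
     OR (l_item_nature IN ('SC','WA')
         AND EXISTS (SELECT 1
                       FROM view_scrap_loss_item sli,
                            view_itemtran        vi
                      WHERE sli.scrap_item_code = a_recd_item_code
                        AND sli.item_code       = ib.item_code     -- replaces IB.ITEM_CODE IN (...)
                        AND vi.item_code        = sli.item_code    -- replaces the nested item_code IN (...)
                        AND vi.entity_code      = a_entity_code
                        AND vi.div_code         = a_div_code
                        AND vi.tcode            = '0'
                        AND vi.tnature          = 'WJISS'
                        AND vi.vrno             = NVL(a_issue_vrno, vi.vrno))))
For point 1, the same predicate can instead be split into two SELECTs, one per branch of the OR, combined with UNION ALL, so the optimizer can pick a separate plan for each branch.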

Similar Messages

  • How to optimize the select query that is executed in a cursor for loop?

    Hi Friends,
    I have executed the code below and clocked the times for every line of the code using DBMS_PROFILER.
    CREATE OR REPLACE PROCEDURE TEST
    AS
       p_file_id              NUMBER                                   := 151;
       v_shipper_ind          ah_item.shipper_ind%TYPE;
       v_sales_reserve_ind    ah_item.special_sales_reserve_ind%TYPE;
       v_location_indicator   ah_item.exe_location_ind%TYPE;
       CURSOR activity_c
       IS
          SELECT *
            FROM ah_activity_internal
           WHERE status_id = 30
             AND file_id = p_file_id;
    BEGIN
       DBMS_PROFILER.start_profiler ('TEST');
       FOR rec IN activity_c
       LOOP
          SELECT DISTINCT shipper_ind, special_sales_reserve_ind, exe_location_ind
                     INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
                     FROM ah_item --464000 rows in this table
                    WHERE item_id_edw IN (
                             SELECT item_id_edw
                               FROM ah_item_xref --700000 rows in this table
                              WHERE item_code_cust = rec.item_code_cust
                                AND facility_num IN (
                                       SELECT facility_code
                                         FROM ah_chain_div_facility --17 rows in this table
                                        WHERE chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
                                          AND div_id = (SELECT div_id
                                                          FROM ah_div --8 rows in this table
                                                         WHERE division = rec.division)));
       END LOOP;
       DBMS_PROFILER.stop_profiler;
    EXCEPTION
       WHEN NO_DATA_FOUND
       THEN
          NULL;
       WHEN TOO_MANY_ROWS
       THEN
          NULL;
    END TEST;
    The SELECT query inside the cursor FOR LOOP took 773 seconds.
    I have tried using BULK COLLECT instead of cursor for loop but it did not help.
    When I took the select query out separately and executed it with a sample value, it gave the results in a fraction of a second.
    All the tables have primary key indexes.
    Any ideas what can be done to make this code perform better?
    Thanks,
    Raj.

    As suggested I'd try merging the queries into a single SQL. You could also rewrite your IN clauses as JOINs and see if that helps, e.g.
    SELECT DISTINCT ai.shipper_ind, ai.special_sales_reserve_ind, ai.exe_location_ind
               INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
               FROM ah_item ai, ah_item_xref aix, ah_chain_div_facility acdf, ah_div ad
              WHERE ai.item_id_edw = aix.item_id_edw
                AND aix.item_code_cust = rec.item_code_cust
                AND aix.facility_num = acdf.facility_code
                AND acdf.chain_id = ah_internal_data_pkg.get_chain_id (p_file_id)
                AND acdf.div_id = ad.div_id
                AND ad.division = rec.division;
    ALSO: You are calling ah_internal_data_pkg.get_chain_id (p_file_id) every time. Why not do it outside the loop and just use a variable in the inner query? That will prevent context switching and improve speed.
    Edited by: Dave Hemming on Dec 3, 2008 9:34 AM
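    To make that last point concrete, here is a minimal sketch of the procedure with the function call hoisted out of the loop and the IN clauses rewritten as joins, as suggested above (v_chain_id is a made-up variable name; everything else reuses the names from the original post):
    CREATE OR REPLACE PROCEDURE test
    AS
       p_file_id              NUMBER := 151;
       v_chain_id             NUMBER;                -- holds the value fetched once, before the loop
       v_shipper_ind          ah_item.shipper_ind%TYPE;
       v_sales_reserve_ind    ah_item.special_sales_reserve_ind%TYPE;
       v_location_indicator   ah_item.exe_location_ind%TYPE;
       CURSOR activity_c
       IS
          SELECT *
            FROM ah_activity_internal
           WHERE status_id = 30
             AND file_id = p_file_id;
    BEGIN
       -- one call instead of one call per fetched row
       v_chain_id := ah_internal_data_pkg.get_chain_id (p_file_id);
       FOR rec IN activity_c
       LOOP
          SELECT DISTINCT ai.shipper_ind, ai.special_sales_reserve_ind, ai.exe_location_ind
                     INTO v_shipper_ind, v_sales_reserve_ind, v_location_indicator
                     FROM ah_item ai, ah_item_xref aix, ah_chain_div_facility acdf, ah_div ad
                    WHERE ai.item_id_edw = aix.item_id_edw
                      AND aix.item_code_cust = rec.item_code_cust
                      AND aix.facility_num = acdf.facility_code
                      AND acdf.chain_id = v_chain_id           -- plain variable, no context switch
                      AND acdf.div_id = ad.div_id
                      AND ad.division = rec.division;
       END LOOP;
    EXCEPTION
       WHEN NO_DATA_FOUND OR TOO_MANY_ROWS
       THEN
          NULL;
    END test;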

  • Select query with secondary index

    hi,
    I have a report which is giving performance issues on a particular select query on the KONH table.
    The select query doesn't use the primary key fields, and the table already has around 19 million entries, so a secondary index was created for the fields used in the query.
    Now, KONH is a client-specific table, and hence has MANDT as the first field. When the table is not indexed it is sorted according to the order of fields, like first MANDT, then the primary key fields and then the remaining fields (correct me if I am wrong).
    But the secondary index created doesn't have MANDT in it (yea, a mistake!)...
    But instead of correcting the secondary index, I am told to change the select query.
    So I used the "client specified" syntax to sort out the issue, but I don't understand where I should put the "where mandt eq sy-mandt" clause.
    Should I put it right after all my secondary index fields are over? And what happens to the order of fields which are not present in the list of secondary index fields?
    Kindly help.
    Thanks.

    Hi chinmay kulkarni,
    it's better if you can ask the concerned person to add the MANDT field to your index as well....
    Indexes and MANDT
    If a table begins with the mandt field, so should its indexes. If a table begins with mandt and an index doesn't, the optimizer might not use the index.
    Remember, if you will, Open SQL's automatic client handling feature. When select * from ztxlfa1 where land1 = 'US' is executed, the actual SQL sent to the database is select * from ztxlfa1 where mandt = sy-mandt and land1 = 'US'. Sy-mandt contains the current logon client. When you select rows from a table using Open SQL, the system automatically adds sy-mandt to the where clause, which causes only those rows pertaining to the current logon client to be found.
    When you create an index on a table containing mandt, therefore, you should also include mandt in the index. It should come first in the index, because it will always appear first in the generated SQL.
    Index: Technical key of a database table.
    Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
    Secondary index: Additional indexes could be created considering the most frequently accessed dimensions of the table.
    Structure of an Index
    An index can be used to speed up the selection of data records from a table.
    An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
    When creating indexes, please note that:
    An index can only be used up to the last specified field in the selection! The fields which are specified in the WHERE clause for a large number of selections should be in the first position.
    Only those fields whose values significantly restrict the amount of data are meaningful in an index.
    When you change a data record of a table, you must adjust the index sorting. Tables whose contents are frequently changed therefore should not have too many indexes.
    Make sure that the indexes on a table are as disjunctive as possible.
    (That is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.)
    For Example...
    SELECT KUNNR KUNN2 INTO TABLE T_CUST_TERR
    FROM KNVP CLIENT SPECIFIED
    WHERE MANDT = SY-MANDT " here MANDT should be first
    AND KUNN2 IN S_TERR
    AND PARVW LIKE 'Z%'.
    Accessing tables using Indexes
    The database optimizer decides which index on the table should be used by the database to access data records.
    You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
    The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
    If the index fields have key function, i.e. they already uniquely identify each record of the table, an index can be called a unique index. This ensures that there are no duplicate index fields in the database.
    When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only result in a gain in performance for certain database systems. You can therefore specify a list of database systems when you define an index. The index is then only created on the specified database systems when activated
    Also pls have a look on below link
    http://www.sapfans.com/sapfans/forum/devel/messages/30240.html
    Hope it will solve your problem..
    Reward points if useful...
    Thanks & Regards
    ilesh 24x7

  • Select query running long time

    Hi,
    DB version : 10g
    platform : sunos
    My select SQL query has been running a long time (more than 20 hrs) and is still running.
    Is there any way to find the approximate SQL query completion time (time remaining)?
    Also, is there any possibility to increase the speed of an already running SQL query, for example by adding hints?
    Please help me on this .
    Thanks

    Hi Sathish thanks for your reply,
    I have already checked V$SESSION_LONGOPS, but it's showing TIME_REMAINING --> 0
    select TOTALWORK,SOFAR,START_TIME,TIME_REMAINING from V$SESSION_LONGOPS where SID='10'
    TOTALWORK      SOFAR START_TIME      TIME_REMAINING
         1099759    1099759 27-JAN-11                    0
    Any idea?
    Thanks.
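    For reference, TIME_REMAINING shows 0 here because SOFAR has already caught up with TOTALWORK for that row. A small sketch for watching only the operations that still have work pending (the SID value is just the example from the post above):
    SELECT sid, opname, target, sofar, totalwork,
           ROUND(sofar / totalwork * 100, 2) AS pct_done,
           elapsed_seconds, time_remaining
      FROM v$session_longops
     WHERE sid = 10            -- example SID from the post above
       AND totalwork > sofar;  -- only rows where work is still pending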

  • SELECT query takes too much time! Y?

    Plz find my SELECT query below:
    select w~mandt
    w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
    w~kwmeng w~vrkme w~matwa w~charg w~pstyv
    w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
    w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
    w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
    w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
    w~bedae w~cuobj w~mtvfp
    x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
    x~edatu
    x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
    x~ezeit
    into table t_vbap
    from vbap as w
    inner join vbep as x on x~vbeln = w~vbeln and
    x~posnr = w~posnr and
    x~mandt = w~mandt
    where
    ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
    ( ( ( erdat > pre_dat and erdat < p_syndt ) or
    ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
    w~matnr in s_matnr and
    w~pstyv in s_itmcat and
    w~lfrel in s_lfrel and
    w~abgru = ' ' and
    w~kwmeng > 0 and
    w~mtvfp in w_mtvfp and
    x~ettyp in w_ettyp and
    x~bdart in s_req_tp and
    x~plart in s_pln_tp and
    x~etart in s_etart and
    x~abart in s_abart and
    ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: It takes too much time while executing this statement.
    Could anybody change this statement and help me out to reduce the DB Access time?
    Thx

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    Select Over more than one internal table
    Selection Criteria
    1.     Restrict the data using the selection criteria itself, rather than filtering it out in the ABAP code with a CHECK statement.
    2.     Select with selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE SBOOK_WA-CARRID = 'LH' AND
                  SBOOK_WA-CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE SBOOK_WA-CARRID = 'LH' AND
                  SBOOK_WA-CONNID = '0400'.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    4.     For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    If all primary key fields are supplied in the Where conditions you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves the performance considerably.
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements  Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The  above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements  For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points to be must considered FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
              Loop at int_cntry.
      Select single * from zfligh into int_fligh
      where cntry = int_cntry-cntry.
      Append int_fligh.
                          Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements Select Over more than one Internal table
    1.     It's better to use views instead of nested Select statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from view DD01V_WA
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    1.     Table operations should be done using explicit work areas rather than via header lines.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    “SORT ITAB BY K.” makes the program run faster as compared to “SORT ITAB.”
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Select query on table rcv_lots_interface is always returning null

    Hi ,
    I need a help on the below issue.
    The issue is after creating PO in Oracle 11i I receive it in MSCA application.
    When we receive it at that point data gets inserted in the table " rcv_transactions_interface " and we have written a trigger on it.
    From the trigger on table " rcv_transactions_interface " we are calling a package and in the package we have select query on "rcv_lots_interface."
    But the select query is always returning null even though we are passing the correct "interface_transaction_id", and after the "Receiving Transaction Processor" has executed I can see data in the table "RCV_LOT_TRANSACTIONS" for the same transaction.
    Below is the sample code i am using.
    CREATE OR REPLACE TRIGGER inv.RCV_TRAN_TRIGGER
    AFTER UPDATE
    ON po.rcv_transactions_interface
    FOR EACH ROW
    WHEN (NEW.processing_status_code='PENDING'
    AND NEW.destination_type_code IN ('INVENTORY','RECEIVING')
    AND NEW.mobile_txn = 'Y')
    DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
    v_user_id NUMBER;
    v_interface_transaction_id NUMBER;
    v_organization_id NUMBER;
    v_item_id NUMBER;
    v_quantity NUMBER;
    v_resp_id NUMBER;
    v_mobile_txn VARCHAR2(5);
    v_transaction_type po.rcv_transactions_interface.transaction_type%TYPE;
    v_shipment_header_id NUMBER;
    v_bill_of_lading VARCHAR2(100);
    v_group_id NUMBER;
    v_header_interface_id NUMBER;
    BEGIN
    v_interface_transaction_id := :NEW.interface_transaction_id;
    v_organization_id := :NEW.to_organization_id;
    v_user_id := :NEW.created_by;
    v_item_id := :NEW.item_id;
    v_quantity := :NEW.quantity;
    v_resp_id :=fnd_profile.VALUE('RESP_ID');
    v_transaction_type := :NEW.transaction_type;
    v_mobile_txn := :NEW.mobile_txn;
    v_bill_of_lading := :NEW.bill_of_lading;
    v_group_id := :NEW.group_id;
    v_header_interface_id := :NEW.HEADER_INTERFACE_ID;
    INV.INV_RCV_TRX_PKG.INV_RCV_LABEL_GEN_PRC( v_user_id,
    v_interface_transaction_id,
    v_item_id,
    v_quantity,
    v_organization_id,
    v_resp_id,
    v_shipment_header_id,
    v_bill_of_lading,
    v_header_interface_id,
    v_group_id);
    END;
    CREATE OR REPLACE PACKAGE BODY INV.INV_RCV_TRX_PKG
    AS
    PROCEDURE INV_RCV_LABEL_GEN_PRC( p_user_id IN NUMBER,
    p_interface_transaction_id IN NUMBER,
    p_item_id IN NUMBER,
    p_quantity IN NUMBER,
    p_organization_id IN NUMBER,
    p_resp_id IN NUMBER,
    p_shipment_header_id IN NUMBER,
    p_bill_of_lading IN VARCHAR2,
    p_header_interface_id IN NUMBER ,
    p_group_id IN NUMBER )
    IS
    v_user_id NUMBER;
    v_print BOOLEAN;
    v_resp_id NUMBER;
    v_resp_appl_id NUMBER;
    v_lot_num VARCHAR2(50);
    v_expiration_date DATE;
    BEGIN
    BEGIN
    SELECT application_id
    INTO v_resp_appl_id
    FROM apps.fnd_responsibility_tl frt
    WHERE responsibility_id=p_resp_id;
    END;
    apps.fnd_global.apps_initialize(p_user_id, p_resp_id, v_resp_appl_id);
    BEGIN
    SELECT rli.lot_num , rli.expiration_date
    INTO v_lot_num ,
    v_expiration_date
    FROM apps.rcv_lots_interface rli
    WHERE rli.interface_transaction_id = p_interface_transaction_id ;
    EXCEPTION
    WHEN OTHERS THEN
    v_lot_num :=NULL;
    v_expiration_date :=NULL;
    apps.fnd_file.put_line(fnd_file.log,'Exception while deriving LOT Number ######### '||p_interface_transaction_id||'------'||SQLERRM);
    END;
    END;
    Need your help to understand why the below query is always returning null and what is the solution for it ?
    SELECT rli.lot_num , rli.expiration_date
    FROM apps.rcv_lots_interface rli
    WHERE rli.interface_transaction_id = p_interface_transaction_id ;
    Thanks
    Jaydeep
    Edited by: user10454886 on Mar 25, 2013 6:31 AM
    Hi ,
    I need a solution to this issue at the earliest.
    Appreciate all of your help
    Thanks
    Jaydeep

    Centinul wrote:
    There are a lot of bugs listed in Metalink with respect to wrong results and function-based indexes.
    Here are a few:
    Bug 4028186 Wrong results if function based index exists
    Bug 4717546 Wrong results / poor plan when function based index exists
    Bug 5092688 Wrong results if function based index exists
    Based on reviewing them, the workarounds range from dropping the index to setting "_disable_function_based_index" to TRUE.
    Fascinating. It seems to me that if you use the undocumented initialization parameter you might just as well drop the FBIs too.
    Another hazard of FBIs is the Law of Unintended Consequences. Two years ago we tried to use one to speed up a query in a PL/SQL package. It worked OK for that purpose, but an unrelated loader on the affected table ran and ran but never finished until the FBI was dropped.

  • Issue with select query for secondary index

    Hi all,
    I have created a secondary index A on mara table with fields Mandt and Packaging Material Type VHART.
    Now i am trying to write a report
    Tables : mara.
    data : begin of itab occurs 0.
    include structure mara.
    data  : end of itab.
    select * from mara into table itab
    CLIENT SPECIFIED where
      MANDT = SY-MANDT and
      VHART = 'WER'.
    I'm getting an error:
    Unable to interpret "CLIENT". Possible causes of error: Incorrect spelling or comma error.
    If I change my select query to
    select * from mara into table itab
      where
      MANDT = SY-MANDT and
      VHART = 'WER'.
    I'm getting an error:
    Without the addition "CLIENT SPECIFIED", you cannot specify the client field "MANDT" in the WHERE condition.
    Let me know if I am wrong; we are on 4.6c.
    Thanks

    Like I already said, even if you have added the mandt field in the secondary index, there is no need to use it in the select statement.
    Let me elaborate on my reply before. If you have created a UNIQUE index, which I don't think you have, then you should include CLIENT in the index. A unique index for a client-dependent table must contain the client field.
    Additional info:
    The accessing speed does not depend on whether or not an index is defined as a unique index. A unique index is simply a means of defining that certain field combinations of data records in a table are unique.
    Even if you have defined a secondary index, this does not automatically mean, that this index is used. This also depends on the database optimizer. The optimizer will determine which index is best and use it. So before transporting this index, you should make sure that the index is used. How to check this, have a look at the link:
    [check if index is used|http://help.sap.com/saphelp_nw70/helpdata/EN/cf/21eb3a446011d189700000e8322d00/content.htm]
    Edited by: Micky Oestreich on May 13, 2008 10:09 PM

  • How an INDEX of a Table got selected when a SELECT query hits the Database

    Hi All,
    How does an index get selected when a SELECT query hits the database table?
    My SELECT query is as ahown below.
        SELECT ebeln ebelp matnr FROM ekpo
                       APPENDING TABLE i_ebeln
                       FOR ALL ENTRIES IN i_mara_01
                       WHERE werks = p_werks      AND
                             matnr = i_mara_01-matnr AND
                             bstyp EQ 'F'         AND
                             loekz IN (' ' , 'S') AND
                             elikz = ' '          AND
                             ebeln IN s_ebeln     AND
                             pstyp IN ('0' , '3') AND
                             knttp = ' '          AND
                             ko_prctr IN r_prctr  AND
                             retpo = ''.
    Should the fields in the index of the table EKPO be in the same sequence as in the WHERE clause?
    Regards,
    Viji

    Hi,
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE. If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    reference : help.sap.com
    thanx.

  • Regarding to perform in select query

    Could anyone tell me whether the select query in this piece of code would affect the performance of the program?
    DATA: BEGIN OF OUTREC,
          BANKS LIKE BNKA-BANKS,
          BANKL LIKE BNKA-BANKL,
          BANKA LIKE BNKA-BANKA,
          PROVZ LIKE BNKA-PROVZ,   "Region (State, Province, County)
          BRNCH LIKE BNKA-BRNCH,
          STRAS LIKE BNKA-STRAS,
          ORT01 LIKE BNKA-ORT01,
          SWIFT LIKE BNKA-SWIFT,
    END OF OUTREC.
    OPEN DATASET P_OUTPUT FOR OUTPUT IN TEXT MODE.
    IF SY-SUBRC NE 0. EXIT. ENDIF.
    SELECT * FROM BNKA
             WHERE BANKS EQ P_BANKS
             AND   LOEVM NE 'X'
             AND   XPGRO NE 'X'
             ORDER BY BANKS BANKL.
      PERFORM TRANSFER_DATA.
    ENDSELECT.
    CLOSE DATASET P_OUTPUT.
    *&      Transfer the data to the output file
    FORM TRANSFER_DATA.
      OUTREC-BANKS = BNKA-BANKS.
      OUTREC-BANKL = BNKA-BANKL.
      OUTREC-BANKA = BNKA-BANKA.
      OUTREC-PROVZ = BNKA-PROVZ.
      OUTREC-BRNCH = BNKA-BRNCH.
      OUTREC-STRAS = BNKA-STRAS.
      OUTREC-ORT01 = BNKA-ORT01.
      OUTREC-SWIFT = BNKA-SWIFT.
      TRANSFER OUTREC TO P_OUTPUT.
    ENDFORM.                               " READ_IN_DATA

    Hi
    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    Select Over more than one Internal table
    Selection Criteria
    1.     Restrict the data using the selection criteria itself, rather than filtering it out in the ABAP code with a CHECK statement.
    2.     Select with selection list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE SBOOK_WA-CARRID = 'LH' AND
                  SBOOK_WA-CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE SBOOK_WA-CARRID = 'LH' AND
                  SBOOK_WA-CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements           contd..  SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements       contd…           Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The  above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements    contd…For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points to be must considered FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements    contd…  Select Over more than one Internal table
    1.     It's better to use views instead of nested Select statements.
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from view DD01V_WA
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    Point # 7
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    “SORT ITAB BY K.” makes the program run faster as compared to “SORT ITAB.”
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Speeding up a query on a concatenated field.

    Hi,
    I have a view with a concatenated field that basically comes down to this:
    ALTER VIEW [dbo].[_FO_QP_Items_4_0] AS
    SELECT
    Col A ,Col B --etc
    ,CONVERT(nvarchar(36), sysguid) +'|'+crdnr AS [LevCode]
    FROM table A
    As you see I am concatenating 2 fields together to get a unique [LevCode] (concatenating Itemcode + Supplier code)
    If I query that view like this, it runs rather slowly on a 500,000+ row table:
    SELECT *
    FROM [dbo].[_FO_QP_Items_4_0]
    WHERE [LevCode] = 'DF007704-38EF-4389-8544-E70CC32404D5| 60102'
    ORDER BY Col A
    Unfortunately I am not able to add an extra column to the table and concatenate the fields there. 
    Is there a way to speed up the query (or view).
    Thanks for helping me out here.

    An indexed view has high overhead. It has to be really justified.
    BOL: "Indexed views work best when the underlying data is infrequently updated. The maintenance of an indexed view can be greater than the cost of maintaining a table index. If the underlying data is updated frequently, the cost of maintaining the indexed
    view data may outweigh the performance benefits of using the indexed view. If the underlying data is updated periodically in batches but treated primarily as read-only between updates, consider dropping any indexed views before updating, and rebuilding them
    afterward. Doing this may improve performance of the updates."
    Reference:
    https://technet.microsoft.com/en-us/library/ms187864%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014

  • Ways to speed up this query

    i was wondering if anyone has any ideas on how to speed this query up
    SELECT NAME, trunc (opr_date + opr_hour/24-numtodsinterval(1,'second'), 'DDD') AS opr_date, PRICE,
    Case OPR_HOUR When 0 THEN 24  ELSE OPR_HOUR END AS OPR_HOUR
    FROM ze_data.pjm_edf_lmp_hourly_int
    WHERE trunc (opr_date + opr_hour/24-numtodsinterval(1,'second'), 'DDD') between to_date ('2010/07/26', 'yyyy/mm/dd') and to_date ('2010/07/28', 'yyyy/mm/dd') and NAME in ('MISO')
    ORDER BY OPR_DATE desc, opr_hour desc
    The data is stored in the database with hours 0 to 23; this code takes hour 0 and switches it to hour 24 of the day before. I was wondering if there is a way to speed up the query, as it takes 8 seconds to run.
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0

    Since you're comparing OPR_DATE to a date range, is there any need to add the time to OPR_DATE?
    Would this work better? An index on OPR_DATE might help, but wouldn't an index on NAME be even better?
    I don't know, just a guess. You haven't posted anything we can use.
    SELECT NAME
          ,trunc (opr_date + opr_hour/24-numtodsinterval(1,'second'), 'DDD') AS opr_date
          ,PRICE
          ,Case OPR_HOUR When 0 THEN 24  ELSE OPR_HOUR END AS OPR_HOUR
    FROM   ze_data.pjm_edf_lmp_hourly_int
    WHERE  opr_date >= to_date ('2010/07/26', 'yyyy/mm/dd')
    and    opr_date <= to_date ('2010/07/28', 'yyyy/mm/dd')
    and    NAME in ('MISO')
    ORDER  BY OPR_DATE desc, opr_hour desc;
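    Another option, if the TRUNC/NUMTODSINTERVAL expression has to stay exactly as written, is a function-based index that matches that expression (only a sketch - the index name is made up and you would need to confirm the plan actually uses it):
    CREATE INDEX pjm_edf_lmp_fbi ON ze_data.pjm_edf_lmp_hourly_int
      (name, trunc(opr_date + opr_hour/24 - numtodsinterval(1,'second'), 'DDD'));
    -- refresh optimizer statistics so the new index gets costed
    EXEC DBMS_STATS.GATHER_TABLE_STATS('ZE_DATA', 'PJM_EDF_LMP_HOURLY_INT', cascade => TRUE);
    Because the WHERE clause uses the identical expression together with NAME, Oracle can then do an index range scan instead of a full table scan.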

  • How to speed up a query?

    I want to select the MM-generated documents from the BKPF table where AWKEY = MM invoice number + fiscal year and AWTYP = 'RMRP'. However, since the BKPF table is very large, the select takes a lot of time to execute.
    Hence I would like to know if there is a faster way to run this query.

    Hi,
    The following points will help to improve performance:
    1. Create a secondary index with the fields you are using in the SELECT query. Do a runtime analysis and an SQL trace to see the impact of this.
    2. Minimize the number of hits to the database table; get the desired records from the table in one go.
    3. Minimize the number of records fetched from the table. You can achieve this by making some fields on the screen mandatory and using them in the WHERE condition of the SELECT statement.
    Hope this helps you.

  • Re: speeding up SPATIAL QUERY

    Hi,
    Here's my SQL statement:
    Select ODPM2001_I From DFT.URBANAREAS U
    Where SDO_RELATE(U.GEOLOC, obTempPt, 'mask=CONTAINS querytype=WINDOW') = 'TRUE'; -- querytype=WINDOW
    I have put a spatial index on U.GEOLOC ....
    The SQL statement takes too long.
    I pose my challenge to you ..
    How can I speed up this query ?
    From the DBA side and/or the PL/SQL side ...
    thanks,
    Harish

    Hi,
    The first thing you should try is updating to the latest patch release. Often there are performance enhancements included.
    If you don't mind returning rectangles on whose boundary the point falls, use the anyinteract mask - it is the optimal mask to use if this situation allows it.
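    For example, the same statement with the relaxed mask would look like this (just a sketch; it will also return areas whose boundary merely touches the point):
    Select ODPM2001_I From DFT.URBANAREAS U
    Where SDO_RELATE(U.GEOLOC, obTempPt, 'mask=ANYINTERACT querytype=WINDOW') = 'TRUE';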
    Hope this helps.

  • How to speed up this query?

    I have created a demo table:
    create table demo1(d date);
    and inserted some data into the table:
    begin
    -- add 6000000 rows
    for i in 1..1000000 loop
    insert into demo1 values(trunc(sysdate-i));
    insert into demo1 values(trunc(sysdate-i));
    insert into demo1 values(trunc(sysdate-i));
    insert into demo1 values(trunc(sysdate-i));
    insert into demo1 values(trunc(sysdate-i));
    insert into demo1 values(trunc(sysdate-i));
    end loop;
    commit;
    end;
    The query
    select * from demo1
    where d=to_date('25.10.2004','DD.MM.YYYY')
    executed three times faster than
    select * from demo1 where d=trunc(sysdate-1);
    Why? How can I speed up this query if I do not want to use an index?
    I have created an index:
    create index demo1_indx on demo1(d);
    The execution times of the queries became identical (for this volume of data).

    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> create table demo1(d date);
    Table created.
    SQL> begin
    2 for i in 1..1000000 loop
    3 insert into demo1 values(trunc(sysdate-i));
    4 insert into demo1 values(trunc(sysdate-i));
    5 insert into demo1 values(trunc(sysdate-i));
    6 insert into demo1 values(trunc(sysdate-i));
    7 insert into demo1 values(trunc(sysdate-i));
    8 insert into demo1 values(trunc(sysdate-i));
    9 insert into demo1 values(trunc(sysdate-i));
    10 insert into demo1 values(trunc(sysdate-i));
    11 end loop;
    12 commit;
    13 end;
    14 /
    PL/SQL procedure successfully completed.
    SQL> alter session set timed_statistics=true;
    Session altered.
    SQL> alter session set sql_trace=true;
    Session altered.
    SQL> set timing on;
    SQL> set autotrace on;
    SQL> select * from demo1 where d='25.10.2004';
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:10.70
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
    Bytes=1431)
    Statistics
    29 recursive calls
    1 db block gets
    28988 consistent gets
    13030 physical reads
    1035300 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> select * from demo1 where d='25.10.2004';
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:03.35
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
    Bytes=1431)
    Statistics
    0 recursive calls
    0 db block gets
    14441 consistent gets
    12837 physical reads
    0 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> select * from demo1 where d='25.10.2004';
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:04.95
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
    Bytes=1431)
    Statistics
    0 recursive calls
    0 db block gets
    14441 consistent gets
    12757 physical reads
    0 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> select * from demo1 where d='25.10.2004';
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:03.82
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3285 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3285 Card=159
    Bytes=1431)
    Statistics
    0 recursive calls
    0 db block gets
    14441 consistent gets
    12752 physical reads
    0 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> select * from demo1 where d=trunc(sysdate-3);
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:17.53
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159
    Bytes=1431)
    Statistics
    6 recursive calls
    0 db block gets
    14503 consistent gets
    12758 physical reads
    0 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> select * from demo1 where d=trunc(sysdate-3);
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:15.82
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159
    Bytes=1431)
    Statistics
    0 recursive calls
    0 db block gets
    14441 consistent gets
    12753 physical reads
    0 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> select * from demo1 where d=trunc(sysdate-3);
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:14.56
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159
    Bytes=1431)
    Statistics
    0 recursive calls
    0 db block gets
    14441 consistent gets
    12758 physical reads
    0 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> select * from demo1 where d=trunc(sysdate-3);
    D
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    25.10.04
    8 rows selected.
    Elapsed: 00:00:11.84
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3696 Card=159 Byte
    s=1431)
    1 0 TABLE ACCESS (FULL) OF 'DEMO1' (TABLE) (Cost=3696 Card=159
    Bytes=1431)
    Statistics
    0 recursive calls
    0 db block gets
    14441 consistent gets
    12757 physical reads
    0 redo size
    453 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    SQL> alter session set sql_trace=false;
    Session altered.
    Elapsed: 00:00:00.00
    SQL> alter session set timed_statistics=false;
    Session altered.
    Elapsed: 00:00:00.01
    SQL>

  • Select Query Problem

    Hi Experts,
    I have a select query in which I am using a variable in the WHERE condition, but it is giving an error. Please suggest how to use a variable in the select query.
    The query I am using is as below.
    select * from zexc_rec into table it_ZEXC_REC
          where
           LIFNR in S_LIFNR and
          DOCNO in S_DOCNO and
          DOCTYP in doc_typ and
          DATE1 in S_DATE1 and
          MATNR in S_MATNR.
    Here doc_typ is a variable.
    Thanks.

    Rahul,
    Use a RANGES-type variable instead of a plain variable. It acts like a select-options variable; then use it in the SELECT query with IN.
    Eg:
    RANGES r_t510 FOR t510-lgart.
        r_t510-sign   = 'I'.     " include
        r_t510-option = 'EQ'.    " equal to
        r_t510-low    = '1600'.
        APPEND r_t510.
        r_t510-low    = '3190'.
        APPEND r_t510.
    Note: you can also set the SIGN and OPTION fields of a RANGES type, as done above with 'I'/'EQ'.
    For more details, see the ABAP help on RANGES.
    Rgds,
    Ranjith
