Too much access on a single table

Hi All
We are working on an IS-U installation, and the table DFKKOP is hit by a lot of SQL queries.
We have other tables with the same fields, but they do not hold the volume of records that DFKKOP does.
Is there a way to move the data from DFKKOP to the other tables and then access the other tables for most of the queries? We are looking for better performance and want to avoid sequential reads and direct reads on the table.
Deb

Hello,
Since the table DFKKOP holds a huge number of records, sequential access will definitely result in high DB time.
Have you identified which report causes the high number of sequential reads over a given period?
I would suggest looking at the SQL cache data for table DFKKOP to see which query has the highest DB access time,
and finding the expensive SQL statement that causes the sequential access to the DB.
Look at how the indexes are used by that SQL query. If you see FULL TABLE scans or high estimated costs, you need to create an index on the attributes that are used in the expensive SQL query.
Once you have identified the ABAP code that makes the heavy sequential reads on this table, it is better to talk to the ABAPer about how to optimize the code.
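For illustration only, a sketch of what such an access might look like once a suitable secondary index exists. The field names GPART and AUGST, the index itself, and the variables lt_open_items/lv_gpart are assumptions, not taken from your system; use whichever attributes your expensive statement actually filters on.
  * Hypothetical sketch: read open items for one business partner only,
  * using a field list instead of SELECT * and a WHERE clause that is
  * covered by an assumed secondary index on DFKKOP (e.g. GPART, AUGST).
  SELECT opbel opupk betrw faedn
    FROM dfkkop
    INTO TABLE lt_open_items
    WHERE gpart = lv_gpart
      AND augst = space.    "open (uncleared) items only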
Thanks,
Shyam Dontamsetty

Similar Messages

  • Delta Sync taking too much time on refreshing of tables

    Hi,
    I am working on Smart Service Manager 3.0. I have come across a scenario where the delta sync is taking too much time.
    It is required that if we update the stock quantity then the stock should be updated instantaneously.
    To do this we have to refresh 4 stock tables at every sync so that the updated quantity is reflected in the device.
    This is taking a lot of time (3 to 4 min) which is highly unacceptable from user perspective.
    Could anyone please suggest something so that only those tables get refreshed on which the action is carried out?
    For example, the CTStock table should get refreshed only if I transfer stock, and it should get updated accordingly,
    not in any other scenario, such as changing the status from accept to driving or anything else unrelated to stocks.
    Thanks,
    Star
    Tags edited by: Michael Appleby

    Hi fiontan,
    Thanks a lot for the response!!!
    Yeah!! I know it's a lot of code, but I thought it'd be more informative if the whole function was quoted.
    I'm in fact using the PrintWriter to wrap the BufferedWriter but am not using the print() method.
    Does it save any time by using the print() method??
    The place where the delay is occurring is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush();       // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like that till the records are done.
    Please do let me know if you have any idea as to why this is happening! This bug is giving me a scare.
    thanks in advance

  • Too much access to reader

    How do I log in to Reader? It says that I tried to access too much.

    Please give the exact words of the message, in full. What kind of computer do you have?

  • Sirigate: too much access from lockscreen

    hi there,
    we just "discovered" that you can access many features from out of the lockscreen with siri. with siri you can access lots of stuff from the lockscreen. just tap the home button for a while in the lockscreen and say for example "call 09876534321 " and it will call it. i used to be save from people finding/stealing my phone and try to do calls, send messages or access my calender and notes without having my unlock code. this is some serious secuity bug!! check this video... http://www.youtube.com/watch?v=1qWnNBXtrnA

    You're really talking in circles now just to defend a blatantly wrong claim.
    there shouldn't be a second security stage for preventing 3rd to access lots of information
    1. Even with Siri active on the lock screen, only a limited number of actions can be done with Siri without unlocking the phone - Siri will ask you to unlock to complete the task.  Pretty much the only things you can do are call by saying the number (unless the thief "knows" your contact names), SMS by number, or play a song.
    THERE IS NO "ACCESS TO LOTS OF INFORMATION" EVEN WHEN SIRI RUNS ON THE LOCK SCREEN.
    There are no "second" or "third" security stages.  There are only options shown on the same where the passcode is set.
    not being aware to be not secure if you set a password is another
    2. Anyone with at least 2 digits in their IQ and is concerned enough with security to set a passcode will see Siri switch on the passcode page when they set it.  Most people don't give a s*** which is why Apple defaulted to convenience.  Being able to voice call on a locked phone while driving is VERY important to a lot of people.

  • D3lphin / konqueror - too much free space between single files

    Hi there,
    I'm using Kdemod 3.5.9. The problem I experience is that both d3lphin and konqueror create "gaps" between files. It's hard to explain, so I posted a picture.
    The picture shows my wallpaper collection - but the problem is not bound to images. It does also occur with dirs, videos, sound-files...
    By changing the font and grid-size I was able to change the general free space between files, but the "extra space" between some files remains.
    Any Idea?
    Greetings, yodo
    Last edited by yodo (2008-05-15 19:06:25)

    @funkyou First of all, thanks for your work; after noticing that you have things like kdmtheme included, I gave kdemod a try and it works nicely. Even though your theme looks very cool, I'm staying with qtcurve as before ... I hope this is not a problem for you. [If you want you can do a +1kdemod/-1kde in your survey ]
    funkyou wrote: It works fine for me, but I believe that this has something to do with DPI settings in Xorg...
    I have exactly the same settings as you, with the only exception that sub-pixel hinting is on, but I see the same effect as yodo. If DPI (and/or fonts) could be a problem in this case, then the content of "/etc/fonts/conf.avail" could be important too. But I don't really think that this is the problem, because the icons on the desktop don't have this issue.
    My workaround is switching to the multi-column view (I hope this is the correct translation of "Mehrspaltige Ansicht"), and I've stopped worrying about it because I like this view more and more.
    @yodo Very nice wallpapers, and thanks for prompting me to search InterfaceLIFT.

  • Lacie "using too much power", USB disabled, can't access ext HD?

    Last night, while moving items to my Ext HD, I got an error message about the USB "using too much power" and then it disabled it.
    I'm using a 15'' 2006 Macbook with 10.6.7 operating system.
    I've googled the issue and have run out of ideas.
    The Ext HD is a 500GB LaCie Little Disk New Pack USB2.0 Black
    Restarted the macbook.
    Booted the macbook into safe mode.
    I've tried plugging something else into both my USB ports (it detected the item, a camera card reader) and was usable.
    When I plug in my LaCie Ext HD now using its USB cord, the blue light on the LaCie flashes as if it's detectable/in use, but there's nothing in my Mac menus showing access to it.
    HELP ?
    Error Message:
    Because a USB device was drawing too much power from your computer, one or more of your usb devices have been disabled.
    To prevent damaging your computer, the USB device drawing too much power has been disabled. Other devices may have also been disabled. When you disconnect the device drawing too much power, your other USB devices will be enabled again.

    Maybe the disk will work through a separately powered USB Hub?
    Bob

  • R3load taking too much time when table REPOSRC is loaded

    Hello,
    I am installing SAP ECC 6.0 SR2 on SUN Solaris 10 with DB2 V9.1. 17 of the 19 jobs have been completed in the ABAP Import phase, but the SAPSSEXC job is taking too much time; it has been running for around 10 hours with no error being reported. So I have cancelled the SAP installation. After that I started it through a manual OS command:
    /sapmnt/<SID>/exe/R3load -dbcodepage 4102 -i /<instdir>/SAPSSEXC.cmd -l /<instdir>/SAPSSEXC.log -stop_on_error -merge_bck
    It has also been running for around 9 hours. I do not know why this is happening or when it will be completed.
    Can you help me with what to check to make this job faster, or how to resolve this issue?
    I have checked SAP Notes 454368 and 455195.
    If I change any DB2 parameter, I have to restart the DB2 database. What should I do? I cannot understand what to do now.
    Please help me ASAP.
    Thanks
    Gautam Poddar

    Hello,
    when running the R3load import step manually, you might try to add the option
    -loadprocedure fast LOAD
    Pay attention to write the LOAD in capital letters!
    This will invoke the LOAD-API whenever possible.
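    For illustration, appended to the manual command from the question it would look like this (paths and other options unchanged):
    /sapmnt/<SID>/exe/R3load -dbcodepage 4102 -i /<instdir>/SAPSSEXC.cmd -l /<instdir>/SAPSSEXC.log -stop_on_error -merge_bck -loadprocedure fast LOAD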
    This should save some time on "normal" tables without LOB-columns.
    For your table REPOSRC, which has a BLOB column, the LOAD API will not be used.
    So I am sorry, this will not work for your special case.
    (Thanks for the hint to Frank-Martin Haas)
    Kind regards
    Waldemar Gaida
    Edited by: Waldemar Gaida on Jan 10, 2008 8:26 AM

  • ACCTIT table Taking too much time

    Hi,
    In SE16 on table ACCTIT I entered the G/L account number and executed it in our production system; it is taking too much time to show the result.
    Thanku

    Hi,
    Here I am sending the details of the technical settings.
    Name                 ACCTIT                          Transparent Table
    Short text            Compressed Data from FI/CO Document
    Last Change        SAP              10.02.2005
    Status                 Active           Saved
    Data class         APPL1   Transaction data, transparent tables
    Size category      4       Data records expected: 24,000 to 89,000
    Thanku

  • My high school aged child is spending too much time on Facebook, Tumblr to the detriment of home work.  Is there any way I can limit the access to these sites to between 8pm and 10pm?

    My high-school-aged child is spending too much time on Facebook and Tumblr. Is there any way I can limit access to these sites to between 8pm and 10pm?

    System Preferences>Parental Controls has time limits - check out this intro to Parental Controls from Cult of Mac on YouTube.
    Clinton

  • During the Unicode conversion, cluster table export taking too much time

    Dear All
    During the Unicode conversion, the cluster table export took too much time, approximately 24 hours for 6 tables. Could you please advise how we can reduce this time?
    thanks
    Jainnedra

    Hello,
    Use the latest R3load from the SAP Service Marketplace.
    Also refer to the note:
    Note 1019362 - Very long run times during SPUMG scans
    Regards,
    Nitin Salunkhe

  • Is there a way to have internet access for my laptop through Verizon?  I have a smartphone, but I don't want to rack up too much data.

    Is there a way to have internet access for my laptop through Verizon?  I have a smartphone, but I don't want to rack up too much data.  How would the laptop connect to the  4G? network?  Thanks for any info:)

    Rcshnoor nailed it, everything will depend on how they manage their data connection once the laptop is connected.  It is perfectly feasible that they will have no issues with this new connection.  However, there is nothing stopping the user from going above and beyond the data usage cap currently sufficient for the phone by itself.  We cannot presume that any estimations on new usage will be accurate either.
    Users should always be aware of the reality that revolves around data usage and the devices that consume it.  It's no different from installing a second water line/hose at your house.  Sure the hose will be fine if you remember to turn it off when you are done, but no one is going to stop you from letting it run all night if you forget.

  • IMPDP take too much time at TABLE STATISTICS

    Hi experts.
    One of my databases is taking too much time during IMPDP at TABLE_STATISTICS. It takes almost 20 minutes to import TABLE_STATISTICS, while all other objects are imported within 5 to 7 minutes.
    My database version is 10.2.0.1 and the OS is Red Hat Enterprise Linux AS release 3 (Taroon Update 5).
    Please guide me on how I can fix this issue.
    Thanks in advance.
    regards

    please guide how i can fix this issue
    Seems to be a bug. Upgrade to 10.2.0.4.0 or 10.2.0.5.0 and check again.
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de

  • Why are webpages I visit not fitting the screen? There is too much space on the left side and I cannot access my scroll bars.

    The pages I visit on the net seem to be pushed more to the right, leaving too much space on the left side. Therefore, I cannot access the vertical or horizontal scroll bars.

    Reset the page zoom on pages that cause problems: <b>View > Zoom > Reset</b> (Ctrl+0 (zero); Cmd+0 on Mac)
    *http://kb.mozillazine.org/Zoom_text_of_web_pages
    Start Firefox in <u>[[Safe Mode]]</u> to check if one of the extensions is causing the problem (switch to the DEFAULT theme: Firefox (Tools) > Add-ons > Appearance/Themes).
    *Don't make any changes on the Safe mode start window.
    *https://support.mozilla.com/kb/Safe+Mode
    *https://support.mozilla.com/kb/Troubleshooting+extensions+and+themes

  • SELECT query takes too much time! Y?

    Please find my SELECT query below:
    select w~mandt
    w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
    w~kwmeng w~vrkme w~matwa w~charg w~pstyv
    w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
    w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
    w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
    w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
    w~bedae w~cuobj w~mtvfp
    x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
    x~edatu
    x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
    x~ezeit
    into table t_vbap
    from vbap as w
    inner join vbep as x on x~vbeln = w~vbeln and
    x~posnr = w~posnr and
    x~mandt = w~mandt
    where
    ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
    ( ( ( erdat > pre_dat and erdat < p_syndt ) or
    ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
    w~matnr in s_matnr and
    w~pstyv in s_itmcat and
    w~lfrel in s_lfrel and
    w~abgru = ' ' and
    w~kwmeng > 0 and
    w~mtvfp in w_mtvfp and
    x~ettyp in w_ettyp and
    x~bdart in s_req_tp and
    x~plart in s_pln_tp and
    x~etart in s_etart and
    x~abart in s_abart and
    ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: It takes too much time while executing this statement.
    Could anybody change this statement and help me out to reduce the DB Access time?
    Thx

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    •     Select over more than one internal table
    Selection Criteria
    1.     Restrict the data at the database level via the selection criteria itself, rather than filtering it out in the ABAP code using a CHECK statement.
    2.     Select with selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
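    For example (the secondary index assumed here on CARRID and FLDATE is an illustration only, not a standard index), the WHERE clause would name the index fields in the same order in which they appear in the index:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'          "first field of the assumed index
        AND FLDATE = '20240101'.   "second field of the assumed index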
    4.     For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    If all primary key fields are supplied in the Where conditions you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
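    For example (the key values are made up for illustration), with the full primary key of SBOOK (CARRID, CONNID, FLDATE, BOOKID, plus the client) supplied:
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'
        AND FLDATE = '20240101'
        AND BOOKID = '00000001'.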
    Select Statements SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves the performance considerably.
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements  Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The  above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements  For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered when using FOR ALL ENTRIES:
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
              Loop at int_cntry.
      Select single * from zfligh into int_fligh
      where cntry = int_cntry-cntry.
      Append int_fligh.
                          Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements Select Over more than one Internal table
    1.     It's better to use a view instead of nested Select statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V:
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    2.     To read data from several logically connected tables, use a join instead of nested Select statements. Joins are preferred only if all the primary key fields of the joined tables are available in the WHERE/ON conditions. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
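    As a sketch of point 4 (assuming "secondary index" here means sorting the internal table by another, non-key field before the read; the field name F2 is an assumption):
    SORT ITAB BY F2.
    READ TABLE ITAB INTO WA WITH KEY F2 = 'X' BINARY SEARCH.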
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 ..." accelerates the task of updating a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to "LOOP-APPEND-ENDLOOP".
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably as compared to "READ-LOOP-DELETE-ENDLOOP".
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to "DO-DELETE-ENDDO".
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using "ITAB2[] = ITAB1[]" is much faster than copying them with "LOOP-APPEND-ENDLOOP".
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    "SORT ITAB BY K." makes the program run faster than "SORT ITAB."
    Internal Tables (contd.)
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
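    The table declarations are not shown in the original; they might look like this (the line type ty_line and the key field K are assumptions):
    TYPES: BEGIN OF ty_line,
             k   TYPE i,
             val TYPE i,
           END OF ty_line.
    DATA: HTAB TYPE HASHED TABLE OF ty_line WITH UNIQUE KEY k,
          STAB TYPE SORTED TABLE OF ty_line WITH NON-UNIQUE KEY k.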
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Code  taking too much time to output

    The following code is taking too much time to execute (sometimes it even gives a TIME_OUT).
    ind = sy-tabix.
        SELECT SINGLE * FROM mseg INTO mseg
           WHERE bwart = '102' AND
                 lfbnr = itab-mblnr AND
                 ebeln = itab-ebeln AND
                 ebelp = itab-ebelp.
        IF sy-subrc = 0.
          DELETE itab INDEX ind.
          CONTINUE.
    Is there any other way to write this code to reduce the output time.
    Thanks

    Hi,
    I think you are executing this code in a loop which is causing the problem. The rule is "Never put SELECT statements inside a loop".
    Try to rewrite the code as follows:
    * Outside the loop
    SELECT *
      FROM mseg
      INTO TABLE lt_mseg
      FOR ALL ENTRIES IN itab
      WHERE bwart = '102' AND
            lfbnr = itab-mblnr AND
            ebeln = itab-ebeln AND
            ebelp = itab-ebelp.
    Then inside the loop, do a READ on the internal table instead of a SELECT:
    LOOP AT itab.
      READ TABLE lt_mseg WITH KEY lfbnr = itab-mblnr
                                  ebeln = itab-ebeln
                                  ebelp = itab-ebelp
                         TRANSPORTING NO FIELDS. "bwart is already restricted to '102' above
      IF sy-subrc = 0. "a matching 102 document exists, so drop the line as in the original code
        DELETE itab. "index is automatically determined here from SY-TABIX
      ENDIF.
    ENDLOOP.
    I think this should optimise performance. You can check your code's performance using SE30 or ST05.
    Hope this helps! Please revert if you need anything else!!
    Cheers,
    Shailesh.
    Always provide feedback for helpful answers!
