TIME_OUT dump: this query takes too long

Hi experts,
a query in my report is taking too long to run.
Please provide performance tips or suggestions.
select mkpf~mblnr  mkpf~mjahr  mkpf~usnam  mkpf~vgart    
       mkpf~xabln  mkpf~xblnr  mkpf~zshift mkpf~frbnr    
       mkpf~bktxt  mkpf~bldat  mkpf~budat  mkpf~cpudt    
       mkpf~cputm  mseg~anln1  mseg~anln2  mseg~aplzl    
       mseg~aufnr  mseg~aufpl  mseg~bpmng  mseg~bprme    
       mseg~bstme  mseg~bstmg  mseg~bukrs  mseg~bwart    
       mseg~bwtar  mseg~charg  mseg~dmbtr  mseg~ebeln    
       mseg~ebelp  mseg~erfme  mseg~erfmg  mseg~exbwr    
       mseg~exvkw  mseg~grund  mseg~kdauf  mseg~kdein    
       mseg~kdpos  mseg~kostl  mseg~kunnr  mseg~kzbew    
       mseg~kzvbr  mseg~kzzug  mseg~lgort  mseg~lifnr    
       mseg~matnr  mseg~meins  mseg~menge  mseg~lsmng    
       mseg~nplnr  mseg~ps_psp_pnr  mseg~rsnum  mseg~rspos
       mseg~shkzg  mseg~sobkz  mseg~vkwrt  mseg~waers    
       mseg~werks  mseg~xauto  mseg~zeile  mseg~sgtxt
    into table itab                                      
       from mkpf as mkpf                                 
        inner join mseg as mseg                          
                on mkpf~mblnr = mseg~mblnr
               and mkpf~mjahr = mseg~mjahr.

Now, the original query is as follows; I use a WHERE clause with these conditions:
select mkpf~mblnr  mkpf~mjahr  mkpf~usnam  mkpf~vgart
       mkpf~xabln  mkpf~xblnr  mkpf~zshift mkpf~frbnr
       mkpf~bktxt  mkpf~bldat  mkpf~budat  mkpf~cpudt
       mkpf~cputm  mseg~anln1  mseg~anln2  mseg~aplzl
       mseg~aufnr  mseg~aufpl  mseg~bpmng  mseg~bprme
       mseg~bstme  mseg~bstmg  mseg~bukrs  mseg~bwart
       mseg~bwtar  mseg~charg  mseg~dmbtr  mseg~ebeln
       mseg~ebelp  mseg~erfme  mseg~erfmg  mseg~exbwr
       mseg~exvkw  mseg~grund  mseg~kdauf  mseg~kdein
       mseg~kdpos  mseg~kostl  mseg~kunnr  mseg~kzbew
       mseg~kzvbr  mseg~kzzug  mseg~lgort  mseg~lifnr
       mseg~matnr  mseg~meins  mseg~menge  mseg~lsmng
       mseg~nplnr  mseg~ps_psp_pnr  mseg~rsnum  mseg~rspos
       mseg~shkzg  mseg~sobkz  mseg~vkwrt  mseg~waers
       mseg~werks  mseg~xauto  mseg~zeile  mseg~sgtxt
    into table itab
       from mkpf as mkpf
        inner join mseg as mseg
                on mkpf~mblnr = mseg~mblnr
               and mkpf~mjahr = mseg~mjahr
    WHERE mkpf~budat IN budat
      AND mkpf~usnam IN usnam
      AND mkpf~vgart IN vgart
      AND mkpf~xblnr IN xblnr
      AND mkpf~zshift IN p_shift
      AND mseg~bwart IN bwart
      AND mseg~matnr IN matnr
      AND mseg~werks IN werks
      AND mseg~lgort IN lgort
      AND mseg~charg IN charg
      AND mseg~sobkz IN sobkz
      AND mseg~lifnr IN lifnr
      AND mseg~kunnr IN kunnr.
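
A commonly suggested restructuring for this kind of MKPF/MSEG join (a sketch only, not tested against this report; lt_mkpf, lt_mseg and their types are illustrative names) is to read the restricted MKPF headers first, then fetch the MSEG items with FOR ALL ENTRIES on the full document key MBLNR/MJAHR, so the database can use the MSEG primary index:

TYPES: BEGIN OF ty_mkpf_key,
         mblnr TYPE mkpf-mblnr,
         mjahr TYPE mkpf-mjahr,
       END OF ty_mkpf_key.
TYPES: BEGIN OF ty_mseg_item,
         mblnr TYPE mseg-mblnr,
         mjahr TYPE mseg-mjahr,
         zeile TYPE mseg-zeile,
         bwart TYPE mseg-bwart,
         matnr TYPE mseg-matnr,
         werks TYPE mseg-werks,
         lgort TYPE mseg-lgort,
         charg TYPE mseg-charg,
         sobkz TYPE mseg-sobkz,
         lifnr TYPE mseg-lifnr,
         kunnr TYPE mseg-kunnr,
       END OF ty_mseg_item.
DATA: lt_mkpf TYPE STANDARD TABLE OF ty_mkpf_key,
      lt_mseg TYPE STANDARD TABLE OF ty_mseg_item.

* Step 1: restrict on the header table only, using the header-level criteria.
SELECT mblnr mjahr
  INTO TABLE lt_mkpf
  FROM mkpf
  WHERE budat  IN budat
    AND usnam  IN usnam
    AND vgart  IN vgart
    AND xblnr  IN xblnr
    AND zshift IN p_shift.

* Step 2: read the items by their full document key. FOR ALL ENTRIES must
* never run with an empty driver table, or the WHERE clause collapses and
* far more is selected than intended.
IF lt_mkpf[] IS NOT INITIAL.
  SELECT mblnr mjahr zeile bwart matnr werks lgort
         charg sobkz lifnr kunnr
    INTO TABLE lt_mseg
    FROM mseg
    FOR ALL ENTRIES IN lt_mkpf
    WHERE mblnr = lt_mkpf-mblnr
      AND mjahr = lt_mkpf-mjahr
      AND bwart IN bwart
      AND matnr IN matnr
      AND werks IN werks
      AND lgort IN lgort
      AND charg IN charg
      AND sobkz IN sobkz
      AND lifnr IN lifnr
      AND kunnr IN kunnr.
ENDIF.

Whether this beats the plain join depends on how selective the header criteria are. If all select-options can be left empty, the statement degenerates to a full scan of MSEG either way, so it also helps to require at least one selective criterion (for example BUDAT) and to check that an index on MKPF covers it.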

Similar Messages

  • Query takes a long time on EBAN table

    Hi,
    I am trying to execute a simple select statement on the EBAN table. This query takes an unexpectedly long time to execute.
    The query is:
    SELECT banfn bnfpo ernam badat ebeln ebelp
          INTO TABLE gt_eban
          FROM eban FOR ALL ENTRIES IN gt_ekko_ekpo
          WHERE
          banfn IN s_banfn AND
          ernam IN s_ernam
          and ebeln = gt_ekko_ekpo-ebeln AND
          ebelp = gt_ekko_ekpo-ebelp.
    Structure of gt_ekko_ekpo
    TYPES : BEGIN OF ty_ekko_ekpo,
            ebeln TYPE ekko-ebeln,
            ebelp TYPE ekpo-ebelp,
            bukrs TYPE ekko-bukrs,
            aedat TYPE ekko-aedat,
            lifnr TYPE ekko-lifnr,
            ekorg TYPE ekko-ekorg,
            ekgrp TYPE ekko-ekgrp,
            waers TYPE ekko-waers,
            bedat TYPE ekko-bedat,
            otb_value TYPE ekko-otb_value,
            otb_res_value TYPE ekko-otb_res_value,
            matnr TYPE ekpo-matnr,
            werks TYPE ekpo-werks,
            matkl TYPE ekpo-matkl,
            elikz TYPE ekpo-elikz,
            wepos TYPE ekpo-wepos,
            emlif TYPE ekpo-emlif,
      END OF ty_ekko_ekpo.
    Structure of GT_EBAN
    TYPES : BEGIN OF ty_eban,
      banfn TYPE eban-banfn,
      bnfpo TYPE eban-bnfpo,
      ernam TYPE eban-ernam,
      badat TYPE eban-badat,
      ebeln TYPE eban-ebeln,
      ebelp TYPE eban-ebelp,
      END OF ty_eban.
    Query seems to be OK to me. But still am not able to figure out the reason for this performance issue.
    Please provide your inputs.
    Thanks.
    Richa

    Hi Richa,
    Maybe you are executing the query with S_BANFN empty. Still, based on SAP Note 191492, you should change your query to something like the following.
    1st Suggestion:
    if gt_ekko_ekpo[] is not initial.
      select banfn bnfpo
        into table gt_eket
        from eket for all entries in gt_ekko_ekpo
        where ebeln = gt_ekko_ekpo-ebeln
          and ebelp = gt_ekko_ekpo-ebelp.
      if sy-subrc = 0.
        delete gt_eket where banfn not in s_banfn.
        if gt_eket[] is not initial.
          select banfn bnfpo ernam badat ebeln ebelp
            into table gt_eban
            from eban for all entries in gt_eket
            where banfn = gt_eket-banfn
              and bnfpo = gt_eket-bnfpo.
          if sy-subrc = 0.
            delete gt_eban where ernam not in s_ernam.
          endif.
        endif.
      endif.
    endif.
    2nd Suggestion:
    if gt_ekko_ekpo[] is not initial.
      select banfn bnfpo
        into table gt_eket
        from eket for all entries in gt_ekko_ekpo
        where ebeln = gt_ekko_ekpo-ebeln
          and ebelp = gt_ekko_ekpo-ebelp.
      if sy-subrc = 0.
        delete gt_eket where banfn not in s_banfn.
        if gt_eket[] is not initial.
          select banfn bnfpo ernam badat ebeln ebelp
            into table gt_eban
            from eban for all entries in gt_eket
            where banfn = gt_eket-banfn
              and bnfpo = gt_eket-bnfpo
              and ernam in s_ernam.
        endif.
      endif.
    endif.
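    Also, in both suggestions it is worth guarding the FOR ALL ENTRIES driver table (a general safeguard, not specific to this report): sort it and remove duplicates first, so the database is not queried for the same key twice.
    sort gt_ekko_ekpo by ebeln ebelp.
    delete adjacent duplicates from gt_ekko_ekpo comparing ebeln ebelp.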
    Hope this helps.
    Regards,
    R

  • My Query takes too long ...

    Hi ,
    Environment: DB 10g, OS Linux Red Hat; my DB size is about 80 GB.
    My query takes too long, about 5 days to get results. Can you please help me rewrite this query in a better way?
    declare
      x          number;
      y          date;
      start_date date;
      mdn        varchar2(12);
      topup      varchar2(50);
    begin
      for first_bundle in
        ( select min(date_time_of_event) date_time_of_event,
                 account_identifier, top_up_profile_name
            from bundlepur
           where account_profile = 'Basic'
             and account_identifier = '665004664'
             and in_service_result_indicator = 0
             and network_cause_result_indicator = 0
             and date_time_of_event >= to_date('16/07/2013', 'dd/mm/yyyy')
           group by account_identifier, top_up_profile_name
           order by date_time_of_event )
      loop
        select sum(units_per_tariff_rum2), max(date_time_of_event)
          into x, y
          from old_lte_cdr
         where account_identifier = (select first_bundle.account_identifier from dual)
           and date_time_of_event >= (select first_bundle.date_time_of_event from dual)
           -- no more than a month
           and date_time_of_event < (select add_months(first_bundle.date_time_of_event, 1) from dual)
           -- finished his bundle, then bought a new one
           and date_time_of_event < ( select min(date_time_of_event)
                                        from old_lte_cdr
                                       where date_time_of_event > (select first_bundle.date_time_of_event + 1/24 from dual)
                                         and in_service_result_indicator = 26 );
        select first_bundle.account_identifier, first_bundle.top_up_profile_name,
               first_bundle.date_time_of_event
          into mdn, topup, start_date
          from dual;
        insert into consumed1 values (x, topup, mdn, start_date, y);
      end loop;
      commit;
    end;

    > where account_identifier=(select first_bundle.account_identifier from dual)
    Why are you doing this?  It's a completely unnecessary subquery.
    Just do this:
    where account_identifier = first_bundle.account_identifier
    Same for all your other FROM DUAL subqueries.  Get rid of them.
    More importantly, don't use a cursor for loop.  Just write one big INSERT statement that does what you want.

  • Report Takes Too Long Time

    Hi!
    I am in trouble. The following is the query:
    SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
    sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
    pldesc, i.pmempno, pmname, i.empid, empname
    FROM inv_reg i,
    cat_reg c,
    sub_cat_reg s,
    gen_desc_reg g,
    ploc p,
    province r,
    pmaster m,
    iemp_reg e
    WHERE i.sub_cat_id = s.sub_cat_id
    AND i.cat_id = s.cat_id
    AND s.cat_id = c.cat_id
    AND i.bl_id = g.gen_id
    AND i.cur_loc = p.plcode
    AND p.prvcode = r.prvcode
    AND i.pmempno = m.pmempno(+)
    AND i.empid = e.empid(+)
    &wc
    order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
    This query returns 32000 records, and when I run it in Reports 10g
    it takes 10 to 20 minutes to generate the report.
    How can I optimize it?

    Hi Waqas Attari
    Please study and try this:
    When your query takes too long ...
    hope it helps....
    Regards,
    Abdetu...

  • OPM process execution process parameters takes too long time to complete

    PROCESS_PARAMETERS are inserted every 15 minutes using gme_api_pub packages. Sometimes it takes too long to complete the batch, i.e. the completion of the request: it can take about 5-6 hours, while at other times it takes only 15-20 minutes. This happens at regular intervals. If anybody can guide me, I will be thankful to him/her.
    thanks in advance.
    regds,
    Shailesh

    Generally the slowest part of the process is in the extraction itself...
    Check in your source system and see how long the processes are taking, and whether there are delays, locks or dumps in the database. If your source is R/3 or ECC, transactions like SM37, SM21 and ST22 can help monitor this activity.
    Consider running fewer processes in parallel if you have too many and see delays in jobs. Also consider indexing some of the tables in the source system to expedite the extraction, and make sure there are no heavy processes or interfaces running in the source system at the same time you're trying to load. Check with your Basis guys for activity peaks and plan accordingly.
    In BW also check in your SM21 for database errors or delays...
    Just some ideas...

  • Parsing the query takes too much time.

    Hello.
    I am hitting a bug in Oracle XE (parsing some queries takes too much time).
    A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
    Please raise a bug for Oracle XE.
    Steps to reproduce the issue:
    1. Extract files from testcase_dump.zip and testcase_sql.zip
    2. Under username SYSTEM execute script schema.sql
    3. Import data from file TESTCASE14.DMP
    4. Under username SYSTEM execute script testcase14.sql
    SQL text can be downloaded from http://files.mail.ru/DJTTE3
    Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
    Regards,
    Viacheslav.

    Bug number? Version fix applies to?
    Relevant Note that describes the problem and points out bug/patch availability?
    With a little luck some PSEs might be "backported", since 11g XE is not the base release, e.g. 11.2.0.1.

  • Query takes a long time

    Hi,
    I have a simple query that takes a long time and sometimes hangs. I want to know if there are any orders that were shipped but are not in the ra_interface_lines_all table for one reason or another. The query is
    select source_header_number from wsh_delivery_details
    where inv_interfaced_flag = 'Y'
    and released_status = 'C'
    and oe_interfaced_flag = 'Y'
    and creation_date > 'DD-MON-YYYY'
    and source_header_number not in (select interface_line_attribute1 from
    ra_interface_lines_all
    where ship_date_actual >'DD-MON-YYYY')
    Thanks
    A/A

    Follow the thread posted to generate an execution plan. Ideally you run your statement with SQL tracing enabled and run the tkprof utility on the trace file generated as outlined in the thread.
    Without further information it's just guess-work. And by the way, as already mentioned, use the TO_DATE function for DATE type literals or use the ANSI DATE literal format: DATE 'YYYY-MM-DD', otherwise you are depending on the NLS settings of the client executing the query. So:
    Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the [code] and [/code] tags to enhance the readability of the output provided.
    In SQL*Plus:
    SET LINESIZE 130
    EXPLAIN PLAN FOR <your statement>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that the package DBMS_XPLAN.DISPLAY is only available from 9i on.
    In previous versions you could run the following in SQL*Plus (on the server) instead:
    @?/rdbms/admin/utlxpls
    A different approach in SQL*Plus:
    SET AUTOTRACE ON EXPLAIN
    <run your statement>;
    will also show the execution plan.
    In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
    and post the "tkprof" output here, too.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Displaying PathGeometry into canvas take too long time

    Hi every one,
    I am working on spatial data with an application WPF and I would like to draw a map from data queries.
    I got my data and I am able to draw them on a canvas with scroll bar and zoom.
    My problem is that when I zoom or scroll, the canvas takes a long time to redraw the map.
    The big problem is here:
    Dim poly As New Polygon
    poly.Stroke = Brushes.Red
    poly.Fill = Brushes.Orange
    poly.StrokeThickness = 5
    Dim colPoint As New PointCollection
    For i = 1 To geo.STNumPoints
        colPoint.Add(New Point(geo.STPointN(i).STX.Value - extentA, geo.STPointN(i).STY.Value - extentB))
    Next
    ' Close the ring by repeating the first point
    colPoint.Add(New Point(geo.STPointN(1).STX.Value - extentA, geo.STPointN(1).STY.Value - extentB))
    poly.Points = colPoint
    masterCanvas.Children.Add(poly)
    So the canvas is "masterCanvas" and I add each Polygon one by one. I would like, if possible, to gather all polygons into one "image", so that when the canvas refreshes it only has to draw one thing.
    Regards.

    Hi,
    I found that you have posted it in the forum Acamar suggested.
    http://social.msdn.microsoft.com/Forums/vstudio/en-US/e2d50b4e-0d76-4a3c-b229-521b648b54b3/displaying-pathgeometry-into-canvas-take-too-long-time?forum=wpf#e2d50b4e-0d76-4a3c-b229-521b648b54b3
    Please just focus on that thread, and you can mark any reply that is helpful as the answer.
    Thanks for your understanding.
    Regards.

  • I don't know why it takes too long time to sample flat file.

    I don't know why it takes so long to sample a flat file.
    OWB Client 10.1.
    While importing a flat file of fixed width, the "Flat File Sample Wizard" screen shows a 'number of rows' text box with a default value of 200.
    I want to extend this value to 700,000.
    But it takes too long (over 5 hours) to sample it.
    Do you know why this happens, or how can I fix this problem?
    Thanks in advance.
    Regards,
    JWS.

    Hello,
    Actually, the goal of the flat file sampling process is to capture the structure of the file. That's why the sample size is initially set to 200 lines.
    The question is why you are trying to sample 700,000 rows. Are you expecting some change in structure beyond this mark?
    If so, and you want to capture the fact that your source file is multi-typed, you had better prepare a small file for sampling outside of OWB.
    Sergey

  • Sending mail takes too long (Exchange 2011 on iPhone 4)

    When I try to send mail, it takes too long; messages stay in the Outbox folder for a long time.
    I'm using Exchange 2011 on an iPhone 4 with the latest updates.

    Hi,
    This is a characteristic of some, but not all, ISPs, and not the SMTP itself. It involves DNS. See:
    http://discussions.apple.com/thread.jspa?messageID=10391853&#10391853
    Ernie

  • Having moved my iTunes folder to an external hard drive and reset its location in Preferences, iTunes can't find my songs and I have to point it to every one. I have a month's worth of songs on there, so this will take a very long time. Any ideas on what's up?

    Having moved my iTunes folder to a new external hard drive and reset its location in Preferences, iTunes can't find my songs: a little ! pops up next to each song and a dialogue box says iTunes can't find it, so I have to search for it manually. I have a month's worth of songs on there, so this will take a very long time. Any ideas on what's up?

    nellydee wrote:
    Having moved my Itunes folder to a new external hardrive and resetting where it is in preferences
    That's not how to do it. You don't move stuff and then set iTunes prefs > Advanced.
    Once you move the /iTunes/ folder, hold Option, launch iTunes, select Choose library..., and select the iTunes folder on the external drive.
    This will use the iTunes library file in the iTunes folder you moved.
    Don't change iTunes prefs > Advanced.

  • HT4927 "This may take a long time, depending on the size of the library." How long is a long time? Mine has been rebuilding for over 24 hours now.

    "This may take a long time, depending on the size of the library". How long is a long time? Mine has been rebuilding now for over 24hrs?

    "This may take a long time, depending on the size of the library"
    How big is your library?
    OT

  • SQL query takes too long to enter the first line

    Hi Friends,
    I am using SQL Server 2008. I am running a query to fetch data from the database. When I run it for the first time after executing "DBCC FREEPROCCACHE" to clear the cache memory, it takes too long (7 to 9 seconds) to enter the first
    line of the stored procedure. After it enters the first statement of the SP, it fetches the data within a second, so I think there is no problem with the SQL query itself. Kindly let me know if you know the reason behind this.
    Sample example:
    CREATE PROCEDURE Sp_Name
    AS
    BEGIN
        PRINT GETDATE()
        -- SQL statements for fetching data
        PRINT GETDATE()
    END
    In the above example, there is no difference between the first date and the second date.
    Please help me to trouble shooting this problem.
    Thanks & Regards,
    Rajkumar.R

    > When I run it for the first time after executing "DBCC FREEPROCCACHE" to clear the cache memory, it takes too long (7 to 9 seconds)
    In addition to Manoj: DBCC FREEPROCCACHE clears the procedure cache, so all stored procedures must be newly compiled on the first call.
    Olaf Helper

  • SELECT query takes too much time! Why?

    Please find my SELECT query below:
    select w~mandt
           w~vbeln  w~posnr  w~meins  w~matnr  w~werks  w~netwr
           w~kwmeng w~vrkme  w~matwa  w~charg  w~pstyv
           w~posar  w~prodh  w~grkor  w~antlf  w~kztlf  w~lprio
           w~vstel  w~route  w~umvkz  w~umvkn  w~abgru  w~untto
           w~awahr  w~erdat  w~erzet  w~fixmg  w~prctr  w~vpmat
           w~vpwrk  w~mvgr1  w~mvgr2  w~mvgr3  w~mvgr4  w~mvgr5
           w~bedae  w~cuobj  w~mtvfp
           x~etenr  x~wmeng  x~bmeng  x~ettyp  x~wepos  x~abart
           x~edatu  x~tddat  x~mbdat  x~lddat  x~wadat  x~abruf
           x~etart  x~ezeit
      into table t_vbap
      from vbap as w
      inner join vbep as x on x~vbeln = w~vbeln and
                              x~posnr = w~posnr and
                              x~mandt = w~mandt
      where ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
            ( ( w~erdat > pre_dat and w~erdat < p_syndt ) or
              ( w~erdat = p_syndt and w~erzet <= p_syntm ) ) and
            w~matnr in s_matnr and
            w~pstyv in s_itmcat and
            w~lfrel in s_lfrel and
            w~abgru = ' ' and
            w~kwmeng > 0 and
            w~mtvfp in w_mtvfp and
            x~ettyp in w_ettyp and
            x~bdart in s_req_tp and
            x~plart in s_pln_tp and
            x~etart in s_etart and
            x~abart in s_abart and
            ( x~lifsp in s_lifsp or x~lifsp = ' ' ).
    The problem: it takes too much time to execute this statement.
    Could anybody change this statement and help me reduce the DB access time?
    Thanks

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    •     Select Over more than one internal table
    Selection Criteria
    1.     Restrict the data using the selection criteria itself, rather than filtering it out in ABAP code using the CHECK statement.
    2.     Select with selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below, which avoids CHECK and selects with a selection list:
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT PF1 PF2 FF3 FF4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON PEBELN = FEBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below, which avoids CHECK, selects with a selection list, and puts the data in one shot using INTO TABLE:
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
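    For illustration (a sketch with made-up names: assume table ZSTOCK has a secondary index on WERKS and MATNR, in that order), the WHERE clause would then be phrased as:
    SELECT MATNR WERKS LABST FROM ZSTOCK INTO TABLE T_STOCK
      WHERE WERKS = P_WERKS
        AND MATNR IN S_MATNR.
    Here the index fields appear first and in index order, so the optimizer can match the statement to the secondary index.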
    4.     For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use Select Single if all primary key fields are supplied in the Where condition.
    Select Single requires only one communication with the database system, whereas Select-Endselect needs two.
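    For example (a sketch in the style of the examples above; the key values are illustrative):
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'
        AND FLDATE = '19990101'
        AND BOOKID = '00000001'.
    All primary key fields of SBOOK (CARRID, CONNID, FLDATE and BOOKID, besides the client) are supplied, so the database returns at most one row in a single round trip.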
    Select Statements SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves the performance considerably.
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements  Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements  For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered when using FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
              Loop at int_cntry.
                Select single * from zfligh into int_fligh
                  where cntry = int_cntry-cntry.
                Append int_fligh.
              Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements Select Over more than one Internal table
    1.     It's better to use a view instead of nested Select statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V:
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    4.     A binary search using a secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    “SORT ITAB BY K.” makes the program runs faster as compared to “SORT ITAB.”
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • This report takes a long time

    Hello:
    I have this report and it takes a long time to run; how can I optimize it?
    Thank you very much.
    ======= THIS IS MY REPORT =======
    { NAMEWIDTH 15 }
    { WIDTH 15 }
    { SUPFEED }
    <SUPSHARE
    {DECIMAL 2}
    { NOINDENTGEN }
    { SUPHEADING }
    {ROWREPEAT}
    <SPARSE
    { SUPBRACKETS }
    <COLUMN (Accounts,Scenarios)
    "Mov_Ppto"
    Real
    &Esc_Rep
    <PAGE("Time Periods", Years, Moneda, Periodicidad, Producto, Versions)
    Producto
    &Mes_Rep
    &Year_Rep
    Moneda
    Mensual
    Final
    <RESTRICT(@DATACOL(1) = #Missing OR @DATACOL(2) = #Missing)
    <ROW (Cliente, Ejecutivos,"Jefe de Grupo",Entities)
    <LINK (<LEV("Cliente",0) AND <DESCENDANTS("CLIENTES BE"))
    <LINK (<GEN("Ejecutivos",3))
    <LINK (<GEN("Jefe de Grupo",3))
    <LINK (<GEN("Entities",5) AND NOT <DESCENDANTS(U4728))
    ======= THIS IS THE DATABASE INFORMATION =======
    Name : Plan1
    Application Name : BEMP_DES
    Database Type : NORMAL
    Status : Loaded
    Elapsed Db Time : 00:01:07:41
    Users Connected : 2
    Blocks Locked : 0
    Dimensions : 12
    Data Status : Data has not been modified
    since last calculation.
    Data File Cache Size Setting : 0
    Current Data File Cache Size : 0
    Data Cache Size Setting : 268393440
    Current Data Cache Size : 2086560
    Index Cache Size Setting : 307200000
    Current Index Cache Size : 307200000
    Index Page Size Setting : 8192
    Current Index Page Size : 8192
    Cache Memory Locking : Disabled
    Database State : Read-write
    Data Compression on Disk : Yes
    Data Compression Type : BitMap Compression
    Retrieval Buffer Size (in K) : 90
    Retrieval Sort Buffer Size (in K) : 90
    Isolation Level : Uncommitted Access
    Pre Image Access : No
    Time Out : Never
    Number of blocks modified before internal commit : 3000
    Number of rows to data load before internal commit : 0
    Number of disk volume definitions : 1
    1) Vol: f, Size: Unlimited, File Type: 3, Size: 2097152K
    Currency Country Dimension Member : Entities
    Currency Time Dimension Member : Time Periods
    Currency Category Dimension Member : Accounts
    Currency Type Dimension Member :
    Currency Partition Member :
    Request Info
    Request Type : Data Load
    User Name : admin
    Start Time : Fri Apr 18 17:45:05 2008
    End Time : Fri Apr 18 17:46:05 2008
    Request Type : Customized Calculation
    User Name : admin
    Start Time : Tue Apr 29 18:32:05 2008
    End Time : Wed Apr 30 04:14:37 2008
    Request Type : Outline Update
    User Name : admin
    Start Time : Sun Apr 06 16:24:58 2008
    End Time : Sun Apr 06 17:50:46 2008
    Description:
    Allow Database to Start : Yes
    Start Database when Application Starts : Yes
    Access Level : None
    Data File Cache Size : 33554432
    Data Cache Size : 268435456
    Aggregate Missing Values : No
    Perform two pass calc when [CALC ALL;] : Yes
    Create blocks on equation : Yes
    Currency DB Name : N/A
    Currency Conversion Type Member : N/A
    Currency Conversion Type : N/A
    Index Cache Size : 307200000
    Index Page Size : 8192
    Cache Memory Locking : Disabled
    Data Compression on Disk : Yes
    Data Compression Type : BitMap Compression
    Retrieval Buffer Size (in K) : 90
    Retrieval Sort Buffer Size (in K) : 90
    Isolation Level : Uncommitted Access
    Pre Image Access : Yes
    Time Out after : 20 sec.
    Number of blocks modified before internal commit : 3000
    Number of rows to data load before internal commit : 0
    Number of disk volume definitions : 1
    1) Vol: f, Size: Unlimited, File Type: 3, Size: 2097152K
    I/O Access Mode (pending) : Buffered
    I/O Access Mode (in use) : Buffered
    Direct I/O Type (in use) : N/A

