Performance Tuning Tips

Dear All,
In our project we are facing a lot of performance problems; users are complaining about the poor performance of a few reports. We are in the process of fine-tuning the reports by following the methods/suggestions provided by SAP (removing SELECT queries from loops, FOR ALL ENTRIES, binary search, etc.).
But I would still like to know from you what we can check from the BASIS perspective (all the relevant settings) and also from the ABAP perspective to improve the performance.
I also have one more question: what are "Table Statistics", and what are they used for?
Please give us your valuable suggestions for improving the performance.
Thanks in advance!

Hi
<b>Ways of Performance Tuning</b>
1.     Selection Criteria
2.     Select Statements
•     Select Queries
•     SQL Interface
•     Aggregate Functions
•     For all Entries
•     Select over more than one database table
<b>Selection Criteria</b>
1.     Restrict the data in the selection criteria (WHERE clause) itself, rather than filtering it out in the ABAP code with a CHECK statement.
2.     Select with selection list.
<b>Points # 1/2</b>
SELECT * FROM SBOOK INTO SBOOK_WA.
  CHECK: SBOOK_WA-CARRID = 'LH' AND
         SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized considerably by the version below, which avoids CHECK and selects with a selection list:
SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
<b>Select Statements   Select Queries</b>
1.     Avoid nested selects
2.     Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than using APPEND statements.
3.     When a base table has multiple indexes, the WHERE clause should follow the field order of one of the indexes, either the primary or a secondary index.
4.     For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
5.     Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
<b>Point # 1</b>
SELECT * FROM EKKO INTO EKKO_WA.
  SELECT * FROM EKAN INTO EKAN_WA
      WHERE EBELN = EKKO_WA-EBELN.
  ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
<b>Point # 2</b>
SELECT * FROM SBOOK INTO SBOOK_WA.
  CHECK: SBOOK_WA-CARRID = 'LH' AND
         SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized considerably by the version below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
<b>Point # 3</b>
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields match that order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program; this comes in handy in programs that access data from the finance tables.
<b>Point # 4</b>
SELECT * FROM SBOOK INTO SBOOK_WA
  UP TO 1 ROWS
  WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
    WHERE CARRID = 'LH'.
  EXIT.
ENDSELECT.
<b>Point # 5</b>
If all primary key fields are supplied in the Where condition you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
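A minimal sketch of point 5 (SBOOK demo table; the key values are purely illustrative): every key field apart from the client is supplied, so SELECT SINGLE reads exactly one row with a single database call.
DATA SBOOK_WA TYPE SBOOK.

SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
  WHERE CARRID = 'LH'
    AND CONNID = '0400'
    AND FLDATE = '20240101'
    AND BOOKID = '00000001'.
IF SY-SUBRC = 0.
  WRITE: / SBOOK_WA-CUSTOMID.
ENDIF.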
<b>Select Statements           contd..  SQL Interface</b>
1.     Use column updates instead of single-row updates
to update your database tables.
2.     For all frequently used Select statements, try to use an index.
3.     Using buffered tables improves the performance considerably.
<b>Point # 1</b>
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
  SFLIGHT_WA-SEATSOCC =
    SFLIGHT_WA-SEATSOCC - 1.
  UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
       SET SEATSOCC = SEATSOCC - 1.
<b>Point # 2</b>
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
  WHERE CARRID = 'LH'
    AND CONNID = '0400'.
ENDSELECT.
The above code can be optimized by using the following code: supplying the client field MANDT explicitly lets the database use the full primary index, which starts with MANDT.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
  WHERE MANDT IN ( SELECT MANDT FROM T000 )
    AND CARRID = 'LH'
    AND CONNID = '0400'.
ENDSELECT.
<b>Point # 3</b>
Bypassing the buffer increases the network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
  BYPASSING BUFFER
  WHERE     SPRSL = 'D'
        AND ARBGB = '00'
        AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100  INTO T100_WA
  WHERE     SPRSL = 'D'
        AND ARBGB = '00'
        AND MSGNR = '999'.
<b>Select Statements       contd…           Aggregate Functions</b>
•     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
            Maxno = 0.
            Select * from zflight where airln = ‘LF’ and cntry = ‘IN’.
             Check zflight-fligh > maxno.
             Maxno = zflight-fligh.
            Endselect.
The  above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = ‘LF’ and cntry = ‘IN’.
<b>Select Statements    contd…For All Entries</b>
•     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
     The plus
•     Large amount of data
•     Mixing processing and reading of data
•     Fast internal reprocessing of data
•     Fast
     The Minus
•     Difficult to program/understand
•     Memory could be critical (use FREE or PACKAGE SIZE; see the note after the example below)
<u>Points to be considered when using FOR ALL ENTRIES</u>
•     Check that the driver table is not empty
•     Sorting the driver table
•     Removing duplicates from the driver table
Consider the following extract.
Loop at int_cntry.
       Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above can be optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
            Select * from zfligh appending table int_fligh
            For all entries in int_cntry
            Where cntry = int_cntry-cntry.
Endif.
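The memory caution above (use FREE or PACKAGE SIZE) can be addressed, for example, by releasing the driver table once the selection is done; a one-line sketch using the table name from the example:
FREE int_cntry.   " releases the memory held by the driver table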
<b>Select Statements    contd…  Select over more than one database table</b>
1.     It is better to use a view instead of nested SELECT statements.
2.     To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferable only if the primary key fields of the joined tables are available in the join/WHERE conditions; if the primary keys are not provided, the join itself takes time.
3.     Instead of using nested Select loops it is often better to use subqueries.
<b>Point # 1</b>
SELECT * FROM DD01L INTO DD01L_WA
  WHERE DOMNAME LIKE 'CHAR%'
        AND AS4LOCAL = 'A'.
  SELECT SINGLE * FROM DD01T INTO DD01T_WA
    WHERE   DOMNAME    = DD01L_WA-DOMNAME
        AND AS4LOCAL   = 'A'
        AND AS4VERS    = DD01L_WA-AS4VERS
        AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be optimized by extracting all the data from the view DD01V:
SELECT * FROM DD01V INTO  DD01V_WA
  WHERE DOMNAME LIKE 'CHAR%'
        AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
<b>Point # 2</b>
SELECT * FROM EKKO INTO EKKO_WA.
  SELECT * FROM EKAN INTO EKAN_WA
      WHERE EBELN = EKKO_WA-EBELN.
  ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
<b>Point # 3</b>
SELECT * FROM SPFLI
  INTO TABLE T_SPFLI
  WHERE CITYFROM = 'FRANKFURT'
    AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
    INTO SFLIGHT_WA
    FOR ALL ENTRIES IN T_SPFLI
    WHERE SEATSOCC < F~SEATSMAX
      AND CARRID = T_SPFLI-CARRID
      AND CONNID = T_SPFLI-CONNID
      AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
    WHERE SEATSOCC < F~SEATSMAX
      AND EXISTS ( SELECT * FROM SPFLI
                     WHERE CARRID = F~CARRID
                       AND CONNID = F~CONNID
                       AND CITYFROM = 'FRANKFURT'
                       AND CITYTO = 'NEW YORK' )
      AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
<b>Internal Tables</b>
1.     Table operations should be done using explicit work areas rather than via header lines.
2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4.     A binary search using secondary index takes considerably less time.
5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
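<b>Point # 1</b>
A minimal sketch for point 1 (SFLIGHT is used purely for illustration): the internal table is declared without a header line and processed through an explicit work area.
DATA: IT_FLIGHTS TYPE STANDARD TABLE OF SFLIGHT,
      WA_FLIGHT  TYPE SFLIGHT.

SELECT * FROM SFLIGHT INTO TABLE IT_FLIGHTS UP TO 10 ROWS.

LOOP AT IT_FLIGHTS INTO WA_FLIGHT.
  WRITE: / WA_FLIGHT-CARRID, WA_FLIGHT-CONNID.
ENDLOOP.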
<b>Point # 2</b>
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
IS MUCH FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
<b>Point # 3</b>
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
IS FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
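<b>Point # 4</b>
A minimal sketch for point 4 (ITAB and WA as in the examples above; the field CITY is illustrative): sort the internal table by the non-key field you search on, then read it with BINARY SEARCH on that field.
SORT ITAB BY CITY.                 " CITY stands for any secondary (non-key) field
READ TABLE ITAB INTO WA WITH KEY CITY = 'BERLIN' BINARY SEARCH.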
<b>Point # 5</b>
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
  CHECK WA-K = 'X'.
ENDLOOP.
<b>Point # 6</b>
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
<b>Point # 7</b>
Accessing the table lines directly through a field symbol avoids copying each line into a work area and calling MODIFY, so the program runs faster.
e.g.,
LOOP AT ITAB ASSIGNING <WA>.
  I = SY-TABIX MOD 2.
  IF I = 0.
    <WA>-FLAG = 'X'.
  ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
  I = SY-TABIX MOD 2.
  IF I = 0.
    WA-FLAG = 'X'.
    MODIFY ITAB FROM WA.
  ENDIF.
ENDLOOP.
<b>Point # 8</b>
LOOP AT ITAB1 INTO WA1.
  READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
  IF SY-SUBRC = 0.
    ADD: WA1-VAL1 TO WA2-VAL1,
         WA1-VAL2 TO WA2-VAL2.
    MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
  ELSE.
    INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
  ENDIF.
ENDLOOP.
The above code uses READ ... BINARY SEARCH for collect semantics; each lookup runs in O( log2( n ) ) time. It can be optimized further as follows:
LOOP AT ITAB1 INTO WA.
  COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
<b>Point # 9</b>
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
  APPEND WA TO ITAB2.
ENDLOOP.
<b>Point # 10</b>
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
  IF WA = PREV_LINE.
    DELETE ITAB.
  ELSE.
    PREV_LINE = WA.
  ENDIF.
ENDLOOP.
<b>Point # 11</b>
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
  DELETE ITAB INDEX 450.
ENDDO.
12.   Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than "LOOP-APPEND-ENDLOOP".
13.   Specify the sort key as restrictively as possible to run the program faster.
<b>Point # 12</b>
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
  APPEND WA TO ITAB2.
ENDLOOP.
<b>Point # 13</b>
"SORT ITAB BY K." makes the program run faster as compared to "SORT ITAB."
<b>Internal Tables    contd…  Hashed and Sorted Tables</b>
1.     For single read access, hashed tables are faster than sorted tables.
2.     For partial sequential access, sorted tables are faster than hashed tables.
<b>Point # 1</b>
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
  N = 4 * SY-INDEX.
  READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
  IF SY-SUBRC = 0.
  ENDIF.
ENDDO.
This runs faster for single read access than the same code on the sorted table:
DO 250 TIMES.
  N = 4 * SY-INDEX.
  READ TABLE STAB INTO WA WITH TABLE KEY K = N.
  IF SY-SUBRC = 0.
  ENDIF.
ENDDO.
<b>Point # 2</b>
Similarly, for partial sequential access STAB runs faster than HTAB:
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
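For reference, declarations like the following would back the HTAB and STAB examples above (a sketch; the line type with key field K is an assumption):
TYPES: BEGIN OF TY_LINE,
         K   TYPE I,
         VAL TYPE I,
       END OF TY_LINE.
DATA: HTAB TYPE HASHED TABLE OF TY_LINE WITH UNIQUE KEY K,
      STAB TYPE SORTED TABLE OF TY_LINE WITH NON-UNIQUE KEY K.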
<b>Reward if useful</b>

Similar Messages

  • Performance Tuning for OBIEE Reports

    Hi Experts,
I had a requirement for which I ended up building a snowflake model in the Physical layer, i.e. one dimension table with three snowflake tables (materialized views).
The key point is that the dimension table is used in most of the OOTB reports.
So all the reports use the other three snowflake tables in the join conditions, due to which the reports take much longer than before, around 10 minutes.
Can anyone suggest good performance tuning tips to tune the reports?
I created some indexes on the materialized view columns and on the dimension table columns.
I created the materialized views with cache enabled, refreshing only once in 24 hours, etc.
Is there anything else I can do to improve performance, or should I consider redesigning the Physical layer without the snowflake?
    Please Provide valuable suggestions and comments
    Thank You
    Kumar

    Kumar,
Most of the performance tuning should be done at the back end, so calculate all the aggregates in the repository itself and create a fast refresh for the MVs. You can also schedule an iBot to run the report every hour or so, so that the report data is cached; when the user runs the report, the BI Server then extracts the data from the cache.
    Hope that helps
    ~Srix

  • Can anyone plz tell me the steps for performance tuning.

    hello friends
    what is performance tuning?
    can anyone plz tell me the steps for performance tuning.

    Hi Kishore, this will help u.
    Following are the different tools provided by SAP for performance analysis of an ABAP object
    Run time analysis transaction SE30
    This transaction gives all the analysis of an ABAP program with respect to the database and the non-database processing.
    SQL Trace transaction ST05
    The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter introducing the trace list.
    The trace list contains different SQL statements simultaneously related to the one SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3s performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on the SPFLI table in our test program is mapped to a sequence PREPARE-OPEN-FETCH of physical calls in an Oracle environment.
    The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
    To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.
    Need for performance tuning
    In this world of SAP programming, ABAP is the universal language. In most of the projects, the focus is on getting a team of ABAP programmers as soon as possible, handing over the technical specifications to them and asking them to churn out the ABAP programs within the “given deadlines”.
Often due to this pressure of schedules and deliveries, the main focus of making an efficient program takes a back seat. An efficient ABAP program is one which delivers the required output to the user in a finite time as per the complexity of the program, rather than hearing the comment “I put the program to run, have my lunch and come back to check the results”.
    Leaving aside the hyperbole, a performance optimized ABAP program saves the time of the end user, thus increasing the productivity of the user, and in turn keeping the user and the management happy.
    This tutorial focuses on presenting various performance tuning tips and tricks to make the ABAP programs efficient in doing their work. This tutorial also assumes that the reader is well versed in all the concepts and syntax of ABAP programming.
    Use of selection criteria
    Instead of selecting all the data and doing the processing during the selection, it is advisable to restrict the data to the selection criteria itself, rather than filtering it out using the ABAP code.
    Not recommended
    Select * from zflight.
    Check : zflight-airln = ‘LF’ and zflight-fligh = ‘BW222’.
    Endselect.
    Recommended
Select * from zflight where airln = ‘LF’ and fligh = ‘BW222’.
    Endselect.
One more point to note here is SELECT *. Often this is a lazy coding practice. When a programmer writes SELECT * even though only one or two fields are needed, it can significantly slow the program and put unnecessary load on the entire system: the application server sends the request to the database server, and the database server has to pass the entire structure for each row back to the application server. This consumes both CPU and network resources, especially for large structures.
    Thus it is advisable to select only those fields that are needed, so that the database server passes only a small amount of data back.
It is also advisable to avoid selecting the data fields into local variables one row at a time, as this too puts unnecessary load on the server. Instead, select the fields directly into an internal table, as sketched below.
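A minimal sketch of that advice (the zflight table and its fields are taken from the surrounding examples and are assumptions): select only the needed fields straight into an internal table.
Data lt_flights type standard table of zflight.
Select airln fligh cntry from zflight
  into corresponding fields of table lt_flights
  where airln = 'LF' and cntry = 'IN'.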
    Use of aggregate functions
    Use the already provided aggregate functions, instead of finding out the minimum/maximum values using ABAP code.
    Not recommended
    Maxnu = 0.
    Select * from zflight where airln = ‘LF’ and cntry = ‘IN’.
    Check zflight-fligh > maxnu.
    Maxnu = zflight-fligh.
    Endselect.
    Recommended
    Select max( fligh ) from zflight into maxnu where airln = ‘LF’ and cntry = ‘IN’.
    The other aggregate functions that can be used are min (to find the minimum value), avg (to find the average of a Data interval), sum (to add up a data interval) and count (counting the lines in a data selection).
    Use of Views instead of base tables
    Many times ABAP programmers deal with base tables and nested selects. Instead it is always advisable to see whether there is any view provided by SAP on those base tables, so that the data can be filtered out directly, rather than specially coding for it.
    Not recommended
    Select * from zcntry where cntry like ‘IN%’.
    Select single * from zflight where cntry = zcntry-cntry and airln = ‘LF’.
    Endselect.
    Recommended
    Select * from zcnfl where cntry like ‘IN%’ and airln = ‘LF’.
    Endselect.
    Check this links
    http://www.sapdevelopment.co.uk/perform/performhome.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    kindly reward if found helpful.
    cheers,
    Hema.

  • IFS performance tuning

I have Oracle 9iFS set up on a Windows box. It worked well initially; after I batch loaded about 100,000 documents into it, it became quite slow. Opening a folder through an SMB client takes tens of seconds. Does anyone have some performance tuning tips? Or has anyone had a similar situation with a large number of documents?
    Thanks

If running analyze doesn't help the problem, please post a new thread, and we'll try to help you. When you do, please answer these questions:
    - How many documents per folder?
    - Can you detect whether your iFS Java processes or your Oracle processes are the bottleneck?
    + If the bottleneck is an 9iFS Java process, then go into the Enterprise Manager tool (configured with iFS), and bring up the Node Performance Dialog (see page 2-25 of the 9iFS Setup and Administration Guide).
    + For your server, bring up the information described in Figure 2-20 of the 9iFS Setup and Admin Guide. "Service Details: Committed Data Cache".
    - Adjust these settings higher and see if your performance improves.
    Alan

  • Performance Tuning in IR

    Hello All,
We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database is huge, and some tables of the relational database have over 3-4 crore rows each. We created the .oce connection file using the 'Oracle Net' option. The Oracle client version is 10g. We earlier created pivots, charts and reports in those .bqy files but had to delete those wherever possible to decrease the processing time for generating those reports.
But deleting those from the file and retaining just the result section (the bare minimum part of the file) has not yet fully solved the performance issue. Even now, for some reports the system gives the error message 'Out of Memory' at processing time. The client PCs from which the reports are generated have 1 - 1.5 GB of memory. For some reports it even takes 1-2 hours to save the results after processing. In some cases the PCs hang during processing. When we extract the query of those reports in SQL and run it in TOAD/SQL*Plus, it does not take nearly as much time as in IR.
Would you please help us out with the aforesaid issue ASAP? Please share your views/tips/suggestions etc. with respect to performance tuning for IR. All replies would be highly appreciated.
    Regards,
    Raj

    SQL + & Toad are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time spent by IR manipulating results into objects the user isn't even asking for.
When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR will make multiple passes to apply the sorts, filters and computed items existing in the results section. For some unknown reason, those three steps are performed more inefficiently than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted will IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate your DAS on your server - it can only transfer 2 GB before it dies and restarts, and your requested document hangs. You can replicate the DAS multiple times and should do so to make sure there are enough resources available for any concurrent users to make the necessary requests and have data delivered to them.
    To tune your bqy documents...
    1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line, need to be put on your new staging table.
    2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on the table that forces 0 rows in the table until the report is invoked. Hyperion will paginate every report BEFORE it even tells the user it has results, so this will prevent the user from waiting an hour while 1000s of pages are paginated across multiple reports.
4) Halt any object rendering until requested. Same as above - create a system programmatically for the user to tell the bqy what they want, so they are not waiting forever for a pivot and 2 reports to compile and paginate when they want just a chart.
5) Save compressed documents.
    6) Unless this document can be run as a job, there should be NO results stored with the document but if you do save results with the document, store the calculations too so you at least don't have to wait for them to pass again.
    7) Remove all duplicate images and keep the image file size small.
    Hope this helps!
    PS: I forgot to mention - aside from results sections, in documents where the results are NOT saved, additional table sections take up very, very, very small bits of file size and, as long as there are not excessively larger images the same is true for Reports, Pivots and Charts. Additionally, the impact of file size only matters when the user is requesting the document. The file size is never an issue when the user is processing the report because it has already been delivered to them and cached (in workspace and in the web client)

  • Performance tuning in Smart form

    Hi,
How do I do performance tuning at the Smart Form level?
If any tips are available, kindly send them to me.
    Thanks&Regards,
    Maya

    Hi,
If you are using a customized print program to call the Smart Form, it is better to fetch as much of the data as possible within it and pass it using the form interface / function module interface.
Then, wherever ABAP code is used, follow the usual ABAP performance tuning options, like avoiding MOVE-CORRESPONDING and INTO CORRESPONDING FIELDS, using BINARY SEARCH for READ statements, etc. A sketch of the call pattern follows.
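A minimal sketch of that pattern (the form name ZMY_SMARTFORM and the interface parameter IT_ITEMS are hypothetical; SFLIGHT stands in for the real data): the print program fetches the data once and hands it over through the generated function module.
DATA: LV_FM_NAME TYPE RS38L_FNAM,
      LT_ITEMS   TYPE STANDARD TABLE OF SFLIGHT.

* Fetch the data once here, in the print program.
SELECT * FROM SFLIGHT INTO TABLE LT_ITEMS UP TO 100 ROWS.

CALL FUNCTION 'SSF_FUNCTION_MODULE_NAME'
  EXPORTING
    FORMNAME           = 'ZMY_SMARTFORM'    " hypothetical Smart Form name
  IMPORTING
    FM_NAME            = LV_FM_NAME
  EXCEPTIONS
    NO_FORM            = 1
    NO_FUNCTION_MODULE = 2
    OTHERS             = 3.

IF SY-SUBRC = 0.
  CALL FUNCTION LV_FM_NAME
    TABLES
      IT_ITEMS = LT_ITEMS.                  " hypothetical interface parameter
ENDIF.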
    Regards,
    Bharat.

  • VAL_FIELD selection to determine RSDRI or MDX query: performance tuning

According to one of the HTGs I am working on performance tuning. One of the tips is to try to query base members by using BAS(xxx) in the expansion pane of a BPC report.
I did so and found an interesting issue in one of the COPA reports.
With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
I checked that DIRECT_INCOME has three members, GROSS_PROFIT, SGA and REV_OTHER; none of them has any formulas.
Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT), BAS(SGA), BAS(REV_OTHER), and I got an RSDRI query again.
So in summary,
    BAS(PARENT) =>MDX query.
    BAS(CHILD1)=>RSDRI query.
    BAS(CHILD2)=>RSDRI query.
    BAS(CHILD3)=>RSDRI query.
    BAS(CHILD1),BAS(CHILD2),BAS(CHILD3)=>RSDRI query
I know VAL_FIELD is an SAP reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
Interestingly, I can repeat this behavior in my system. My intention is to always get an RSDRI query.
    George

    Ok - it turns out that Crystal Reports disregards BEx Query variables when put in the Default Values section of the filter selection. 
I had mine there, and even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube as if there was no restriction on Plant, for instance.
I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that the "Variable located in Default Values will be ignored in the MDX Access".
    After moving the variables to the Characteristic Restrictions my report worked as expected.  The slow response time is still an issue but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting less than 2k.
    Hope this helps someone else

  • Reg: Process Chain, query performance tuning steps

    Hi All,
I came across a question like this: there is a process chain of 20 processes, of which 5 are completed; at the 6th step an error occurred and it cannot be rectified. I should start the chain again from the 7th step. If I go to a particular step I can run that particular step, but how can I start the entire chain again from step 7? I know that I need to use a function module but I don't know the name of the FM. Please, somebody help me out.
    Please let me know the steps involved in query performance tuning and aggregate tuning.
    Thanks & Regards
    Omkar.K

    Hi,
    Process Chain
    Method 1 (when it fails in a step/request)
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    How is it possible to restart a process chain at a failed step/request?
    Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
    You need to set the failed request/step to green in the database as well as you need to raise the event that will force the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
    Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
    Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
    Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
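A minimal ABAP sketch of that call (the parameter mapping is exactly the one listed above; LS_LOG is assumed to already hold the RSPCPROCESSLOG entry of the failed step):
DATA LS_LOG TYPE RSPCPROCESSLOG.

* LS_LOG must be filled from the RSPCPROCESSLOG entry found via SE16 as described above.
CALL FUNCTION 'RSPC_PROCESS_FINISH'
  EXPORTING
    I_LOGID    = LS_LOG-LOG_ID
    I_TYPE     = LS_LOG-TYPE
    I_VARIANT  = LS_LOG-VARIANTE
    I_INSTANCE = LS_LOG-INSTANCE
    I_STATE    = 'G'.        " 'G' sets the step to green, as explained above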
    Query performance tuning
    General tips
    Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
    1. Avoid using too many nav. attr
    2. Avoid RKF and CKF
3. Avoid too many characteristics in the rows.
    By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from the left side menu > system load history and distribution for a particular day > check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
1.  Use different parameters in ST03 to see the two important figures: the aggregation ratio and the ratio of records transferred to the front end versus records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
3. The +/- signs are the valuation of the aggregate design and usage. '++' means that its compression is good and it is also accessed a lot (in effect, performance is good); if you check its compression ratio, it should be good. '--' means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs there are, the more useful the aggregate is and the more queries it satisfies. The greater the number of minus signs, the worse the evaluation of the aggregate; the larger the number of plus signs, the better the evaluation of the aggregate.
If it shows "-----", it means the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also your query performance can depend upon criteria and since you have given selection only on one infoprovider...just check if you are selecting huge amount of data in the report
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing the BW Statistics Business Content: you need to install it, feed it with data, and then analyze via the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    use tool RSDDK_CHECK_AGGREGATE in se38 to check for the corrupt aggregates
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Thanks,
    JituK

  • CF Performance Tuning Help

I moved our internal application from a P3 500 MHz CPU 1U server to a dual-core Xeon 1.6 GHz server.
The newer box has a larger HD, more RAM, and a beefier CPU, yet my application is running at the same speed or only a smidgen faster than on the old server. Oh, I went from a Win2k, IIS 5.0 box to Windows Server 2003 running IIS 6 and MySQL 5.0.
I did a Windows Performance Monitor look-see. I added all the CF services and watched the graphs do their thing. I noticed that after I ran a query on my application, the jrun service Avg. Req Time is pinned at 100%. Even several minutes after inactivity on the box, this process is still at 100%. Anyone know if this is causing my performance woes? Anyone know what else I can look at to increase the speed of the app (barring a serious code rebuild)?
    Thanks!
    chris

Anyone know what else I can look at to increase the speed of the app (barring a serious code rebuild)?
There are some tweaks and tips, but I will let more knowledgeable folks speak to these. But I am afraid your problem may be a code issue. From the brief description I would be concerned there is some kind of memory leak happening in your code, and until you track it down and plug it up, no amount of memory or performance tuning is going to do you much good.

  • Performance tuning help

    hello friends,
I am supposed to use ST05 and SE30 to do performance tuning, and also to perform cost estimation.
Can someone please help me understand? I am not that good at reading large paragraphs, so please do not just give me links to help.sap.
    thank you.

    Hi
SE30 is the runtime analysis.
It will give you a detailed graph about your program:
ABAP time, database time, application time...
ST05 is the SQL trace, where it shows the individual SELECT statements in your program
and whether a proper index is being used or not.
Other performance tips:
Check for any SELECT statements inside loops and replace them with READ statements (see the sketch below).
Use BINARY SEARCH in READ statements.
Use a proper index in the WHERE condition of the SELECT statement.
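A minimal sketch of the first tip (SFLIGHT and SBOOK demo tables are used for illustration): fetch the dependent data once with FOR ALL ENTRIES, sort it, and use READ ... BINARY SEARCH inside the loop instead of a SELECT.
DATA: LT_FLIGHTS  TYPE STANDARD TABLE OF SFLIGHT,
      LT_BOOKINGS TYPE STANDARD TABLE OF SBOOK,
      LS_FLIGHT   TYPE SFLIGHT,
      LS_BOOKING  TYPE SBOOK.

SELECT * FROM SFLIGHT INTO TABLE LT_FLIGHTS UP TO 50 ROWS.

IF LT_FLIGHTS[] IS NOT INITIAL.
  SELECT * FROM SBOOK INTO TABLE LT_BOOKINGS
    FOR ALL ENTRIES IN LT_FLIGHTS
    WHERE CARRID = LT_FLIGHTS-CARRID
      AND CONNID = LT_FLIGHTS-CONNID
      AND FLDATE = LT_FLIGHTS-FLDATE.
  SORT LT_BOOKINGS BY CARRID CONNID FLDATE.
ENDIF.

LOOP AT LT_FLIGHTS INTO LS_FLIGHT.
  READ TABLE LT_BOOKINGS INTO LS_BOOKING
       WITH KEY CARRID = LS_FLIGHT-CARRID
                CONNID = LS_FLIGHT-CONNID
                FLDATE = LS_FLIGHT-FLDATE
       BINARY SEARCH.
  IF SY-SUBRC = 0.
    " process the first booking found for this flight
  ENDIF.
ENDLOOP.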
To use ST05: go to ST05, switch the trace on, execute your program, return to ST05, switch the trace off, and then choose 'List trace'.
You get summary details of the SELECT queries, highlighted in pink or another color.
    Thanks

  • Performance tuning content server

    Hi,
I would like to profile my UCM-based application. I am using Oracle Content Server version 10g and have custom Java components for my business functionality. I would like to profile this application to identify performance bottlenecks. Is there any profiling tool on the market that can be plugged into / integrated with Oracle Content Server (UCM 10g)?
    What are the general tips for performance tuning custom components?
    Thanks in advance
    Siva

    Some information can be found in the following document on Metalink :
    Use VisualVM Tools to Troubleshoot UCM and Observe Performance Problems Occurring in Java Virtual Machine (Doc ID 950621.1)

  • Performance Tuning for BAM 11G

    Hi All
Can anyone guide me to any documents or any tips related to performance tuning for BAM 11G on Linux?

It would help to know if you have a specific issue. There are a number of tweaks all the way from the DB to the browser.
A few key things to follow:
1. Make sure you create indexes on the DO. If there is too much old data in the DO that is not useful, delete it periodically. Similar to relational database indexes, defining indexes in Oracle BAM creates and maintains an ordered list of data object elements for fast retrieval.
2. Ensure that IE is set up to do automatic caching. This will help reduce server round trips.
    3. Tune DB performance. This would typically require DBA. Identify the SQL statements most likely to be causing the waits by looking at
    the drilldown Top SQL Statements Ordered by Wait Time. Use SQL Analyze, EXPLAIN PLAN, or the tkprof utility to tune the queries that were identified.
    Check the Dataobject tables involved in the query for missing indexes.
    4. Use batching (this is on by default for most cases)
    5. Fast network
    6. Use profilers to look at machine load/cpu usage and distribute components on different boxes if needed.
7. Use better server AND client hardware. BAM dashboards are heavy users of Ajax/JavaScript logic on the client.

  • Performance tuning techniques

    I am looking to compile a list of the major performance tuning techniques that can be implemented in an ABAP program. 
    Appreciate any feedback
    J

    HI,
    chk this.
    http://www.erpgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Performance tuning for Data Selection Statement 
    For all entries
    The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of 
    entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the 
    length of the WHERE clause. 
    The plus
    Large amount of data 
    Mixing processing and reading of data 
    Fast internal reprocessing of data 
    Fast 
    The Minus
    Difficult to program/understand 
    Memory could be critical (use FREE or PACKAGE size) 
    Some steps that might make FOR ALL ENTRIES more efficient:
Removing duplicates from the driver table 
    Sorting the driver table 
If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
    FOR ALL ENTRIES IN i_tab
      WHERE mykey >= i_tab-low and
            mykey <= i_tab-high.
    Nested selects
    The plus:
    Small amount of data 
    Mixing processing and reading of data 
    Easy to code - and understand 
    The minus:
    Large amount of data 
    when mixed processing isn’t needed 
    Performance killer no. 1
    Select using JOINS
    The plus
    Very large amount of data 
    Similar to Nested selects - when the accesses are planned by the programmer 
    In some cases the fastest 
    Not so memory critical 
    The minus
    Very difficult to program/understand 
    Mixing processing and reading of data not possible 
    Use the selection criteria
    SELECT * FROM SBOOK.                   
      CHECK: SBOOK-CARRID = 'LH' AND       
                      SBOOK-CONNID = '0400'.        
    ENDSELECT.                             
    SELECT * FROM SBOOK                     
      WHERE CARRID = 'LH' AND               
            CONNID = '0400'.                
    ENDSELECT.                              
    Use the aggregated functions
    C4A = '000'.              
    SELECT * FROM T100        
      WHERE SPRSL = 'D' AND   
            ARBGB = '00'.     
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.       
    ENDSELECT.                
    SELECT MAX( MSGNR ) FROM T100 INTO C4A 
    WHERE SPRSL = 'D' AND                
           ARBGB = '00'.                  
    Select with view
    SELECT * FROM DD01L                    
      WHERE DOMNAME LIKE 'CHAR%'           
            AND AS4LOCAL = 'A'.            
      SELECT SINGLE * FROM DD01T           
        WHERE   DOMNAME    = DD01L-DOMNAME 
            AND AS4LOCAL   = 'A'           
            AND AS4VERS    = DD01L-AS4VERS 
            AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    SELECT * FROM DD01V                    
    WHERE DOMNAME LIKE 'CHAR%'           
           AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    Select with index support
    SELECT * FROM T100            
    WHERE     ARBGB = '00'      
           AND MSGNR = '999'.    
    ENDSELECT.                    
    SELECT * FROM T002.             
      SELECT * FROM T100            
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'      
              AND MSGNR = '999'.    
      ENDSELECT.                    
    ENDSELECT.                      
    Select … Into table
    REFRESH X006.                 
    SELECT * FROM T006 INTO X006. 
      APPEND X006.                
ENDSELECT.
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    SELECT * FROM DD01L              
      WHERE DOMNAME LIKE 'CHAR%'     
            AND AS4LOCAL = 'A'.      
ENDSELECT.
    SELECT DOMNAME FROM DD01L    
    INTO DD01L-DOMNAME         
    WHERE DOMNAME LIKE 'CHAR%' 
           AND AS4LOCAL = 'A'.  
ENDSELECT.
    Key access to multiple lines
    LOOP AT TAB.          
    CHECK TAB-K = KVAL. 
    ENDLOOP.              
    LOOP AT TAB WHERE K = KVAL.     
    ENDLOOP.                        
    Copying internal tables
    REFRESH TAB_DEST.              
    LOOP AT TAB_SRC INTO TAB_DEST. 
      APPEND TAB_DEST.             
    ENDLOOP.                       
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    LOOP AT TAB.             
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.      
      ENDIF.                 
      MODIFY TAB.            
    ENDLOOP.                 
    TAB-FLAG = 'X'.                  
    MODIFY TAB TRANSPORTING FLAG     
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    DO 101 TIMES.               
      DELETE TAB_DEST INDEX 450.
    ENDDO.                      
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary
    READ TABLE TAB WITH KEY K = 'X'.
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
    Comparison of internal tables
    DESCRIBE TABLE: TAB1 LINES L1,      
                    TAB2 LINES L2.      
    IF L1 <> L2.                        
      TAB_DIFFERENT = 'X'.              
    ELSE.                               
      TAB_DIFFERENT = SPACE.            
      LOOP AT TAB1.                     
        READ TABLE TAB2 INDEX SY-TABIX. 
        IF TAB1 <> TAB2.                
          TAB_DIFFERENT = 'X'. EXIT.    
        ENDIF.                          
      ENDLOOP.                          
    ENDIF.                              
    IF TAB_DIFFERENT = SPACE.           
    ENDIF.                              
    IF TAB1[] = TAB2[].  
    ENDIF.               
    Modify selected components
    LOOP AT TAB.           
    TAB-DATE = SY-DATUM. 
    MODIFY TAB.          
    ENDLOOP.               
    WA-DATE = SY-DATUM.                    
    LOOP AT TAB.                           
    MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.                               
    Appending two internal tables
    LOOP AT TAB_SRC.              
      APPEND TAB_SRC TO TAB_DEST. 
ENDLOOP.
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    LOOP AT TAB_DEST WHERE K = KVAL. 
      DELETE TAB_DEST.               
ENDLOOP.
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
    The runtime analysis (SE30)
    SQL Trace (ST05)
    Tips and Tricks tool
    The performance database
    Optimizing the load of the database
    Using table buffering
Using buffered tables improves the performance considerably. Note that in some cases a statement cannot be used with a buffered table, so when using these statements the buffer will be bypassed. These statements are:
SELECT DISTINCT 
ORDER BY / GROUP BY / HAVING clause 
Any WHERE clause that contains a subquery or IS NULL expression 
JOINs 
A SELECT... FOR UPDATE 
If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition to the SELECT clause.
Use the ABAP SORT Clause Instead of ORDER BY
The ORDER BY clause is executed on the database server while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but are sorting by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets it might not be a feasible solution and you would want to let the database server sort it.
Avoid the SELECT DISTINCT Statement
As with the ORDER BY clause it could be better to avoid using SELECT DISTINCT, if some of the fields are not part of an index. Instead use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows, as sketched below.
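A minimal sketch of that last point (SPFLI is used purely for illustration): read the column without DISTINCT and remove the duplicates on the application server.
DATA LT_CITIES TYPE STANDARD TABLE OF SPFLI-CITYFROM.

SELECT CITYFROM FROM SPFLI INTO TABLE LT_CITIES.
SORT LT_CITIES.
DELETE ADJACENT DUPLICATES FROM LT_CITIES.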
    Regds
    Anver
if helped pls mark points

  • SAP Performance tuning

What measures should we take if we need to performance-tune the SAP server? It is getting too slow. It would be helpful if someone could send me some links about SAP performance tuning, not ABAP performance tuning.
    Regards,
    Subhasish

    The SAP Servers can be slow because of many reasons:
1. Table consistency: the tables should be consistent. A consistency check should be done regularly; it is better to use DB13 so that it can be scheduled on appropriate days.
2. One major reason that I have found in recent times is the amount of authorizations that might be available to the users of the system. If all the users have a high volume of authorization objects in their user master buffer, the response time becomes sluggish. This is true for most of the dev boxes that allow almost any amount of authorization for users and have a low RAM size.
3. You might also be interested in checking your network consistency. If you are accessing your servers from home with a wireless connection of 16 kbps, you might want to configure your logon pad for such a crappy connection.
Most of the BASIS performance tuning tasks such as ST04, STAT, ST02 and DB12 should be carried out too, just to investigate further.
    Hope these were a few helpful tips.
    Peace be with everybody
    -Saurav

  • Performance Tuning - Suggestions

    Hi,
I have an ABAP (interactive list) program that times out in PRD very often. The ABAP runtime is about 99%; the DB time is less than 1%. All the SELECT statements have the table index in place. It is actually processing all the production orders (released but not confirmed/closed). Please let me know if you have any suggestions.
    Appreciate Your Help.
    Thanks,
    Kannan.

    Hi
1) Don't use nested SELECT statements.
2) If possible, use the FOR ALL ENTRIES addition.
3) In the WHERE clause, make sure you supply all the primary key fields.
4) Use an index for the selection criteria.
5) You can also use inner joins.
6) You can try to put the data from the first SELECT statement into an internal table and then, to select the data from the second table, use FOR ALL ENTRIES IN.
7) Use the runtime analysis (SE30) and SQL Trace (ST05) to identify the performance and also to identify where the load is heavy, so that you can change the code accordingly.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d0db4c9-0e01-0010-b68f-9b1408d5f234
ABAP performance depends upon various factors and is divided into three parts:
    1. Database
    2. ABAP
    3. System
Run any program using SE30 (performance analysis); to improve performance, refer to the Tips and Tricks section of SE30. Always remember that ABAP performance is improved when there is the least load on the database.
You can get an interactive graph in SE30 regarding this, together with a file.
Also, if you want to find the runtime of parts of the code, use:
    Switch on RTA Dynamically within ABAP Code
*To turn runtime analysis on within ABAP code insert the following statement
SET RUN TIME ANALYZER ON.
*To turn runtime analysis off within ABAP code insert the following statement
    SET RUN TIME ANALYZER OFF.
Always check that the driver internal table is not empty when using FOR ALL ENTRIES.
Avoid FOR ALL ENTRIES in JOINs.
Try to avoid joins and use FOR ALL ENTRIES.
Try to restrict joins to 1 level only, i.e. only 2 tables.
Avoid using SELECT *.
Avoid having multiple SELECTs from the same table in the same object.
Try to minimize the number of variables to save memory.
The sequence of fields in the WHERE clause must be as per the primary/secondary index (if any).
Avoid creation of indexes as far as possible.
Avoid operators like <>, >, < and LIKE '%' in WHERE clause conditions.
Avoid SELECT/SELECT SINGLE statements in loops.
Try to use BINARY SEARCH in READ TABLE. Ensure the table is sorted before using BINARY SEARCH.
Avoid using aggregate functions (SUM, MAX etc.) in SELECTs (GROUP BY, HAVING).
Avoid using ORDER BY in SELECTs.
Avoid nested SELECTs.
Avoid nested loops over internal tables.
Try to use FIELD-SYMBOLS.
Try to avoid INTO CORRESPONDING FIELDS OF.
Avoid using SELECT DISTINCT; use DELETE ADJACENT DUPLICATES instead.
    Check the following Links
    http://www.sapgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    check the below link
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    See the following link if it's any help:
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Check also http://service.sap.com/performance
    and
    books like
    http://www.sap-press.com/product.cfm?account=&product=H951
    http://www.sap-press.com/product.cfm?account=&product=H973
    http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Performance tuning for Data Selection Statement
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    CATT - Computer Aided Testing Too
    http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
    Test Workbench
    http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
    Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    ECATT - Extended Computer Aided testing tool.
    http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
    You can go to the transaction SE30 to have the runtime analysis of your program.Also try the transaction SCI , which is SAP Code Inspector.
    Regards
    Anji
