Performance Tuning - Suggestions

Hi,
I have an ABAP (interactive list) program that times out in PRD very often. The ABAP runtime is about 99% of the total and the DB time is less than 1%. All the SELECT statements have table indexes in place. The program processes all production orders that are released but not yet confirmed/closed. Please let me know if you have any suggestions.
I appreciate your help.
Thanks,
Kannan.

Hi
1) Don't use nested SELECT statements.
2) If possible, use FOR ALL ENTRIES in addition.
3) In the WHERE clause, make sure you give all the primary key fields.
4) Make sure an index supports the selection criteria.
5) You can also use inner joins.
6) You can put the data from the first SELECT statement into an internal table and then select the data from the second table using FOR ALL ENTRIES (see the sketch below).
7) Use runtime analysis (SE30) and SQL Trace (ST05) to measure performance and to identify where the load is heavy, so that you can change the code accordingly.
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d0db4c9-0e01-0010-b68f-9b1408d5f234
ABAP performance depends upon various factors and is divided into three parts:
1. Database
2. ABAP
3. System
Run any program using SE30 (runtime analysis). To improve performance, refer to the Tips and Tricks section of SE30. Always remember that ABAP performance is improved when there is the least load on the database.
You can get an interactive graph about this in SE30, together with a file.
Also, if you want to find the runtime of individual parts of the code, use:
Switch on RTA Dynamically within ABAP Code
* To turn runtime analysis on within ABAP code, insert the following statement:
SET RUN TIME ANALYZER ON.
* To turn runtime analysis off within ABAP code, insert the following statement:
SET RUN TIME ANALYZER OFF.
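For example, a minimal sketch that brackets the section to be measured (READ_ORDERS is a made-up routine name):
SET RUN TIME ANALYZER ON.
PERFORM read_orders.   " hypothetical routine whose runtime is measured
SET RUN TIME ANALYZER OFF.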
Always check that the driver internal table is not empty when using FOR ALL ENTRIES.
Avoid FOR ALL ENTRIES in JOINs.
Try to avoid joins and use FOR ALL ENTRIES instead.
Try to restrict joins to one level only, i.e. only two tables.
Avoid using SELECT *.
Avoid having multiple SELECTs from the same table in the same object.
Try to minimize the number of variables to save memory.
The sequence of fields in the WHERE clause must follow the primary/secondary index (if any).
Avoid creating new indexes as far as possible.
Avoid operators like <>, >, < and LIKE '%' in WHERE clause conditions.
Avoid SELECT/SELECT SINGLE statements in loops.
Try to use BINARY SEARCH in READ TABLE, and ensure the table is sorted before using it (see the sketch after this list).
Avoid aggregate functions (SUM, MAX, etc.) in SELECTs (GROUP BY, HAVING).
Avoid ORDER BY in SELECTs.
Avoid nested SELECTs.
Avoid nested loops over internal tables.
Try to use field symbols.
Try to avoid INTO CORRESPONDING FIELDS OF.
Avoid SELECT DISTINCT; use SORT and DELETE ADJACENT DUPLICATES instead.
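As a small sketch of the SORT + BINARY SEARCH tip above (VBAK is used purely as an illustration; LV_VBELN is an assumed input):
DATA: IT_VBAK  TYPE STANDARD TABLE OF VBAK,
      WA_VBAK  TYPE VBAK,
      LV_VBELN TYPE VBAK-VBELN.
* Sort once, then every READ runs in O(log n) instead of O(n).
SORT IT_VBAK BY VBELN.
READ TABLE IT_VBAK INTO WA_VBAK
     WITH KEY VBELN = LV_VBELN BINARY SEARCH.
IF SY-SUBRC = 0.
* entry found
ENDIF.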
Check the following links:
http://www.sapgenie.com/abap/performance.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Also check the link below:
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
See the following link if it's any help:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Check also http://service.sap.com/performance
and
books like
http://www.sap-press.com/product.cfm?account=&product=H951
http://www.sap-press.com/product.cfm?account=&product=H973
http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
CATT - Computer Aided Test Tool
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
eCATT - Extended Computer Aided Test Tool
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
You can use transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, the SAP Code Inspector.
Regards
Anji

Similar Messages

  • IIR 2.8.0.7 performance tuning suggestions / documents for Oracle 11?

    Would we have any hints or white papers that would support a customer in IIR matching server tuning for initial load performance,
    beyond the Siebel specific
    Performance Tuning Guidelines for Siebel CRM Application on Oracle Database (Doc ID 781927.1)
    which does NOT generate any statistics on the Informatica Schema?
    Customer is starting a production data load into Siebel UCM of over 5 million customer records. Their current bottleneck seems to be IIR queries and high IIR host resource usage.
    This would be for Siebel 8.1.1.4 [21225] on Oracle 11.1.0.6; I currently do not know whether ACR 475 with its EBC access is used or not. I'd be looking for any performance tuning suggestions on the Informatica and database side - I have not found anything helpful in the Informatica knowledge base, and in the 2.8.0.7 docs the only performance tuning suggestions seem to be about IDT loads.
    Obviously performance can be influenced by the matching fields used, but this customer will be looking for performance tuning suggestions that keep the matching fields and thresholds they got approved by the business side.
    Any documents we could share?

    Hi Varad,
    Well, Oracle Metalink actually got it fixed for me! Got to give them credit.
    I ran the first bit of SQL they sent me and it fixed the issue, so I think it was only cosmetic. I've enclosed all of their response in case others find themselves in a similar situation.
    Thanks again for YOUR help too.
    Perry
    ==========================================================
    1. Do you see any invalid objects in the database?
    2. Please update the SR with the output of running the commands below
    1. Connect as SYS.
    set serveroutput on (This will let you see if any check fails)
    exec validate_apex; (If your version is 2.0 or above)
    If all are fine, the registry should be changed to VALID for APEX and you should see the following
    message:
    PL/SQL procedure successfully completed.
    Note : The procedure validate_apex should always complete successfully.
    With serveroutput on, it should show why Application Express is listed as invalid. It should show output like:
    FAILED CHECK FOR <<object_name>>
    Or
    FAILED EXISTENCE CHECK FOR <<object_name>>

  • Performance tuning in PL/SQL code

    Hi,
    I am working on already-existing PL/SQL code, written by someone else, for validation and conversion of data from a temporary table to a base table. It usually has 3.5 million rows, and the procedure takes around 2.5 - 3 hours to complete.
    Can I enhance the PL/SQL code for better performance? Or is it OK for it to take this long to process that many rows?
    Thanks!
    Yogini

    Can I enhance the PL/SQL code for better performance ? Probably you can enhance it.
    or, is this OK to take so long to process these many rows? It should take a few minutes, not several hours.
    But please provide some more details like your database version etc.
    I suggest you TRACE the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent; you'll identify your 'problem statements' very quickly (after you or your DBA have TKPROF'ed the trace file).
    SQL> alter session set events '10046 trace name context forever, level 12';
    SQL> execute your PL/SQL code here
    SQL> exit
    This will give you a .trc file in your udump directory on the server.
    http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
    Also this informative thread can give you more ideas:
    HOW TO: Post a SQL statement tuning request - template posting
    as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
    and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29

  • Performance tuning in t

    hi,
    I have to do performance tuning for one program. It takes 67,000 seconds in background execution and 1,000 seconds for some variants. It is an ALV report.
    Please suggest how to proceed to change the code.

    Performance tuning for Data Selection Statement
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    CATT - Computer Aided Test Tool
    http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
    Test Workbench
    http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
    Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    eCATT - Extended Computer Aided Test Tool
    http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
    You can use transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, the SAP Code Inspector.

  • Performance Tuning in IR

    Hello All,
    We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database is huge, and some tables of the relational database have above 3-4 crore rows each. We created the .oce connection file using the 'Oracle Net' option; the Oracle client version is 10g. We earlier created pivots, charts and reports in those .bqy files, but had to delete them wherever possible to decrease the processing time for generating those reports.
    But deleting those from the file and retaining just the results section (the bare minimum part of the file) has still not fully solved the performance issue. In some reports the system still gives the error message 'Out of Memory' at the time of processing. The client PCs from which the reports are generated have 1 - 1.5 GB of memory. For some reports it even takes 1-2 hours to save the results after processing. In some cases the PCs hang during processing. When we extract the SQL of those reports and run it in TOAD/SQL*Plus, it does not take nearly as much time as IR.
    Would you please help us out with the aforesaid issue ASAP? Please share your views/tips/suggestions etc. in respect of performance tuning for IR. All replies would be highly appreciated.
    Regards,
    Raj

    SQL*Plus and Toad are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time spent by IR manipulating results into objects the user isn't even asking for.
    When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR will make multiple passes to apply sorts, filters and computed items existing in the results section. For some unknown reason, those three steps are performed more inefficiently than they would be performed in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted, will IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
    To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate your DAS on your server - it can only transfer 2 GB before it dies and restarts, and your requested document hangs. You can replicate the DAS multiple times, and should do so, to make sure there are enough resources available for any concurrent users to make requests and have data delivered to them.
    To tune your bqy documents...
    1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line, need to be put on your new staging table.
    2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
    3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on the table that forces 0 rows until the report is invoked. Hyperion will paginate every report BEFORE it even tells the user it has results, so this will prevent the user from waiting an hour while thousands of pages are paginated across multiple reports.
    4) Halt any object rendering until requested. Same as above - create a system programmatically for the user to tell the bqy what they want, so they are not waiting forever for a pivot and 2 reports to compile and paginate when they want just a chart.
    5) Save compressed documents.
    6) Unless this document can be run as a job, there should be NO results stored with the document but if you do save results with the document, store the calculations too so you at least don't have to wait for them to pass again.
    7) Remove all duplicate images and keep the image file size small.
    Hope this helps!
    PS: I forgot to mention - aside from results sections, in documents where the results are NOT saved, additional table sections take up very small bits of file size, and, as long as there are no excessively large images, the same is true for reports, pivots and charts. Additionally, the impact of file size only matters when the user is requesting the document. The file size is never an issue when the user is processing the report, because it has already been delivered to them and cached (in Workspace and in the web client).

  • Step for performance tuning in oracle 10g

    Hi,
    I want to know the steps of performance tuning and SQL tuning.

    I'd suggest you refer to the documentation: [Oracle Database 2 Day + Performance Tuning Guide 10g Release 2 (10.2)|http://download.oracle.com/docs/cd/B19306_01/server.102/b28051/toc.htm]
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
    [Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]

  • Performance Tuning Query on Large Tables

    Hi All,
    I am new to the forums and have a very specific use case which requires performance tuning, but there are some limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data, but for reasons of a less-than-optimal operational nature, the datasets differ in a number of ways.
    Essentially I am querying call detail record data. Table 1 (referred to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table 1 contains the CALLED_NUMBER, which is always in a consistent format, along with the CALLED_DATE_TIME and DURATION (in seconds).
    Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source, and there are no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different from those in the master table. There is also scope for the timestamp to be out by up to 30 seconds - crazy, I know, but that's just the way it is and I have no control over the source of this data. Finally, the duration (in seconds) can be out by up to 5 seconds +/-.
    I want to create a join returning all of the master data and matching the master table to the reconciliation table on CALLED_NUMBER / CALL_DATE_TIME / DURATION. I have written the query, which works from a logic perspective, but it performs very badly (master table = 200,000 records, rec table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALL_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
    I paste below the create table and insert scripts to recreate my scenario, and the query that I am using. Any practical suggestions for query/table optimisation would be greatly appreciated.
    Kind regards
    Mike
    -------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
    /* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
    --CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
    CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                            CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    -- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
    -- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
    -- AND DURATIONS (ALLOW +/- 5 SECS)                                        
    CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                       CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    --CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
    COMMIT;
    -- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 6);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 17);
    COMMIT;
    /* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
    SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
         COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER,COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
         COMPARE.DURATION COMPARE_DURATION     
    FROM
    (
    SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
    FROM TIME_TEST
    ) MAIN
    LEFT JOIN
    (
    SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
    FROM TIME_TEST_COMPARE
    ) COMPARE
    ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
    AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
    AND MAIN.DURATION BETWEEN COMPARE.DURATION-5 AND COMPARE.DURATION+5; -- DURATION is plain seconds (no /86400) and must be compared against COMPARE.DURATION, not MAIN.DURATION

    What does your execution plan look like?

  • Performance Tuning for ECC 6.0

    Hi All,
      I have an ECC 6.0 / EP 7.0 (ABAP+Java) system. My system is very slow. I have Oracle 10.2.0.1.
      Can you please guide me on how to do these steps in the system:
    1) Reorganization should be done at least for the top 10 huge tables
    and their indexes
    2) Unaccessed data can be taken out by SAP Archiving
    3) Apply the relevant corrections for the top SAP standard objects
    4) CBO update statistics must be latest for all SAP and customer objects
    I have never done performance tuning and want to do it on this system.
    Regards,
    Jitender

    Hi,
    Below are the details from ST06. Please suggest what I should do; the system performance is very bad.
    I require your inputs for performance tuning.
    CPU
    Utilization: user 3 %, system 3 %, idle 1 %, I/O wait 93 %
    Count 2; load average 0.11 (1 min), 0.21 (5 min), 0.22 (15 min)
    System calls/s 982; context switches/s 1752; interrupts/s 4528
    Memory
    Physical mem avail  Kb             6291456 Physical mem free   Kb               93992
    Pages in/s                             473 Kb paged in/s                         3784
    Pages out/s                            211 Kb paged out/s                        1688
    Pool
    Configured swap     Kb            26869896 Maximum swap-space  Kb            26869896
    Free in swap-space  Kb            21631032 Actual swap-space   Kb            26869896
    Disk with highest response time
    Name md3; response time 51 ms; utilization 2; queue 0
    Avg wait time 0 ms; avg service time 51 ms
    KB transferred/s 2; operations/s 0
    Current parameters in the system
    System:        sapretail_RET_01          Profile Parameters for SAP Buffers
    Date and Time: 08.01.2009       13:27:54
    Buffer Name                    Comment
    Profile Parameter             Value      Unit  Comment
    Program buffer                 PXA
    abap/buffersize               450000     kB    Size of program buffer
    abap/pxa                      shared           Program buffer mode
    CUA buffer                     CUA
    rsdb/cua/buffersize           3000       kB    Size of CUA buffer
    The number of max. buffered CUA objects is always: size / (2 kB)
    Screen buffer                  PRES
    zcsa/presentation_buffer_area 4400000    Byte  Size of screen buffer
    sap/bufdir_entries            2000             Max. number of buffered screens
    Generic key table buffer       TABL
    zcsa/table_buffer_area        30000000   Byte  Size of generic key table buffer
    zcsa/db_max_buftab            5000             Max. number of buffered objects
    Single record table buffer     TABLP
    rtbb/buffer_length            10000      kB    Size of single record table buffer
    rtbb/max_tables               500              Max. number of buffered tables
    Export/import buffer           EIBUF
    rsdb/obj/buffersize           4096       kB    Size of export/import buffer
    rsdb/obj/max_objects          2000             Max. number of objects in the buffer
    rsdb/obj/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/obj/mutex_n              0                Number of mutexes in Export/Import buffer
    OTR buffer                     OTR
    rsdb/otr/buffersize_kb        4096       kB    Size of OTR buffer
    rsdb/otr/max_objects          2000             Max. number of objects in the buffer
    rsdb/otr/mutex_n              0                Number of mutexes in OTR buffer
    Exp/Imp SHM buffer             ESM
    rsdb/esm/buffersize_kb        4096       kB    Size of exp/imp SHM buffer
    rsdb/esm/max_objects          2000             Max. number of objects in the buffer
    rsdb/esm/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/esm/mutex_n              0                Number of mutexes in Exp/Imp SHM buffer
    Table definition buffer        TTAB
    rsdb/ntab/entrycount          20000            Max. number of table definitions buffered
    The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
    Field description buffer       FTAB
    rsdb/ntab/ftabsize            30000      kB    Size of field description buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of table descriptions buffered
    FTAB needs about 700 bytes per used entry
    Initial record buffer          IRBD
    rsdb/ntab/irbdsize            6000       kB    Size of initial record buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of initial records buffered
    IRBD needs about 300 bytes per used entry
    Short nametab (NTAB)           SNTAB
    rsdb/ntab/sntabsize           3000       kB    Size of short nametab
    rsdb/ntab/entrycount          20000            Max. number / 2 of entries buffered
    SNTAB needs about 150 bytes per used entry
    Calendar buffer                CALE
    zcsa/calendar_area            500000     Byte  Size of calendar buffer
    zcsa/calendar_ids             200              Max. number of directory entries
    Roll, extended and heap memory EXTM
    ztta/roll_area                3000000    Byte  Roll area per workprocess (total)
    ztta/roll_first               1          Byte  First amount of roll area used in a dialog WP
    ztta/short_area               3200000    Byte  Short area per workprocess
    rdisp/ROLL_SHM                16384      8 kB  Part of roll file in shared memory
    rdisp/PG_SHM                  8192       8 kB  Part of paging file in shared memory
    rdisp/PG_LOCAL                150        8 kB  Paging buffer per workprocess
    em/initial_size_MB            4092       MB    Initial size of extended memory
    em/blocksize_KB               4096       kB    Size of one extended memory block
    em/address_space_MB           4092       MB    Address space reserved for ext. mem. (NT only)
    ztta/roll_extension           2000000000 Byte  Max. extended mem. per session (external mode)
    abap/heap_area_dia            2000000000 Byte  Max. heap memory for dialog workprocesses
    abap/heap_area_nondia         2000000000 Byte  Max. heap memory for non-dialog workprocesses
    abap/heap_area_total          2000000000 Byte  Max. usable heap memory
    abap/heaplimit                40000000   Byte  Workprocess restart limit of heap memory
    abap/use_paging               0                Paging for flat tables used (1) or not (0)
    Statistic parameters
    rsdb/staton                   1                Statistic turned on (1) or off (0)
    rsdb/stattime                 0                Times for statistic turned on (1) or off (0)
    Regards,
    Jitender

  • LDAP Performance Tuning In Large Deployments - dir_chkcredentialsonreadonly parameter

    Calendar users are experiencing long delays in logging in or updating a meeting
    with many attendees or dates. This is especially noticeable after migrating
    from calendar server 1.0x to calendar server 3.x.
    At this time, calendar performance can be improved by up to 30% by reconfiguring
    the calendar server to bind to its directory server either anonymously or with
    a specific user. The default is to bind as the user requesting directory
    information.
    There is a parameter that can be added to the server configuration file by
    editing the /users/unison/misc/unison.ini file. For anonymous binds, add
    the dir_chkcredentialsonreadonly line to the [DAS] section:
    [DAS]
    dir_updcalonly = TRUE
    dir_connection = persistent
    dir_service = LDAP,v2,NSCP,1
    enable = TRUE
    dir_chkcredentialsonreadonly = FALSE
    For binding as a specific user, also add the following to the [LDAP,...] section:
    [LDAP...]
    binddn = dn
    bindpwd = password
    Going forward, we are working on other changes in the next versions of the
    Calendar Client and Calendar Server to improve performance. Please check with
    your Netscape Sales contact for announcements on the availability of these
    versions.

    Thank you very much. I am now looking for a good performance tuning book written by Jonathan Lewis. I don't think Jonathan can come to Spain and give lessons... Anyway, I will email him...
    But could you please clarify 2 points for me:
    1 - Should I manually modify memory parameters like the buffer cache, shared pool, large pool, etc., if those areas are reported as small and flagged as causes of performance problems in the AWR, ADDM or ASH reports, even if memory is automatically managed?
    If yes, why does Oracle call it "automatic memory management" if I have to set some memory values manually?
    2 - When an ADDM report suggests I increase the SGA size, where does ADDM get this recommendation from? I mean, is the recommendation based on statistics collected from both Oracle and the OS? I am asking because, from a report I ran 3 weeks ago, ADDM suggested increasing the SGA to 10 GB (total memory of the server is 16 GB). I made the change, and from that moment the server swaps... and now the ADDM report suggests increasing the SGA to 12 GB.
    Best regards

  • LDAP Performance Tuning In Large Deployments - numconnect parameter

    LDAP Performance Tuning In Large Deployments - numconnect parameter
    Tuning the LDAP connections
    (numconnect parameter)
    This parameter translates directly into the number of unidas processes that will
    be launched when Calendar Server is started. A process takes time to load, uses
    RAM, and when active, CPU cycles. And, unidas maintains an LDAP client
    connection to a Directory Server which can only support a fixed number of these
    connections. Since a calendar client does not require constant directory access,
    having a matching number of unidas processes (to match uniengd "client"
    processes) is not a good configuration.
    Basically, a calendar client will make many requests for LDAP information, even
    if the event information being retrieved is not currently viewable. For example,
    if the calendar client is displaying a week view with 20 events and each event
    has 5 attendees, that will translate into at least 100 separate ldap search
    requests for the given name and surname of each attendee. What this means is
    that an "active" calendar user will require the services of a calendar server
    unidas connection quite often.
    Recommendation is that you increase the number of unidas connections
    to match the number of "active" calendar users. Our experience is that
    at least 20% of the number of configured users (lck_users from the
    /users/unison/misc/unison.ini file) are actually logged in, and 10% of
    those calendar users are active. For example, if you have 3000 configured
    calendar users, 600 are logged in, and 10% of those logged in are active,
    which translates into at least 60 unidas connections.
    Keep in mind that configured vs logged in vs active might be different at each
    customer site, so please adjust your number of unidas connections
    accordingly. To set this up, edit the /users/unison/log/unison.ini file and add
    the numconnect parameter to the section noted (where "hostname" is the name of
    your local host):
    [LCK]
    lck_users = 600
    [hostname,unidas]
    numconnect = 60
    The calendar server will need to be restarted after making changes
    to the /users/unison/log/unison.ini file, before those changes will
    take effect.
    Note: Due to some architectural changes in the Calendar Server 4.x, the total
    number of DAS connections should never be set higher than 250.
    Recommendations for num_connect would be a maximum of 5% of logged on users.
    However, keep in mind that 250 das connections is a very high number.
    Example:
    [LCK]
    lck_users = 5000
    [hostname,unidas]
    numconnect = 250


  • Performance tuning for Sales Order and its configuration data extraction

    Here is the data-fetching subroutine of an extract report.
    This report takes 2.5 hours to extract 36,000 records on the quality server.
    Kindly give me some suggestions for performance tuning it.
        SELECT auart vkorg vtweg spart vkbur augru
                  kunnr yxinsto bstdk vbeln kvgr1 kvgr2 vdatu
                  gwldt audat knumv
                  FROM vbak
                  INTO TABLE it_vbak
                  WHERE vbeln IN s_vbeln
                  AND erdat IN s_erdat
                  AND  auart IN s_auart
                  AND vkorg = p_vkorg
                  AND spart IN s_spart
                  AND vkbur IN s_vkbur
                  AND vtweg IN s_vtweg.
      IF NOT it_vbak[] IS INITIAL.
        SELECT mvgr1 mvgr2 mvgr3 mvgr4 mvgr5
               yyequnr vbeln cuobj
               FROM vbap
               INTO TABLE it_vbap
               FOR ALL ENTRIES IN it_vbak
               WHERE vbeln  =  it_vbak-vbeln
               AND   posnr = '000010'.
        SELECT bstkd inco1 zterm vbeln
               prsdt
               FROM vbkd
               INTO TABLE it_vbkd
               FOR ALL ENTRIES IN it_vbak
               WHERE vbeln  =  it_vbak-vbeln.
        SELECT kbetr kschl knumv
               FROM konv
               INTO TABLE it_konv
               FOR ALL ENTRIES IN it_vbak
               WHERE knumv  =  it_vbak-knumv
               AND   kschl  =  'PN00'.
        SELECT vbeln parvw kunnr
               FROM vbpa
               INTO TABLE it_vbpa
               FOR ALL ENTRIES IN it_vbak
               WHERE vbeln  =  it_vbak-vbeln
               AND parvw IN ('PE', 'YU', 'RE').
      ENDIF.
      LOOP AT it_vbap INTO wa_vbap.
        IF NOT wa_vbap-cuobj IS INITIAL.
          CALL FUNCTION 'VC_I_GET_CONFIGURATION'
               EXPORTING
                    instance            = wa_vbap-cuobj
                    language            = sy-langu
               TABLES
                    configuration       = it_config
               EXCEPTIONS
                    instance_not_found  = 1
                    internal_error      = 2
                    no_class_allocation = 3
                    instance_not_valid  = 4
                    OTHERS              = 5.
          IF sy-subrc = 0.
            READ TABLE it_config WITH KEY atnam  =  'IND_PRODUCT_LINES'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_GQ'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_VKN'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_ZE'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_HQ'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
        READ TABLE it_config WITH KEY atnam  =  'IND_CALCULATED_INST_HOURS'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
          ENDIF.
        ENDIF.
      ENDLOOP. " End of loop on it_vbap

    Hello Jaya,
    Here are some points which will increase the performance of the program:
    1.     VBAK and VBAP are header and item tables, so the relation is 1-to-many. In this case you can use an inner join instead of multiple SELECT statements.
    2.     If you are very confident in handling inner joins, you can use a single statement to get the data from VBAK, VBAP and VBKD using the inner join.
    3.     Before using FOR ALL ENTRIES, check that the internal table is not initial,
    and sort the internal table and delete adjacent duplicates.
    4.     Sort all the resulting internal tables by the required key fields and always read them using BINARY SEARCH (see the sketch below).
    You will find a number of documents that give a fair idea of what should and should not be done in a program with performance issues.
    There are also a number of function modules and BAPIs from which you can get the sales order details. You can try 'BAPISDORDER_GETDETAILEDLIST'.
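    To illustrate point 4 on your code, here is a sketch that collapses the six near-identical READ TABLE blocks into one loop over the attribute names. lt_atnams and lv_atnam are hypothetical helpers (assuming the DDIC data element ATNAM exists); the other names come from your post, and it_config is assumed to keep its header line. If the original order of it_config matters elsewhere, sort a copy instead.
    DATA: lt_atnams TYPE STANDARD TABLE OF atnam,
          lv_atnam  TYPE atnam.
    APPEND 'IND_PRODUCT_LINES'         TO lt_atnams.
    APPEND 'IND_GQ'                    TO lt_atnams.
    APPEND 'IND_VKN'                   TO lt_atnams.
    APPEND 'IND_ZE'                    TO lt_atnams.
    APPEND 'IND_HQ'                    TO lt_atnams.
    APPEND 'IND_CALCULATED_INST_HOURS' TO lt_atnams.
    SORT it_config BY atnam.           " sorted once so BINARY SEARCH is valid
    LOOP AT lt_atnams INTO lv_atnam.
      READ TABLE it_config WITH KEY atnam = lv_atnam BINARY SEARCH.
      IF sy-subrc = 0.
        wa_char-obj   = wa_vbap-cuobj.
        wa_char-atnam = it_config-atnam.
        wa_char-atwrt = it_config-atwrt.
        APPEND wa_char TO it_char.
        CLEAR wa_char.
      ENDIF.
    ENDLOOP.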
    Regards,
    Selva K.

  • Performance tuning measures in B2B 11g

    Hi Gurus,
    I am working on SOA Suite 11.1.1.3. I wanted to know which performance tuning measures could be configured in our B2B production instance.
    I need some information on how performance tuning could be done in both the outbound and inbound flows. Please suggest a few solutions which you might have implemented in your projects.
    Thanks in advance

    Hi,
    Performance and throughput depend on hardware, software and the designed architecture. On a very basic system, we are able to achieve more than 20 msg/second.
    If you want more details, drop a mail to me or our PM ([email protected]).
    Rgds,
    Nitesh Jain
    [email protected]

  • Performance tuning - Hints

    Hi Experts,
    I am new to Oracle database development. Currently I am learning performance tuning.
    I have a question: is it good practice to use hints, given that we are using Oracle Database version 10g?
    Somewhere I have read, and some senior members also suggested to me, that before 9i the optimizer was rule based, hence it was necessary to use hints.
    From 9i onwards the optimizer is cost based, hence there is no need to provide hints.
    But I have seen some queries from experts where they have used hints (even in DB version 10g).
    So can you please tell me: if we want to use hints, when should we use them?
    Thanks in advance
    Prashant Mhatre

    > some senior members suggested to me that before 9i the optimizer was rule based
    No, the CBO was introduced in Oracle 7.
    > can you please tell me: if we want to use hints, when should we use them?
    Hints can be handy during development for tuning and tweaking (but not twerking).
    In production code, hints can be poison.  Some of the worst performance problems I've seen were *caused* by inappropriate hints.
    Keep in mind I'm talking about "tuning hints".  There are some other hints (like insert /*+ append */) that fall into a totally different category.

  • Performance Tuning for OBIEE Reports

    Hi Experts,
    I had a requirement for which I ended up building a snowflake model in the Physical layer, i.e. one dimension table with three snowflake tables (materialized views).
    The key point is that the dimension table is used in most of the OOTB reports,
    so all the reports use the other three snowflake tables in their join conditions, due to which the reports take longer than ever, around 10 minutes.
    Can anyone suggest good performance tuning tips to tune the reports?
    I created some indexes on the materialized view columns and on the dimension table columns.
    I created the materialized views with cache enabled and refreshing only once in 24 hours, etc.
    Is there anything more I can do to improve performance, or do I have to consider re-designing the Physical layer without the snowflake?
    Please provide valuable suggestions and comments.
    Thank You
    Kumar

    Kumar,
    Most of the performance tuning should be done at the back end, so calculate all the aggregates in the repository itself and create a fast refresh for the MVs. You can also schedule an iBot to run the report every hour or so, so that the report data is cached and, when the user runs the report, the BI Server extracts the data from the cache.
    Hope that helps
    ~Srix

  • Performance Tuning Tips

    Dear All,
    In our project we are facing a lot of problems with performance; users are complaining about the poor performance of a few reports. We are in the process of fine-tuning the reports by following all the methods/suggestions provided by SAP (like removing SELECT queries from loops, FOR ALL ENTRIES, binary search, etc.).
    But I still want to know from you people what we can check from the Basis perspective (all the settings) and also the ABAP perspective to improve performance.
    I also have one more query: what are "table statistics", and what is their use?
    Please give us your valuable suggestions for improving performance.
    Thanks in Advance !

    Hi
    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For All Entries
    •     Selects over more than one table
    Selection Criteria
    1.     Restrict the data at the selection itself, rather than filtering it out in the ABAP code with a CHECK statement.
    2.     Select with a selection (field) list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below, which avoids the CHECK and selects with a field list:
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements - Select Queries
    1.     Avoid nested SELECTs.
    2.     Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than using APPEND statements.
    3.     When a base table has multiple indexes, the WHERE clause should be in the order of an index, either the primary or a secondary index.
    4.     For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT-ENDSELECT loop with an EXIT.
    5.     Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below, which avoids the CHECK, selects with a field list, and puts the data in one shot using INTO TABLE:
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index that has the same order of fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come in handy in programs that access data from the finance tables.
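    For example (a sketch only; the secondary index over ERDAT and AUART is hypothetical, as is T_VBAK):
    SELECT VBELN ERDAT AUART FROM VBAK INTO TABLE T_VBAK
      WHERE ERDAT = SY-DATUM    " fields named in the assumed index order
        AND AUART = 'TA'.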
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
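    A short sketch of point 5, assuming SBOOK's primary key fields are CARRID, CONNID, FLDATE and BOOKID (plus client); the key values here are made up:
    SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
      WHERE CARRID = 'LH'       " all key fields supplied, so one round trip
        AND CONNID = '0400'
        AND FLDATE = '19990101'
        AND BOOKID = '00000001'.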
    Select Statements contd... SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements contd... Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT and COUNT( * ).
    Consider the following extract.
                MAXNO = 0.
                SELECT * FROM ZFLIGHT WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
                  CHECK ZFLIGHT-FLIGH > MAXNO.
                  MAXNO = ZFLIGHT-FLIGH.
                ENDSELECT.
    The above mentioned code can be much more optimized by using the following code.
    SELECT MAX( FLIGH ) FROM ZFLIGHT INTO MAXNO WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
    Select Statements contd... For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered when using FOR ALL ENTRIES:
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements contd... Selects over more than one table
    1.     It's better to use a view instead of nested SELECT statements.
    2.     To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    3.     Instead of using nested SELECT loops, it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V:
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines (see the sketch under Point # 1 below).
    2.     Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using a secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using "MODIFY itab ... TRANSPORTING f1 f2 .." accelerates the task of updating a line of an internal table.
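    Point # 1
    A minimal sketch of point 1 (SFLIGHT is used purely as an illustration): process the rows through an explicit work area instead of a header line.
    DATA: ITAB TYPE STANDARD TABLE OF SFLIGHT,
          WA   TYPE SFLIGHT.
    LOOP AT ITAB INTO WA.
      " work on the explicit work area WA instead of an implicit header line
    ENDLOOP.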
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably, compared to LOOP-APPEND-ENDLOOP.
    10.   "DELETE ADJACENT DUPLICATES" accelerates the task of deleting duplicate entries considerably, compared to READ-LOOP-DELETE-ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably, compared to DO-DELETE-ENDDO.
    Point # 7
    Modifying selected components only makes the program faster, compared to modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copy internal tables using "ITAB2[] = ITAB1[]" rather than LOOP-APPEND-ENDLOOP.
    13.   Specify the sort key as restrictively as possible to make the program run faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    "SORT ITAB BY K." makes the program run faster compared to "SORT ITAB."
    Internal Tables contd... Hashed and Sorted Tables
    1.     For single read access, hashed tables are more optimized than sorted tables.
    2.     For partial sequential access, sorted tables are more optimized than hashed tables.
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access than the same code on a sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly, for partial sequential access, STAB runs faster than HTAB:
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    Reward if useful
