ABAP Performance Optimizer

dear abapers,
i have a problem optimizing abap code, especially report programs.
is there any documentation / step-by-step guide / tips and tricks for best practices in abap performance optimization?
if there any, could you please send to my mail : [email protected]
many thanks in advance
regards
eddhie

Hi,
Take a look at the links below; they have useful info and tips.
http://www.sapdevelopment.co.uk/perform/performhome.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
http://help.sap.com/saphelp_nw2004s/helpdata/en/6b/4b5ceb971afa469a02268df33f545f/content.htm
http://www.asug.com/client_files/Calendar/Upload/ACF3DBF.ppt#264,1,Slide 1
Cheers
VJ

Similar Messages

  • ABAP objects optimization

    Hi All,
    What are the best ways and different criteria to identify the ABAP objects for optimization?
    Please share with me the details if anybody worked on ABAP optimization project.
    I appreciate any inputs on this regard.
    Thanks & Regards,
    Suresh

    Hi Renu,
    Thanks for the info.
    By the way, SE30 gives the transaction execution runtime at that point in time. My concern here is that the runtime of some programs/transactions may vary depending on certain criteria, such as when the transaction is being carried out (i.e. during month-end, in the morning), the number of users accessing the same transaction, etc. Hence I would like to understand:
    1. Average time consumed for the transaction/report execution in a stipulated period, say within 1 month
    2. What are the different parameters we need to validate/consider for identifying the objects/reports/programs for performance optimization?
    FYI, the requirement is that the customer is seeking to tune the performance of ABAP objects.
    Thanks & Regards,
    Suresh

  • Performance optimization during database selection.

    hi gurus,
    pls can anyone explain this...
    Strong knowledge of performance optimization during database selection.
    regards,
    praveen

    Hi Praveen,
    Performance Notes 
    1. Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
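    As a minimal sketch (the flight demo table SFLIGHT is used only for illustration), the first access lets the database do the filtering, while the second transfers the whole table and filters in ABAP and should be avoided:
    * The database restricts the result set (recommended)
    DATA lt_flights TYPE STANDARD TABLE OF sflight.
    SELECT * FROM sflight INTO TABLE lt_flights
      WHERE carrid = 'LH'
        AND fldate = sy-datum.
    * Everything is transferred, then filtered in ABAP (avoid this)
    DATA: lt_all    TYPE STANDARD TABLE OF sflight,
          ls_flight TYPE sflight.
    SELECT * FROM sflight INTO TABLE lt_all.
    LOOP AT lt_all INTO ls_flight
         WHERE carrid = 'LH' AND fldate = sy-datum.
      APPEND ls_flight TO lt_flights.
    ENDLOOP.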
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
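    A minimal sketch of GROUP BY with HAVING, again using SFLIGHT only for illustration (the threshold value is arbitrary):
    * GROUP BY summarizes in the database, HAVING restricts the groups
    DATA: lv_carrid TYPE sflight-carrid,
          lv_avg    TYPE f.
    SELECT carrid AVG( price )
      INTO (lv_carrid, lv_avg)
      FROM sflight
      GROUP BY carrid
      HAVING AVG( price ) > 500.
      WRITE: / lv_carrid, lv_avg.
    ENDSELECT.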
    Effect
    If you use the WHERE and HAVING clauses correctly:
    •     There are no more physical I/Os in the database than necessary
    •     No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    •     The CPU usage of the database host is minimized
    •     The network load is reduced, since only the data that is required by the application is transferred to the application server.
      Minimize the Amount of Data Transferred 
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
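    A short sketch of both additions, assuming the demo table SBOOK for illustration:
    * Transfer at most 100 matching rows to the application server
    DATA lt_bookings TYPE STANDARD TABLE OF sbook.
    SELECT * FROM sbook UP TO 100 ROWS
      INTO TABLE lt_bookings
      WHERE carrid = 'LH'.
    * Let the database remove duplicates before transferring
    DATA lt_customers TYPE STANDARD TABLE OF sbook-customid.
    SELECT DISTINCT customid FROM sbook
      INTO TABLE lt_customers
      WHERE carrid = 'LH'.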
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
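    For illustration, a sketch that reads only three columns of SFLIGHT into a matching structure instead of using SELECT *:
    * Read only the columns the program actually needs
    DATA: BEGIN OF ls_flight,
            carrid TYPE sflight-carrid,
            connid TYPE sflight-connid,
            price  TYPE sflight-price,
          END OF ls_flight,
          lt_flights LIKE STANDARD TABLE OF ls_flight.
    SELECT carrid connid price FROM sflight
      INTO TABLE lt_flights
      WHERE carrid = 'LH'.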
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
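    A small sketch, assuming SFLIGHT and its PAYMENTSUM field only for illustration:
    * The database computes the aggregates; only the results are transferred
    DATA: lv_count TYPE i,
          lv_total TYPE sflight-paymentsum.
    SELECT COUNT( * ) SUM( paymentsum )
      INTO (lv_count, lv_total)
      FROM sflight
      WHERE carrid = 'LH'.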
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
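    A minimal sketch of changing only the required column with UPDATE ... SET, as recommended above (table, field and values are illustrative):
    * Change only the required column of the selected lines
    UPDATE sflight SET planetype = 'A380-800'
      WHERE carrid = 'LH' AND connid = '0400'.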
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
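    A sketch of an inner join with an explicit column list, using the demo tables SPFLI and SFLIGHT and an illustrative selection value:
    * Inner join reading only the columns that are needed
    DATA: BEGIN OF ls_conn,
            carrid   TYPE spfli-carrid,
            connid   TYPE spfli-connid,
            cityfrom TYPE spfli-cityfrom,
            fldate   TYPE sflight-fldate,
          END OF ls_conn,
          lt_conn LIKE STANDARD TABLE OF ls_conn.
    SELECT p~carrid p~connid p~cityfrom f~fldate
      INTO TABLE lt_conn
      FROM spfli AS p INNER JOIN sflight AS f
        ON p~carrid = f~carrid
       AND p~connid = f~connid
      WHERE p~cityfrom = 'FRANKFURT'.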
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
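    A minimal sketch of an EXISTS subquery, assuming the demo tables SCARR and SFLIGHT; the subquery is evaluated entirely in the database:
    * Select only carriers for which at least one flight exists
    DATA lt_carriers TYPE STANDARD TABLE OF scarr.
    SELECT * FROM scarr INTO TABLE lt_carriers
      WHERE EXISTS ( SELECT * FROM sflight
                       WHERE carrid = scarr~carrid ).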
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
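    (A worked FOR ALL ENTRIES example appears further down in this thread, in the "ABAP Performance standards" post, item 14.)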
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
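    A minimal sketch of the cursor technique (a single, non-nested cursor on SFLIGHT, for illustration only):
    DATA: lc_cursor TYPE cursor,
          ls_flight TYPE sflight.
    OPEN CURSOR lc_cursor FOR
      SELECT * FROM sflight WHERE carrid = 'LH'.
    DO.
      FETCH NEXT CURSOR lc_cursor INTO ls_flight.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      " process ls_flight here
    ENDDO.
    CLOSE CURSOR lc_cursor.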
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outermost level of the condition. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
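    For illustration, a sketch that replaces an OR chain on the index column CARRID with an IN list (values are illustrative):
    * Instead of:  WHERE carrid = 'AA' OR carrid = 'LH' OR carrid = 'UA'
    DATA lt_flights TYPE STANDARD TABLE OF sflight.
    SELECT * FROM sflight INTO TABLE lt_flights
      WHERE carrid IN ('AA', 'LH', 'UA').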
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system. 
    Reduce the Database Load 
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    •     Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    •     Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    •     Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1.     An ABAP program requests data from a buffered table.
    2.     The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3.     If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4.     The database server passes the data to the application server, which places it in the table buffer.
    5.     The data is passed to the program.
    When you change a buffered table, the following happens:
    1.     The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2.     All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3.     Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    •     Tables that are read very frequently
    •     Tables that are changed very infrequently
    •     Relatively small tables (few lines, few columns, or short columns)
    •     Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    •     The BYPASSING BUFFER addition in the FROM clause
    •     The DISTINCT addition in the SELECT clause
    •     Aggregate expressions in the SELECT clause
    •     Joins in the FROM clause
    •     The IS NULL condition in the WHERE clause
    •     Subqueries in the WHERE clause
    •     The ORDER BY clause
    •     The GROUP BY clause
    •     The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
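    A small sketch contrasting a buffered read with an explicit bypass; T001 is used because it is typically buffered, and the company code value is only illustrative:
    * Can be served from the table buffer
    DATA ls_t001 TYPE t001.
    SELECT SINGLE * FROM t001 INTO ls_t001
      WHERE bukrs = '1000'.
    * Reads the database directly, ignoring the buffer
    SELECT SINGLE * FROM t001 BYPASSING BUFFER INTO ls_t001
      WHERE bukrs = '1000'.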
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
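    A minimal sketch of reading without ORDER BY and sorting in the ABAP program (SFLIGHT again used only for illustration):
    * Read without ORDER BY, then sort in the ABAP program
    DATA lt_flights TYPE STANDARD TABLE OF sflight.
    SELECT * FROM sflight INTO TABLE lt_flights
      WHERE carrid = 'LH'.
    SORT lt_flights BY fldate connid.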
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes 
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    •     Establishing and terminating connections between the work process and the database.
    •     Access to database tables
    •     Access to R/3 Repository objects (ABAP programs, screens and so on)
    •     Access to catalog information (ABAP Dictionary)
    •     Controlling transactions (commit and rollback handling)
    •     Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW; they bundle the database operations resulting from the dialog into a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server 
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that are able to execute an application (that is, one dialog step each). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is, the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address space of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes. Furthermore, it is not restricted by the R/3 system architecture, and each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1.     The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2.     The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3.     While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4.     After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5.     While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    •        A dialog step from a program is assigned to a single work process for execution.
    •        The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    •        A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
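    As an illustration of such bundling, a minimal sketch of registering an update request; Z_UPDATE_FLIGHT is a hypothetical update function module (not a standard object), and the data object is only an example:
    DATA ls_flight TYPE sflight.
    * Register the database change for bundled execution in the update task
    CALL FUNCTION 'Z_UPDATE_FLIGHT' IN UPDATE TASK
      EXPORTING
        is_flight = ls_flight.
    * ... further dialog steps can register more update requests ...
    * The SAP LUW ends here; all registered updates are executed together
    COMMIT WORK.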
    Happy Reading...
    shibu

  • Performance optimization related.

    Hi.
    I am doing Performance optimization on code.
    Actually, I am doing performance optimization for old code that is JDK 1.4 based. I ran into a doubt when optimizing the code for JDK 1.5.
    Problem statement:
    Collection errors = new ArrayList();
    errors.add(new GenericException(ErrorCodes.EMPLOYEE_INVALID_PERMISSION));
    setErrorsInRequest(request, errors);
    In the above code the compiler tells us to parameterize the Collection type reference. If we don't parameterize the Collection type, will that cause a performance issue?
    Please help me out to resolve the problem statement.
    Thanks and regards,
    Leslie V
    www.googlestepper.blogspot.com
    www.scrollnroll.blogspot.com

    "If we don't parameterize the Collection type, will that cause a performance issue?"
    No. Not really. But performance isn't really the issue... it's runtime type safety which is at issue. There's nothing to prevent me from adding an Integer (like an error number) to your collection of exceptions.
    And "GenericException"... Sheesh, come down from the trees already. What am I (the user of this class/method/package) supposed to do with a friggin "GenericException"... you may as well have thrown a raw RuntimeException and saved all that cumbersome intervening try/catch code.

  • ABAP Performance standards good piece

    ABAP Performance Standards
    Following are the performance standards that need to be followed when writing ABAP programs:
    <b>1. Unused/Dead code</b>
    Avoid leaving unused code in the program. Either comment out or delete the unused code. Use Program --> Check --> Extended Program Check to find variables which are not used statically.
    <b>2. Subroutine Usage</b>
    For good modularization, the decision of whether or not to execute a subroutine should be made before the subroutine is called. For example:
    This is better:
    IF f1 NE 0.
    PERFORM sub1.
    ENDIF.
    FORM sub1.
    ENDFORM.
    Than this:
    PERFORM sub1.
    FORM sub1.
    IF f1 NE 0.
    ENDIF.
    ENDFORM.
    <b>3. Usage of IF statements</b>
    When coding IF tests, nest the testing conditions so that the outer conditions are those which are most likely to fail. For logical expressions with AND, place the most likely false condition first, and for OR, place the most likely true condition first.
    Example - nested IF's:
    IF (least likely to be true).
    IF (less likely to be true).
    IF (most likely to be true).
    ENDIF.
    ENDIF.
    ENDIF.
    Example - IF...ELSEIF...ENDIF :
    IF (most likely to be true).
    ELSEIF (less likely to be true).
    ELSEIF (least likely to be true).
    ENDIF.
    Example - AND:
    IF (least likely to be true) AND
    (most likely to be true).
    ENDIF.
    Example - OR:
    IF (most likely to be true) OR
    (least likely to be true).
    ENDIF.
    <b>4. CASE vs. nested Ifs</b>
    When testing fields "equal to" something, one can use either the nested IF or the CASE statement. The CASE is better for two reasons. It is easier to read and after about five nested IFs the performance of the CASE is more efficient.
    <b>5. MOVE statements</b>
    When records a and b have the exact same structure, it is more efficient to MOVE a TO b than to MOVE-CORRESPONDING a TO b.
    MOVE BSEG TO *BSEG.
    is better than
    MOVE-CORRESPONDING BSEG TO *BSEG.
    <b>6. SELECT and SELECT SINGLE</b>
    When using the SELECT statement, study the key and always provide as much of the left-most part of the key as possible. If the entire key can be qualified, code a SELECT SINGLE not just a SELECT. If you are only interested in the first row or there is only one row to be returned, using SELECT SINGLE can increase performance by up to three times.
    <b>7. Small internal tables vs. complete internal tables</b>
    In general it is better to minimize the number of fields declared in an internal table. While it may be convenient to declare an internal table using the LIKE command, in most cases, programs will not use all fields in the SAP standard table.
    For example:
    Instead of this:
    data: t_mara like mara occurs 0 with header line.
    Use this:
    data: begin of t_mara occurs 0,
    matnr like mara-matnr,
    end of t_mara.
    <b>8. Row-level processing and SELECT SINGLE</b>
    Similar to the processing of a SELECT-ENDSELECT loop, when calling multiple SELECT-SINGLE commands on a non-buffered table (check Data Dictionary -> Technical Info), you should do the following to improve performance:
    o Use the SELECT into <itab> to buffer the necessary rows in an internal table, then
    o sort the rows by the key fields, then
    o use a READ TABLE WITH KEY ... BINARY SEARCH in place of the SELECT SINGLE command, as sketched below. Note that this only makes sense when the table you are buffering is not too large (this decision must be made on a case-by-case basis).
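    A minimal sketch of this pattern (KNA1 and the field selection are only illustrative):
    data: begin of t_kna1 occurs 0,
            kunnr like kna1-kunnr,
            name1 like kna1-name1,
          end of t_kna1.
    data: begin of itab occurs 0,
            kunnr like kna1-kunnr,
            name1 like kna1-name1,
          end of itab.
    * buffer the needed rows once instead of one SELECT SINGLE per loop pass
    select kunnr name1 from kna1 into table t_kna1.
    * sort by the key that is used for the read
    sort t_kna1 by kunnr.
    loop at itab.
      read table t_kna1 with key kunnr = itab-kunnr binary search.
      if sy-subrc = 0.
        itab-name1 = t_kna1-name1.
        modify itab.
      endif.
    endloop.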
    <b>9. READing single records of internal tables</b>
    When reading a single record in an internal table, the READ TABLE WITH KEY is not a direct READ. This means that if the data is not sorted according to the key, the system must sequentially read the table. Therefore, you should:
    o SORT the table
    o use READ TABLE WITH KEY BINARY SEARCH for better performance.
    <b>10. SORTing internal tables</b>
    When SORTing internal tables, specify the fields to be SORTed.
    SORT ITAB BY FLD1 FLD2.
    is more efficient than
    SORT ITAB.
    <b>11. Number of entries in an internal table</b>
    To find out how many entries are in an internal table use DESCRIBE.
    DESCRIBE TABLE ITAB LINES CNTLNS.
    is more efficient than
    LOOP AT ITAB.
    CNTLNS = CNTLNS + 1.
    ENDLOOP.
    <b>12. Performance diagnosis</b>
    To diagnose performance problems, it is recommended to use the SAP transaction SE30, ABAP/4 Runtime Analysis. The utility allows statistical analysis of transactions and programs.
    <b>13. Nested SELECTs versus table views</b>
    Since Release 4.0, Open SQL allows both inner and outer table joins. A nested SELECT loop may be used to accomplish the same concept. However, the performance of nested SELECT loops is very poor in comparison to a join. Hence, to improve performance by a factor of 25x and reduce network load, you should either create a view in the Data Dictionary and use this view to select data, or code the SELECT using a join.
    <b>14. If nested SELECTs must be used</b>
    As mentioned previously, performance can be dramatically improved by using views instead of nested SELECTs, however, if this is not possible, then the following example of using an internal table in a nested SELECT can also improve performance by a factor of 5x:
    Use this:
    form select_good.
    data: t_vbak like vbak occurs 0 with header line.
    data: t_vbap like vbap occurs 0 with header line.
    select * from vbak into table t_vbak up to 200 rows.
    select * from vbap into table t_vbap
    for all entries in t_vbak
    where vbeln = t_vbak-vbeln.
    endform.
    Instead of this:
    form select_bad.
    select * from vbak up to 200 rows.
    select * from vbap where vbeln = vbak-vbeln.
    endselect.
    endselect.
    endform.
    Although using "SELECT...FOR ALL ENTRIES IN..." is generally very fast, you should be aware of the three pitfalls of using it:
    Firstly, SAP automatically removes any duplicates from the rest of the retrieved records. Therefore, if you wish to ensure that no qualifying records are discarded, the field list of the inner SELECT must be designed to ensure the retrieved records will contain no duplicates (normally, this would mean including in the list of retrieved fields all of those fields that comprise that table's primary key).
    Secondly, if you were able to code "SELECT ... FROM <database table> FOR ALL ENTRIES IN TABLE <itab>" and the internal table <itab> is empty, then all rows from <database table> will be retrieved.
    Thirdly, if the internal table supplying the selection criteria (i.e. internal table <itab> in the example "...FOR ALL ENTRIES IN TABLE <itab> ") contains a large number of entries, performance degradation may occur.
    <b>15. SELECT * versus SELECTing individual fields</b>
    In general, use a SELECT statement specifying a list of fields instead of a SELECT * to reduce network traffic and improve performance. For tables with only a few fields the improvements may be minor, but many SAP tables contain more than 50 fields when the program needs only a few. In the latter case, the performance gains can be substantial. For example:
    Use:
    select vbeln auart vbtyp from vbak
    into (vbak-vbeln, vbak-auart, vbak-vbtyp)
    where ...
    Instead of using:
    select * from vbak where ...
    <b>16. Avoid unnecessary statements</b>
    There are a few cases where one command is better than two. For example:
    Use:
    append <tab_wa> to <tab>.
    Instead of:
    <tab> = <tab_wa>.
    append <tab> (modify <tab>).
    And also, use:
    if not <tab>[] is initial.
    Instead of:
    describe table <tab> lines <line_counter>.
    if <line_counter> > 0.
    <b>17. Copying or appending internal tables</b>
    Use this:
    <tab2>[] = <tab1>[]. (if <tab2> is empty)
    Instead of this:
    loop at <tab1>.
    append <tab1> to <tab2>.
    endloop.
    However, if <tab2> is not empty and should not be overwritten, then use:
    append lines of <tab1> [from index1] [to index2] to <tab2>.
    Hope this will help you all in writing the ABAP program.

    ABAP Performance Standards
    Following are the performance standards need to be following in writing ABAP programs:
    <b>1. Unused/Dead code</b>
    Avoid leaving unused code in the program. Either comment out or delete the unused situation. Use program --> check --> extended program to check for the variables, which are not used statically.
    <b>2. Subroutine Usage</b>
    For good modularization, the decision of whether or not to execute a subroutine should be made before the subroutine is called. For example:
    This is better:
    IF f1 NE 0.
    PERFORM sub1.
    ENDIF.
    FORM sub1.
    ENDFORM.
    Than this:
    PERFORM sub1.
    FORM sub1.
    IF f1 NE 0.
    ENDIF.
    ENDFORM.
    <b>3. Usage of IF statements</b>
    When coding IF tests, nest the testing conditions so that the outer conditions are those which are most likely to fail. For logical expressions with AND , place the mostly likely false first and for the OR, place the mostly likely true first.
    Example - nested IF's:
    IF (least likely to be true).
    IF (less likely to be true).
    IF (most likely to be true).
    ENDIF.
    ENDIF.
    ENDIF.
    Example - IF...ELSEIF...ENDIF :
    IF (most likely to be true).
    ELSEIF (less likely to be true).
    ELSEIF (least likely to be true).
    ENDIF.
    Example - AND:
    IF (least likely to be true) AND
    (most likely to be true).
    ENDIF.
    Example - OR:
    IF (most likely to be true) OR
    (least likely to be true).
    <b>4. CASE vs. nested Ifs</b>
    When testing fields "equal to" something, one can use either the nested IF or the CASE statement. The CASE is better for two reasons. It is easier to read and after about five nested IFs the performance of the CASE is more efficient.
    <b>5. MOVE statements</b>
    When records a and b have the exact same structure, it is more efficient to MOVE a TO b than to MOVE-CORRESPONDING a TO b.
    MOVE BSEG TO *BSEG.
    is better than
    MOVE-CORRESPONDING BSEG TO *BSEG.
    <b>6. SELECT and SELECT SINGLE</b>
    When using the SELECT statement, study the key and always provide as much of the left-most part of the key as possible. If the entire key can be qualified, code a SELECT SINGLE not just a SELECT. If you are only interested in the first row or there is only one row to be returned, using SELECT SINGLE can increase performance by up to three times.
    <b>7. Small internal tables vs. complete internal tables</b>
    In general it is better to minimize the number of fields declared in an internal table. While it may be convenient to declare an internal table using the LIKE command, in most cases, programs will not use all fields in the SAP standard table.
    For example:
    Instead of this:
    data: t_mara like mara occurs 0 with header line.
    Use this:
    data: begin of t_mara occurs 0,
    matnr like mara-matnr,
    end of t_mara.
    <b>8. Row-level processing and SELECT SINGLE</b>
    Similar to the processing of a SELECT-ENDSELECT loop, when calling multiple SELECT-SINGLE commands on a non-buffered table (check Data Dictionary -> Technical Info), you should do the following to improve performance:
    o Use the SELECT into <itab> to buffer the necessary rows in an internal table, then
    o sort the rows by the key fields, then
    o use a READ TABLE WITH KEY ... BINARY SEARCH in place of the SELECT SINGLE command. Note that this only make sense when the table you are buffering is not too large (this decision must be made on a case by case basis).
    <b>9. READing single records of internal tables</b>
    When reading a single record in an internal table, the READ TABLE WITH KEY is not a direct READ. This means that if the data is not sorted according to the key, the system must sequentially read the table. Therefore, you should:
    o SORT the table
    o use READ TABLE WITH KEY BINARY SEARCH for better performance.
    <b>10. SORTing internal tables</b>
    When SORTing internal tables, specify the fields to SORTed.
    SORT ITAB BY FLD1 FLD2.
    is more efficient than
    SORT ITAB.
    <b>11. Number of entries in an internal table</b>
    To find out how many entries are in an internal table use DESCRIBE.
    DESCRIBE TABLE ITAB LINES CNTLNS.
    is more efficient than
    LOOP AT ITAB.
    CNTLNS = CNTLNS + 1.
    ENDLOOP.
    <b>12. Performance diagnosis</b>
    To diagnose performance problems, it is recommended to use the SAP transaction SE30, ABAP/4 Runtime Analysis. The utility allows statistical analysis of transactions and programs.
    <b>13. Nested SELECTs versus table views</b>
    Since releASE 4.0, OPEN SQL allows both inner and outer table joins. A nested SELECT loop may be used to accomplish the same concept. However, the performance of nested SELECT loops is very poor in comparison to a join. Hence, to improve performance by a factor of 25x and reduce network load, you should either create a view in the data dictionary then use this view to select data, or code the select using a join.
    <b>14. If nested SELECTs must be used</b>
    As mentioned previously, performance can be dramatically improved by using views instead of nested SELECTs, however, if this is not possible, then the following example of using an internal table in a nested SELECT can also improve performance by a factor of 5x:
    Use this:
    form select_good.
    data: t_vbak like vbak occurs 0 with header line.
    data: t_vbap like vbap occurs 0 with header line.
    select * from vbak into table t_vbak up to 200 rows.
    select * from vbap
    for all entries in t_vbak
    where vbeln = t_vbak-vbeln.
    endselect.
    endform.
    Instead of this:
    form select_bad.
    select * from vbak up to 200 rows.
    select * from vbap where vbeln = vbak-vbeln.
    endselect.
    endselect.
    endform.
    Although using "SELECT...FOR ALL ENTRIES IN..." is generally very fast, you should be aware of the three pitfalls of using it:
    Firstly, SAP automatically removes any duplicates from the rest of the retrieved records. Therefore, if you wish to ensure that no qualifying records are discarded, the field list of the inner SELECT must be designed to ensure the retrieved records will contain no duplicates (normally, this would mean including in the list of retrieved fields all of those fields that comprise that table's primary key).
    Secondly, if you were able to code "SELECT ... FROM <database table> FOR ALL ENTRIES IN TABLE <itab>" and the internal table <itab> is empty, then all rows from <database table> will be retrieved.
    Thirdly, if the internal table supplying the selection criteria (i.e. internal table <itab> in the example "...FOR ALL ENTRIES IN TABLE <itab> ") contains a large number of entries, performance degradation may occur.
    <b>15. SELECT * versus SELECTing individual fields</b>
    In general, use a SELECT statement specifying a list of fields instead of a SELECT * to reduce network traffic and improve performance. For tables with only a few fields the improvements may be minor, but many SAP tables contain more than 50 fields when the program needs only a few. In the latter case, the performace gains can be substantial. For example:
    Use:
    select vbeln auart vbtyp from table vbak
    into (vbak-vbeln, vbak-auart, vbak-vbtyp)
    where ...
    Instead of using:
    select * from vbak where ...
    <b>16. Avoid unnecessary statements</b>
    There are a few cases where one command is better than two. For example:
    Use:
    append <tab_wa> to <tab>.
    Instead of:
    <tab> = <tab_wa>.
    append <tab> (modify <tab>).
    And also, use:
    if not <tab>[] is initial.
    Instead of:
    describe table <tab> lines <line_counter>.
    if <line_counter> > 0.
    <b>17. Copying or appending internal tables</b>
    Use this:
    <tab2>[] = <tab1>[]. (if <tab2> is empty)
    Instead of this:
    loop at <tab1>.
    append <tab1> to <tab2>.
    endloop.
    However, if <tab2> is not empty and should not be overwritten, then use:
    append lines of <tab1> [from index1] [to index2] to <tab2>.
    Hope this helps you in writing ABAP programs.

  • Regarding performance optimization and tuning...

    hi all,
    <b>please provide me with performance tuning scenarios and parameters for an R/3 system with Oracle..</b>
    i heartily welcome all docs and pdf links or notes related to this issue..
    please provide your suggestions at the earliest...
    expecting your response..
    <i>Vineeth</i>

    Hello,
    there are many SAP Notes regarding performance issues. Here are just a couple of them:
    618868
    805934
    793113
    Please also have a look at the lists of the relating Notes at the end of each Note.
    But even more effective would be to read the book by
    <a href="http://www.sap-press.de/katalog/buecher/titel/gp/titelID-1155?GalileoSession=66220888A2.lRCIISlE">T.Schneider Performance Optimization Guide</a>.
    It's the best performance tuning guide. The course ADM315 (or BC315?) is also very helpful.
    Regards,
    Natalia

  • Performance Optimization Self Service- SAP help requirement

    Hi,
    I want to know whether SAP's help is required for performing the Performance Optimization self service.
    If we collect ST12 trace and use it to perform the self service then is the report which is generated from the self service sufficient to take further action or will I need some SAP expertise to implement / take corrective actions?
    In short, whether I can do the Performance Optimization by myself or I need help from SAP?
    Regards,
    Vishal

    hi,
    1) Is this service available to all the customers? (by all the customers I mean "Max Attention", "Enterprise Support" etc.)
    I answered this above in my earlier reply; have you checked it?
    Enterprise Support customers can get five EGI sessions free per year. Please check
    http://service.sap.com/esacademy
    - click browse EGIs
    For your second question I also answered above:
    Does the report itself give suggestions or do we need to provide the report to SAP?
    Here is my reply from above:
    Because the guided procedure itself is a proven methodology from SAP, the report provides a lot of suggestions against SAP best practices.
    You can use it yourself most of the time. If you still need expert guidance from SAP, book EGI sessions. They are called expert guided implementations - remote support. The duration might vary based on the session.
    Again, the service report is the source; you have to review it yourself. If you are in an EGI, SAP uses that report for guiding. Please review.
    Thanks
    Jansi

  • Performance optimization

    Hi All,
    I am trying to learn SAP Basis. I want to know what performance optimization in SAP Basis is and how it is done. Kindly help me. Thanks in advance.
    Regards,
    Subhash

    Hi Subhash,
    an answer to that question would easily fill a book, so I'd like to suggest one:
    SAP Performance Optimization Guide
    Thomas Schneider
    SAP Press
    ISBN 1-59229-069-8
    It's <b>the</b> performance bible and covers every aspect of performance optimization.
    Kind regards
    Dirk

  • How To Performance Optimize GRC Access Control 5.3

    Hi Everyone,
    GRC RIG published in BPX the following How-To-Guide:
    How To Performance Optimize GRC Access Control 5.3
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90aa3190-8386-2b10-c4ba-ced67322ea6d
    We appreciate any feedback and will keep track of all suggestions to improve future versions of this guide.
    Frank Rambo
    Director RIG EMEA
    SAP Governance, Risk and Compliance

    to access CUP:
    http://<hostname>:<portnumber>/AE
    To access RAR:
    http://<hostname>:<portnumber>/webdynpro/dispatcher/sap.com/grc~ccappcomp/ComplianceCalibrator
    to access SPM
    http://<hostname>:<port>/webdynpro/dispatcher/sap.com/grc~ffappcomp/Firefighter
    to access ERM:
    http://<hostname>:<portnumber>/RE
    Launch pad link:
    http://<server name>:5<instance>00/webdynpro/dispatcher/sap.com/grc~acappcomp/AC
    NW start page:
    http://<hostname>:<portnumber>/index.html

  • ABAP performance issues and improvements

    Hi All,
    Pl. give me the ABAP performance issue and improvement points.
    Regards,
    Hema

    Performance tuning for data selection statements
    For all entries
    FOR ALL ENTRIES creates a WHERE clause in which all the entries of the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
    The plus:
    - Large amount of data
    - Mixing processing and reading of data
    - Fast internal reprocessing of data
    - Fast
    The minus:
    - Difficult to program/understand
    - Memory could be critical (use FREE or PACKAGE SIZE)
    Some steps that might make FOR ALL ENTRIES more efficient:
    - Removing duplicates from the driver table
    - Sorting the driver table
    - If possible, converting the data in the driver table to ranges so a BETWEEN-style condition is used instead of an OR condition:
          SELECT * FROM mytable INTO TABLE lt_result
            FOR ALL ENTRIES IN i_tab
            WHERE mykey >= i_tab-low AND
                  mykey <= i_tab-high.
    Nested selects
    The plus:
    - Small amount of data
    - Mixing processing and reading of data
    - Easy to code - and understand
    The minus:
    - Large amount of data
    - When mixed processing isn't needed
    - Performance killer no. 1
    Select using JOINs (a sketch follows this list)
    The plus:
    - Very large amount of data
    - Similar to nested selects - when the accesses are planned by the programmer
    - In some cases the fastest
    - Not so memory critical
    The minus:
    - Very difficult to program/understand
    - Mixing processing and reading of data not possible
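    As a rough sketch of the join variant (the field list, the UP TO 200 ROWS limit and the AUART restriction are only illustrative, not prescribed values):
    types: begin of ty_doc,
             vbeln type vbak-vbeln,
             auart type vbak-auart,
             posnr type vbap-posnr,
             matnr type vbap-matnr,
           end of ty_doc.
    data lt_doc type standard table of ty_doc.
    select k~vbeln k~auart p~posnr p~matnr
      from vbak as k
      inner join vbap as p
        on p~vbeln = k~vbeln
      into table lt_doc
      up to 200 rows
      where k~auart = 'TA'.
    The database joins the two tables in a single statement, so only one round trip is needed and only the listed columns are transferred.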
    Use the selection criteria
    (In this and the following examples, the slower variant is shown first, followed by the faster equivalent.)
    SELECT * FROM SBOOK.                   
      CHECK: SBOOK-CARRID = 'LH' AND       
                      SBOOK-CONNID = '0400'.        
    ENDSELECT.                             
    SELECT * FROM SBOOK                     
      WHERE CARRID = 'LH' AND               
            CONNID = '0400'.                
    ENDSELECT.                              
    Use the aggregated functions
    C4A = '000'.              
    SELECT * FROM T100        
      WHERE SPRSL = 'D' AND   
            ARBGB = '00'.     
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.       
    ENDSELECT.                
    SELECT MAX( MSGNR ) FROM T100 INTO C4A 
    WHERE SPRSL = 'D' AND                
           ARBGB = '00'.                  
    Select with view
    SELECT * FROM DD01L                    
      WHERE DOMNAME LIKE 'CHAR%'           
            AND AS4LOCAL = 'A'.            
      SELECT SINGLE * FROM DD01T           
        WHERE   DOMNAME    = DD01L-DOMNAME 
            AND AS4LOCAL   = 'A'           
            AND AS4VERS    = DD01L-AS4VERS 
            AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    SELECT * FROM DD01V                    
    WHERE DOMNAME LIKE 'CHAR%'           
           AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    Select with index support
    SELECT * FROM T100            
    WHERE     ARBGB = '00'      
           AND MSGNR = '999'.    
    ENDSELECT.                    
    SELECT * FROM T002.             
      SELECT * FROM T100            
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'      
              AND MSGNR = '999'.    
      ENDSELECT.                    
    ENDSELECT.                      
    Select … Into table
    REFRESH X006.                 
    SELECT * FROM T006 INTO X006. 
      APPEND X006.                
    ENDSELECT
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    SELECT * FROM DD01L              
      WHERE DOMNAME LIKE 'CHAR%'     
            AND AS4LOCAL = 'A'.      
    ENDSELECT
    SELECT DOMNAME FROM DD01L    
    INTO DD01L-DOMNAME         
    WHERE DOMNAME LIKE 'CHAR%' 
           AND AS4LOCAL = 'A'.  
    ENDSELECT
    Key access to multiple lines
    LOOP AT TAB.          
    CHECK TAB-K = KVAL. 
    ENDLOOP.              
    LOOP AT TAB WHERE K = KVAL.     
    ENDLOOP.                        
    Copying internal tables
    REFRESH TAB_DEST.              
    LOOP AT TAB_SRC INTO TAB_DEST. 
      APPEND TAB_DEST.             
    ENDLOOP.                       
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    LOOP AT TAB.             
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.      
      ENDIF.                 
      MODIFY TAB.            
    ENDLOOP.                 
    TAB-FLAG = 'X'.                  
    MODIFY TAB TRANSPORTING FLAG     
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    DO 101 TIMES.               
      DELETE TAB_DEST INDEX 450.
    ENDDO.                      
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary
    READ TABLE TAB WITH KEY K = 'X'.
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.   " requires TAB to be sorted by K
    Comparison of internal tables
    DESCRIBE TABLE: TAB1 LINES L1,      
                    TAB2 LINES L2.      
    IF L1 <> L2.                        
      TAB_DIFFERENT = 'X'.              
    ELSE.                               
      TAB_DIFFERENT = SPACE.            
      LOOP AT TAB1.                     
        READ TABLE TAB2 INDEX SY-TABIX. 
        IF TAB1 <> TAB2.                
          TAB_DIFFERENT = 'X'. EXIT.    
        ENDIF.                          
      ENDLOOP.                          
    ENDIF.                              
    IF TAB_DIFFERENT = SPACE.           
    ENDIF.                              
    IF TAB1[] = TAB2[].  
    ENDIF.               
    Modify selected components
    LOOP AT TAB.           
    TAB-DATE = SY-DATUM. 
    MODIFY TAB.          
    ENDLOOP.               
    WA-DATE = SY-DATUM.                    
    LOOP AT TAB.                           
    MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.                               
    Appending two internal tables
    LOOP AT TAB_SRC.              
      APPEND TAB_SRC TO TAB_DEST. 
    ENDLOOP
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    LOOP AT TAB_DEST WHERE K = KVAL. 
      DELETE TAB_DEST.               
    ENDLOOP
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
          The runtime analysis (SE30)
          SQL Trace (ST05)
          Tips and Tricks tool
          The performance database
    Optimizing the load of the database
    Using table buffering
    Using buffered tables improves performance considerably. Note that in some cases a statement cannot be served from a buffered table, so these statements bypass the buffer:
    SELECT DISTINCT
    ORDER BY / GROUP BY / HAVING clause
    Any WHERE clause that contains a subquery or IS NULL expression
    JOINs
    A SELECT ... FOR UPDATE
    If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition in the FROM clause of the SELECT statement.
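    A small sketch (T005, the country table, is typically a fully buffered table; the country key 'DE' is just an example value):
    data ls_t005 type t005.
    * Normally served from the table buffer on the application server:
    select single * from t005
      into ls_t005
      where land1 = 'DE'.
    * Explicitly bypassing the buffer, e.g. when the very latest database state is needed:
    select single * from t005 bypassing buffer
      into ls_t005
      where land1 = 'DE'.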
    Use the ABAP SORT Clause Instead of ORDER BY
    The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so it is often better to move the sort from the database server to the application server.
    If you are not sorting by the primary key (i.e. not using ORDER BY PRIMARY KEY) but by another key, it can be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets this might not be feasible and you would want to let the database server do the sorting.
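    A minimal sketch of moving the sort to the application server (the date restriction and the sort fields are only example values):
    data lt_vbak type standard table of vbak.
    * Instead of: SELECT ... ORDER BY kunnr auart (sorted on the database server)
    select * from vbak
      into table lt_vbak
      where erdat >= '20240101'.
    sort lt_vbak by kunnr auart.   " sorted on the application server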
    Avoid the SELECT DISTINCT Statement
    As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT plus DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
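    A minimal sketch of the SORT + DELETE ADJACENT DUPLICATES alternative (the KUNNR field and the AUART restriction are only example values):
    types: begin of ty_kunnr,
             kunnr type vbak-kunnr,
           end of ty_kunnr.
    data lt_kunnr type standard table of ty_kunnr.
    * Instead of: SELECT DISTINCT kunnr FROM vbak ...
    select kunnr from vbak
      into table lt_kunnr
      where auart = 'TA'.
    sort lt_kunnr by kunnr.
    delete adjacent duplicates from lt_kunnr comparing kunnr.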

  • [svn:fx-3.x] 5187: Merge of performance optimization from trunk ( revision 5183)

    Revision: 5187
    Author: [email protected]
    Date: 2009-03-06 06:07:40 -0800 (Fri, 06 Mar 2009)
    Log Message:
    Merge of performance optimization from trunk (revision 5183)
    Modified Paths:
    flex/sdk/branches/3.x/frameworks/projects/framework/src/mx/skins/RectangularBorder.as

  • Program terminated: Time limit exceeded, ABAP performance, max_wprun_time

    Hi,
    I am running an ABAP program, and I get the following short dump:
    Time limit exceeded. The program has exceeded the maximum permitted runtime and has therefore been terminated. After a certain time, the program terminates to free the work process for other users who are waiting. This is to stop work processes being blocked for too long by
    - endless loops (DO, WHILE, ...),
    - database accesses with large result sets,
    - database accesses without an appropriate index (full table scan),
    - database accesses producing an excessively large result set.
    The maximum runtime of a program is set by the profile parameter "rdisp/max_wprun_time". The current setting is 10000 seconds. After this, the system gives the program a second chance. During the first half (>= 10000 seconds), a call that is blocking the work process (such as a long-running SQL statement) can occur. While the statement is being processed, the database layer will not allow it to be interrupted. However, to stop the program terminating immediately after the statement has been successfully processed, the system gives it another 10000 seconds. Hence the maximum runtime of a program is at least twice the value of the system profile parameter "rdisp/max_wprun_time".
    Last error logged in SAP kernel
    Component............ "NI (network interface)"
    Place................ "SAP-Dispatcher ok1a11cs_P06_00 on host ok1a11e0"
    Version.............. 34
    Error code........... "-6"
    Error text........... "connection to partner broken"
    Description.......... "NiPRead"
    System call.......... "recv"
    Module............... "niuxi.c"
    Line................. 1186
    Long-running programs should be started as background jobs. If this is not possible, you can increase the value of the system profile parameter "rdisp/max_wprun_time".
    Program cannot be started as a background job. We have now identified two options to solve the problem:
    - Increase the value of the system profile parameter "rdisp/max_wprun_time"
    - Improve the performance of the following SELECT statement in the program:
    SELECT ps_psp_pnr ebeln ebelp zekkn sakto FROM ekkn
    INTO CORRESPONDING FIELDS OF TABLE i_ekkn
    FOR ALL ENTRIES IN p_lt_proj
    WHERE ps_psp_pnr = p_lt_proj-pspnr
    AND ps_psp_pnr > 0.
    In EKKN we have 200 000 entries.
    Is there any other options we could try?
    Regards,
    Jarmo

    Thanks for your help, this problem seems to be quite challenging...
    In EKKN we have 200 000 entries. 199 999 entries have the value 00000000 in column ps_psp_pnr, and only one has a value which identifies a WBS element.
    I believe the problem is that there isn't any WBS element in PRPS which has the value 00000000. I guess that is the reason why EKKN is read sequentially.
    I also tried this one, but it doesn't help at all. Before the SELECT statement is executed, there are 594 entries in internal table p_lt_proj_sel:
      DATA p_lt_proj_sel LIKE p_lt_proj OCCURS 0 WITH HEADER LINE.
      p_lt_proj_sel[] = p_lt_proj[].
      DELETE p_lt_proj_sel WHERE pspnr = 0.
      SORT p_lt_proj_sel by pspnr.
      SELECT ps_psp_pnr ebeln ebelp zekkn sakto FROM ekkn
      INTO CORRESPONDING FIELDS OF TABLE i_ekkn
      FOR ALL ENTRIES IN p_lt_proj_sel
      WHERE ps_psp_pnr = p_lt_proj_sel-pspnr.
    I also checked that the index P in EKKN is active.
    Can I somehow force the optimizer to use the index?
    Regards,
    Jarmo
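    (One thing that is sometimes tried in this situation, offered here only as a hedged sketch: on Oracle-based systems Open SQL accepts a database hint via the %_HINTS addition. This assumes the secondary index on PS_PSP_PNR is deployed on the database under the name EKKN~P; database hints are database-specific and should be a last resort after the driver table has been cleaned up as above.)
    SELECT ps_psp_pnr ebeln ebelp zekkn sakto FROM ekkn
      INTO CORRESPONDING FIELDS OF TABLE i_ekkn
      FOR ALL ENTRIES IN p_lt_proj_sel
      WHERE ps_psp_pnr = p_lt_proj_sel-pspnr
      %_HINTS ORACLE 'INDEX("EKKN" "EKKN~P")'.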

  • ABAP OO:Optimization Rules Manual

    Hi Guys,
    I have a few months of programming in ABAP OO, but I feel that sometimes the code of the programs can be improved. Does anybody know if there's a manual for optimization in OO, or a manual that shows the right path we must follow in order to take advantage of OO development?
    Regards,
    Eric

    Hello Eric
    If you apply all the principles of OO-design and programming then you are on the right track. These principles include:
    - using <b>design patterns</b> (recommended reading: <i>"Design Patterns"</i> from the Head First series)
    - <b>refactoring</b> (e.g. <i>"Refactoring: Improving the Design of Existing Code"</i>, Addison-Wesley)
    - <i>"Domain-Driven Design: Tackling Complexity in the Heart of Software"</i>, Addison-Wesley
    Regards
      Uwe

  • OO ABAP + Performance

    Hi all…
    I have recently started working in an organization as an ABAP programmer. Earlier, as part of my curriculum, I have designed a couple of systems using UML and coded the same in JAVA. Having worked on OO systems, I have a few questions to ask about ABAP and Web AS.
    1) Does a company implementing SAP 6.0/ECC 6 have to decide in advance whether it will be coding its business logic/applications in Java or ABAP? Or can Web AS process both languages once installed?
    2) If a program is written in OO ABAP as well as in JAVA for a given business scenario (using the same OO (UML) structure) and executed on a Web AS, which one will perform better? Why?
    3) Why is it that organizations, in spite of the visible benefits of OO programming, are unwilling to let go of procedural ABAP and adopt OO ABAP? Anyway, they don't have to convert their existing code, just begin any new development using OO concepts. Is it because programs written in ABAP are usually not complex enough to be modeled with OO concepts, or because there are not many people who know both ABAP and OO concepts?
    Regards,
    Anoop Sahu

    1) If a company is implementing ECC, it is all written in ABAP. The R/3 (ECC) backend system is written entirely in ABAP, so there is no decision: if you want to develop an application in ECC (R/3), you use ABAP. If you want to develop a Java-based application to access the ECC backend, you may do so using tools like JCo and JCA, and the Web Dynpro Java environment supplies tools for you to access the backend. You cannot directly access the ABAP backend data/tables from a Java application; you must use the data access tools mentioned. Of course you need the Java add-on, or a standalone Java application server, to do this.
    3) A lot of shops are used to the procedural ABAP model and have not yet had a need to implement an application using OO concepts. There are advantages, but an application does not have to use it. In some cases it is a better way to go, but really it is all about the design: using ABAP OO is not worth it without a good design. It is advantageous for ABAPers to embrace ABAP OO concepts, as this will be the design paradigm of future developments in ECC. In order to understand how a standard program works, they will need to know OO concepts.
    Regards,
    Rich Heilman

  • SOA FTP Adapter Performance Optimization

    The application scenario is as below
    There is a remote FTP service which receives real-time XML files (may be 2 files per second). In my SOA 11g application, I am using FTP adapter to connect to the remote FTP server. The FTP adapter polls for new files based on the file creation timestamp and after processing, deletes the files. All good so far, no issues at all.
    However, there is a huge backlog of files. How can I improve or speed up the FTP adapter process to match or better the remote FTP file creation rate? Please note I am not doing much processing here, just persisting the files to a database. The consuming application expects a real-time file feed.
    There should be some configuration in the SOA FTP adapter to optimize the performance and beat the file-spitting FTP monster.
    Thanks

    However, the inbound FTP location receives around 30 files per second at most. How do I configure the FTP adapter to beat this rate? I looked through the document link and tried a few settings:
    MaxRaiseSize: 20
    SingleThreadModel: both true and false
    ThreadCount:20
    However, I still cannot keep up with the rate at which the files are put into this location. What configuration changes should I consider?
    Thanks
    Edited by: user5108636 on 17/10/2010 21:34
