Performance optimization during database selection.

hi gurus,
please, can anyone explain what is meant by this:
Strong knowledge of performance optimization during database selection.
regards,
praveen

Hi Praveen,
Performance Notes
1. Keep the Result Set Small
You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
Using the WHERE Clause
Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
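For example, a minimal sketch using the flight-model table SFLIGHT (the table and selection values are only illustrative):

DATA: IT_FLIGHTS TYPE STANDARD TABLE OF SFLIGHT.

* Good: the database filters, and only the required lines are transferred
SELECT * FROM SFLIGHT
  INTO TABLE IT_FLIGHTS
  WHERE CARRID = 'LH'
    AND CONNID = '0400'.

* Bad: the whole table is transferred, then filtered on the application server
* SELECT * FROM SFLIGHT INTO TABLE IT_FLIGHTS.
* DELETE IT_FLIGHTS WHERE CARRID <> 'LH' OR CONNID <> '0400'.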
Using the HAVING Clause
After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
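A sketch of how HAVING restricts the grouped lines in the database system rather than in ABAP (again using the illustrative SFLIGHT table):

DATA: CARRID   LIKE SFLIGHT-CARRID,
      CONNID   LIKE SFLIGHT-CONNID,
      MINPRICE LIKE SFLIGHT-PRICE.

* Only groups whose minimum price exceeds 500 are transferred
SELECT CARRID CONNID MIN( PRICE )
  FROM SFLIGHT
  INTO (CARRID, CONNID, MINPRICE)
  GROUP BY CARRID CONNID
  HAVING MIN( PRICE ) > 500.
  WRITE: / CARRID, CONNID, MINPRICE.
ENDSELECT.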
Effect
If you use the WHERE and HAVING clauses correctly:
•     There are no more physical I/Os in the database than necessary
•     No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
•     The CPU usage of the database host is minimized
•     The network load is reduced, since only the data that is required by the application is transferred to the application server.
2. Minimize the Amount of Data Transferred
Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
Restrict the Number of Lines
If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
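A sketch of both additions (illustrative table and values):

DATA: IT_FLIGHTS TYPE STANDARD TABLE OF SFLIGHT,
      IT_CARRIER TYPE STANDARD TABLE OF SFLIGHT-CARRID.

* Transfer at most 100 lines to the application server
SELECT * FROM SFLIGHT UP TO 100 ROWS
  INTO TABLE IT_FLIGHTS
  WHERE CARRID = 'LH'.

* Let the database remove the duplicates
SELECT DISTINCT CARRID
  FROM SFLIGHT
  INTO TABLE IT_CARRIER.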
Restrict the Number of Columns
You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
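For example, a sketch that reads only the three columns actually needed into a matching work area (names are illustrative):

DATA: BEGIN OF WA_FLIGHT,
        CARRID LIKE SFLIGHT-CARRID,
        CONNID LIKE SFLIGHT-CONNID,
        FLDATE LIKE SFLIGHT-FLDATE,
      END OF WA_FLIGHT.

* Only the listed columns are transferred
SELECT CARRID CONNID FLDATE
  FROM SFLIGHT
  INTO WA_FLIGHT
  WHERE CARRID = 'LH'.
  WRITE: / WA_FLIGHT-CARRID, WA_FLIGHT-CONNID, WA_FLIGHT-FLDATE.
ENDSELECT.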
Use Aggregate Functions
If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
Following an aggregate expression, only its result is transferred from the database.
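A sketch: only the two aggregate results cross the network, not the individual rows (illustrative table and condition):

DATA: CNT      TYPE I,
      SUMSEATS LIKE SFLIGHT-SEATSOCC.

* Aggregates without GROUP BY return exactly one line, so no ENDSELECT
SELECT COUNT( * ) SUM( SEATSOCC )
  FROM SFLIGHT
  INTO (CNT, SUMSEATS)
  WHERE CARRID = 'LH'.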
Data Transfer when Changing Table Lines
When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
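For example, a targeted UPDATE ... SET changes only the required column of the relevant lines, with no prior SELECT and no work-area transfer (illustrative table and key values):

* One statement: the database selects the lines and changes one column
UPDATE SFLIGHT
  SET   SEATSOCC = SEATSOCC + 1
  WHERE CARRID = 'LH'
    AND CONNID = '0400'
    AND FLDATE = '20240101'.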
3. Minimize the Number of Data Transfers
In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
Multiple Operations Instead of Single Operations
When you change data using INSERT, UPDATE, and DELETE, use internal tables (array operations) instead of single entries. If you read data using SELECT, it is worth reading into an internal table in one operation if you process the data more than once; otherwise, a simple SELECT loop is more efficient.
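A sketch of an array operation versus single-row inserts (IT_FLIGHTS is an illustrative, already filled internal table):

DATA: IT_FLIGHTS TYPE STANDARD TABLE OF SFLIGHT,
      WA_FLIGHT  TYPE SFLIGHT.
* ... fill IT_FLIGHTS ...

* One database access for the whole internal table
INSERT SFLIGHT FROM TABLE IT_FLIGHTS.

* Instead of one database access per line:
* LOOP AT IT_FLIGHTS INTO WA_FLIGHT.
*   INSERT INTO SFLIGHT VALUES WA_FLIGHT.
* ENDLOOP.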
Avoid Repeated Access
As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
Avoid Nested SELECT Loops
A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
ABAP Dictionary Views
You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
Joins in the FROM Clause
You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
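A sketch of an inner join over the illustrative flight-model tables SPFLI and SFLIGHT, with the column list restricted as recommended above:

DATA: BEGIN OF WA,
        CARRID LIKE SPFLI-CARRID,
        CONNID LIKE SPFLI-CONNID,
        FLDATE LIKE SFLIGHT-FLDATE,
      END OF WA.

* One SELECT instead of a nested loop over SPFLI and SFLIGHT
SELECT P~CARRID P~CONNID F~FLDATE
  INTO (WA-CARRID, WA-CONNID, WA-FLDATE)
  FROM SPFLI AS P INNER JOIN SFLIGHT AS F
       ON P~CARRID = F~CARRID AND
          P~CONNID = F~CONNID
  WHERE P~CITYFROM = 'FRANKFURT'.
  WRITE: / WA-CARRID, WA-CONNID, WA-FLDATE.
ENDSELECT.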
Subqueries in the WHERE and HAVING Clauses
Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
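For example, a sketch with an EXISTS subquery that is evaluated entirely in the database system (illustrative tables and condition):

DATA: WA_SPFLI TYPE SPFLI.

* Connections that have at least one flight with occupied seats;
* no SFLIGHT data is transferred to the application server
SELECT * FROM SPFLI
  INTO WA_SPFLI
  WHERE EXISTS ( SELECT * FROM SFLIGHT
                   WHERE CARRID = SPFLI~CARRID
                     AND CONNID = SPFLI~CONNID
                     AND SEATSOCC > 0 ).
  WRITE: / WA_SPFLI-CARRID, WA_SPFLI-CONNID.
ENDSELECT.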
Using Internal Tables
It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
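A sketch of the FOR ALL ENTRIES pattern; note the guard against an empty driver table, because FOR ALL ENTRIES with an empty internal table selects all lines:

DATA: IT_SPFLI   TYPE STANDARD TABLE OF SPFLI,
      IT_SFLIGHT TYPE STANDARD TABLE OF SFLIGHT.

* Outer selection, run once into an internal table
SELECT * FROM SPFLI
  INTO TABLE IT_SPFLI
  WHERE CITYFROM = 'FRANKFURT'.

* Inner selection, also run once
IF NOT IT_SPFLI[] IS INITIAL.
  SELECT * FROM SFLIGHT
    INTO TABLE IT_SFLIGHT
    FOR ALL ENTRIES IN IT_SPFLI
    WHERE CARRID = IT_SPFLI-CARRID
      AND CONNID = IT_SPFLI-CONNID.
ENDIF.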
Using a Cursor to Read Data
A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
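A sketch of the cursor technique (illustrative table and condition). Note that a database commit closes cursors opened without WITH HOLD; that is the DBIF_RSQL_INVALID_CURSOR situation that comes up in the threads below:

DATA: C  TYPE CURSOR,
      WA TYPE SFLIGHT.

OPEN CURSOR WITH HOLD C FOR
  SELECT * FROM SFLIGHT WHERE CARRID = 'LH'.
DO.
  FETCH NEXT CURSOR C INTO WA.
  IF SY-SUBRC <> 0.
    EXIT.
  ENDIF.
  " process WA line by line here
ENDDO.
CLOSE CURSOR C.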
4. Minimize the Search Overhead
You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
Database Indexes
Indexes speed up data selection from the database. An index consists of selected fields of a table, copied and stored in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
Formulating Conditions for Indexes
You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses, so that the system can use a database index instead of having to perform a full table scan.
Check for Equality and Link Using AND
The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
Use Positive Conditions
Index searches only support conditions that describe the result in positive terms, for example, EQ or LIKE. Negative expressions such as NE or NOT LIKE cannot use an index.
If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
Using OR
The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception is OR expressions at the outermost level of the condition. You should try to reformulate conditions that apply OR expressions to index-relevant columns, for example, into an IN condition.
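A sketch of both points, assuming an index on CARRID and CONNID (illustrative):

DATA: IT_FLIGHTS TYPE STANDARD TABLE OF SFLIGHT.

* Index-friendly: equality checks on the index fields, linked with AND
SELECT * FROM SFLIGHT
  INTO TABLE IT_FLIGHTS
  WHERE CARRID = 'LH'
    AND CONNID = '0400'.

* WHERE CARRID = 'LH' OR CARRID = 'UA' hampers the index search;
* reformulated as an IN condition it does not:
SELECT * FROM SFLIGHT
  INTO TABLE IT_FLIGHTS
  WHERE CARRID IN ('LH', 'UA').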
Using Part of the Index
If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
Checking for Null Values
The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
Avoid Complex Conditions
Avoid complex conditions, since the statements have to be broken down into their individual components by the database system. 
5. Reduce the Database Load
Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
Buffer Tables on the Application Server
You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
Whether a table can be buffered or not depends on its technical settings in the ABAP Dictionary. There are three buffering types:
•     Resident buffering (100%): the first time the table is accessed, its entire contents are loaded into the table buffer.
•     Generic buffering: you specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
•     Partial buffering (single entry): only the single entries that are read are stored in the table buffer.
When you read from buffered tables, the following happens:
1.     An ABAP program requests data from a buffered table.
2.     The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
3.     If the data already exists in the buffer, it is sent to the program directly. If not, the request is passed on to the database.
4.     The database server passes the data to the application server, which places it in the table buffer.
5.     The data is passed to the program.
When you change a buffered table, the following happens:
1.     The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
2.     All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
3.     Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
You should buffer the following types of tables:
•     Tables that are read very frequently
•     Tables that are changed very infrequently
•     Relatively small tables (few lines, few columns, or short columns)
•     Tables where delayed update is acceptable.
Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
The SELECT statement bypasses the buffer when you use any of the following:
•     The BYPASSING BUFFER addition in the FROM clause
•     The DISTINCT addition in the SELECT clause
•     Aggregate expressions in the SELECT clause
•     Joins in the FROM clause
•     The IS NULL condition in the WHERE clause
•     Subqueries in the WHERE clause
•     The ORDER BY clause
•     The GROUP BY clause
•     The FOR UPDATE addition
Furthermore, all Native SQL statements bypass the buffer.
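A sketch of the difference, assuming the standard generic buffering of table T001 mentioned above:

DATA: WA_T001 TYPE T001.

* Can be served from the table buffer
SELECT SINGLE * FROM T001
  INTO WA_T001
  WHERE BUKRS = '0001'.

* Bypasses the buffer although the table is buffered
SELECT SINGLE * FROM T001 BYPASSING BUFFER
  INTO WA_T001
  WHERE BUKRS = '0001'.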
Avoid Reading Data Repeatedly
If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database systems other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
Sort Data in Your ABAP Programs
The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
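For example (illustrative selection and sort fields):

DATA: IT_FLIGHTS TYPE STANDARD TABLE OF SFLIGHT.

* Read without ORDER BY, then sort on the application server
SELECT * FROM SFLIGHT
  INTO TABLE IT_FLIGHTS
  WHERE CARRID = 'LH'.
SORT IT_FLIGHTS BY FLDATE PRICE.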
Use Logical Databases
SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. Logical databases are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database; otherwise, it can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, the logical database has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
Work Processes 
Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
Structure of a Work Process
Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
Each work process contains two software processors and a database interface.
Screen Processor
In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
ABAP Processor
The actual processing logic of an application program is written in ABAP - SAP's own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following diagram illustrates the interaction between the screen processor and the ABAP processor when an application program is running.
Database Interface
The database interface provides the following services:
•     Establishing and terminating connections between the work process and the database.
•     Access to database tables
•     Access to R/3 Repository objects (ABAP programs, screens and so on)
•     Access to catalog information (ABAP Dictionary)
•     Controlling transactions (commit and rollback handling)
•     Table buffer administration on the application server.
The following diagram shows the individual components of the database interface:
The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
Types of Work Process
Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
Dialog Work Process
Dialog work processes deal with requests from an active user to execute dialog steps.
Update Work Process
Update work processes execute database update requests. Update requests are part of an SAP LUW; they bundle the database operations resulting from the dialog into a database LUW for processing in the background.
Background Work Process
Background work processes process programs that can be executed without user interaction (background jobs).
Enqueue Work Process
The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
Spool Work Process
The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
ABAP Application Server 
R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
Structure of an ABAP Application Server
The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
The following diagram shows the structure of an application server:
The individual components are:
Work Processes
An application server contains work processes: components that can each execute one dialog step of an application. Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
Dispatcher
Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
Gateway
Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
Shared Memory
All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is, the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address space of the relevant work process. This reduces the actual copying to a minimum.
Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
Database Connection
When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
Dispatching Dialog Steps
The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 System architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
The following diagram is an example of how this might happen:
1.     The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
2.     The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
3.     While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
4.     After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
5.     While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
From this example, we can see that:
•     A dialog step from a program is assigned to a single work process for execution.
•     The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
•     A work process can execute dialog steps of different programs from different users.
The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
Dispatching and the Programming Model
The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically associated database operations is called an SAP LUW. Unlike a database LUW, an SAP LUW includes all of the dialog steps in a logical unit, including the database update.
Happy Reading...
shibu

Similar Messages

  • Performance Optimization during the Data Archiving Initial runs

    Dear Experts,
    We are running archiving jobs (write job & preprocessing jobs) which are running for nearly 56 hrs. They are writing and analyzing nearly 4 million records.
    Any suggestions to improve performance so that these jobs complete in less time? It is IS-U archiving.
    I have gone through many SAP Notes, but I am unable to find any related to improving performance during the archiving runs. These are initial runs.
    Thank You,
    Mahesh C

    You should check the SQL statements that are being executed by these runs. Possibly by adding additional indexes you can speed up the process.
    Kind regards,
    Mark

  • Performance optimization during upgrade

    Please help me understand the use of the max-processes parameters below during an upgrade, and what their values should be based on:
    1) CRR
    2) Batch --> based on SM50 background processes? If yes, do we need to have that many processes available in the shadow or main instance?
    3) SQL
    4) R3Load
    5) Parallel

    Hi,
    Please read the note below and I think your doubts will be cleared.
    1616401 - Understanding parallelism during the Upgrades, EhPs and Support Packages implementations
    Thanks
    Rishi Abrol

  • Database migration to MAXDB and Performance problem during R3load import

    Hi All Experts,
    We want to migrate our SAP landscape from Oracle to MaxDB (SAP DB). We have exported a database of 1.2 TB using package- and table-level splitting in 16 hrs.
    Now I am importing into MaxDB, but the import is running very slowly (more than 72 hrs).
    Details of the import process are below.
    We have been using the distribution monitor to import into the target system with MaxDB release 7.7, using three parallel application servers with distributed R3load processes on each, each server with 8 CPUs.
    The database system is configured with 8 CPUs (single core) and 32 GB physical RAM. The MaxDB cache size for the DB instance is 24 GB. As per SAP recommendations, we are running R3load with 16 parallel processes. Still, the import is too slow at more than 72 hrs (not acceptable).
    We have split 12 big tables into small units using table splitting, and we have also split packages into smaller ones to run in parallel. We maintained the load order in descending order of table and package size. Still, we are not able to improve import performance.
    MAXDB parameters are set as per below.
    CACHE_SIZE 3407872
    MAXUSERTASKS 60
    MAXCPU 8
    MAXLOCKS 300000
    CAT_CACHE_SUPPLY 262144
    MaxTempFilesPerIndexCreation 131072
    We are using recent releases of all required SAP kernel utilities (R3load, etc.) during this process.
    So now I request all SAP and MaxDB experts to suggest any possible inputs to improve the R3load import performance on the MaxDB database.
    Every input will be highly appreciated.
    Please let me know if I need to provide more details about import.
    Regards
    Santosh

    Hello,
    description of parameter:
    MaxTempFilesPerIndexCreation (from version 7.7.0.3)
    Number of temporary result files in the case of parallel indexing
    The database system indexes large tables using multiple server tasks. These server tasks write their results to temporary files. When the number of these files reaches the value of this parameter, the database system has to merge the files before it can generate the actual index. This results in a decline in performance.
    As for the maximum value: I wouldn't exceed it; for a 26 GB cache, a value of 131072 should be sufficient. I used the same value for a 36 GB CACHE_SIZE.
    On the other side, do you know which task is time-consuming? Is it the table import? Index creation?
    Maybe you can run migtime on the import directory to find out.
    Stanislav

  • TOP-OF-PAGE During line-selection in alv report

    Hi experts,
    In an interactive ALV report, how do you use TOP-OF-PAGE during LINE-SELECTION? If you have used it, please give me an example with code.
    Regards
    Bhabani

    Hi,
    try this code...
    *& Report  Z_SJALV__PRG8
    REPORT  Z_SJALV__PRG8.
    TYPE-POOLS: SLIS.
    *type declaration for values from ekko
    TYPES: BEGIN OF I_EKKO,
           EBELN LIKE EKKO-EBELN,
           AEDAT LIKE EKKO-AEDAT,
           BUKRS LIKE EKKO-BUKRS,
           BSART LIKE EKKO-BSART,
           LIFNR LIKE EKKO-LIFNR,
           END OF I_EKKO.
    DATA: IT_EKKO TYPE STANDARD TABLE OF I_EKKO INITIAL SIZE 0,
          WA_EKKO TYPE I_EKKO.
    *type declaration for values from ekpo
    TYPES: BEGIN OF I_EKPO,
           EBELN LIKE EKPO-EBELN,
           EBELP LIKE EKPO-EBELP,
           MATNR LIKE EKPO-MATNR,
           MENGE LIKE EKPO-MENGE,
           MEINS LIKE EKPO-MEINS,
           NETPR LIKE EKPO-NETPR,
           END OF I_EKPO.
    DATA: IT_EKPO TYPE STANDARD TABLE OF I_EKPO INITIAL SIZE 0,
          WA_EKPO TYPE I_EKPO .
    *variable for Report ID
    DATA: V_REPID LIKE SY-REPID .
    *declaration for fieldcatalog
    DATA: I_FIELDCAT TYPE SLIS_T_FIELDCAT_ALV,
          WA_FIELDCAT TYPE SLIS_FIELDCAT_ALV.
    DATA: IT_LISTHEADER TYPE SLIS_T_LISTHEADER.
    *declaration for events table where user command or set PF status will be defined
    DATA: V_EVENTS TYPE SLIS_T_EVENT,
          WA_EVENT TYPE SLIS_ALV_EVENT.
    *declaration for layout
    DATA: ALV_LAYOUT TYPE SLIS_LAYOUT_ALV.
    *declaration for variant (type of display we want)
    DATA: I_VARIANT TYPE DISVARIANT,
          I_VARIANT1 TYPE DISVARIANT,
          I_SAVE(1) TYPE C.
    *PARAMETERS : p_var TYPE disvariant-variant.
    *Title displayed when the alv list is displayed
    DATA:  I_TITLE_EKKO TYPE LVC_TITLE VALUE 'FIRST LIST DISPLAYED'.
    DATA:  I_TITLE_EKPO TYPE LVC_TITLE VALUE 'SECONDARY LIST DISPLAYED'.
    INITIALIZATION.
      V_REPID = SY-REPID.
      PERFORM BUILD_FIELDCATLOG.
      PERFORM EVENT_CALL.
      PERFORM POPULATE_EVENT.
    START-OF-SELECTION.
      PERFORM DATA_RETRIEVAL.
      PERFORM BUILD_LISTHEADER USING IT_LISTHEADER.
      PERFORM DISPLAY_ALV_REPORT.
    *&      Form  BUILD_FIELDCATLOG
    *      Fieldcatalog has all the field details from ekko
    FORM BUILD_FIELDCATLOG.
      WA_FIELDCAT-TABNAME = 'IT_EKKO'.
      WA_FIELDCAT-FIELDNAME = 'EBELN'.
      WA_FIELDCAT-SELTEXT_M = 'PO NO.'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
      WA_FIELDCAT-TABNAME = 'IT_EKKO'.
      WA_FIELDCAT-FIELDNAME = 'AEDAT'.
      WA_FIELDCAT-SELTEXT_M = 'DATE.'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
      WA_FIELDCAT-TABNAME = 'IT_EKKO'.
      WA_FIELDCAT-FIELDNAME = 'BUKRS'.
      WA_FIELDCAT-SELTEXT_M = 'COMPANY CODE'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
      WA_FIELDCAT-TABNAME = 'IT_EKKO'.
      WA_FIELDCAT-FIELDNAME = 'BSART'.
      WA_FIELDCAT-SELTEXT_M = 'DOCUMENT TYPE'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
    WA_FIELDCAT-TABNAME = 'IT_EKKO'.
      WA_FIELDCAT-FIELDNAME = 'LIFNR'.
      WA_FIELDCAT-NO_OUT    = 'X'.
      WA_FIELDCAT-SELTEXT_M = 'VENDOR CODE'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
    ENDFORM.                    "BUILD_FIELDCATLOG
    *&      Form  EVENT_CALL
    *  we get all events - TOP OF PAGE or USER COMMAND in table v_events
    FORM EVENT_CALL.
      CALL FUNCTION 'REUSE_ALV_EVENTS_GET'
       EXPORTING
         I_LIST_TYPE           = 0
       IMPORTING
         ET_EVENTS             = V_EVENTS
     EXCEPTIONS
       LIST_TYPE_WRONG       = 1
       OTHERS                = 2.
      IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDFORM.                    "EVENT_CALL
    *&      Form  POPULATE_EVENT
    *     Events populated for TOP OF PAGE & USER COMMAND
    FORM POPULATE_EVENT.
      READ TABLE V_EVENTS INTO WA_EVENT WITH KEY NAME = 'TOP_OF_PAGE'.
      IF SY-SUBRC EQ 0.
        WA_EVENT-FORM = 'TOP_OF_PAGE'.
        MODIFY V_EVENTS FROM WA_EVENT TRANSPORTING FORM WHERE NAME =
    WA_EVENT-FORM.
      ENDIF.
      READ TABLE V_EVENTS INTO WA_EVENT WITH KEY NAME = 'USER_COMMAND'.
      IF SY-SUBRC EQ 0.
        WA_EVENT-FORM = 'USER_COMMAND'.
        MODIFY V_EVENTS FROM WA_EVENT TRANSPORTING FORM WHERE NAME =
    WA_EVENT-NAME.
      ENDIF.
    ENDFORM.                    "POPULATE_EVENT
    *&      Form  data_retrieval
    *  retrieving values from the database table ekko
    FORM DATA_RETRIEVAL.
      SELECT EBELN AEDAT BUKRS BSART LIFNR FROM EKKO INTO TABLE IT_EKKO.
    ENDFORM.                    "data_retrieval
    *&      Form  bUild_listheader
    *      text
    *      -->IT_LISTHEADER  text
    FORM BUILD_LISTHEADER USING IT_LISTHEADER TYPE SLIS_T_LISTHEADER.
      DATA HLINE TYPE SLIS_LISTHEADER.
      HLINE-TYP  = 'H'.
      HLINE-INFO = 'this is my first alv pgm'.
      APPEND HLINE TO IT_LISTHEADER.  " without APPEND the list header stays empty
    ENDFORM.                    "build_listheader
    *&      Form  display_alv_report
    *      text
    FORM DISPLAY_ALV_REPORT.
      V_REPID = SY-REPID.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
       EXPORTING
         I_CALLBACK_PROGRAM                = V_REPID
    *    I_CALLBACK_PF_STATUS_SET          = ' '
         I_CALLBACK_USER_COMMAND           = 'USER_COMMAND'
         I_CALLBACK_TOP_OF_PAGE            = 'TOP_OF_PAGE'
         I_GRID_TITLE                      = I_TITLE_EKKO
    *    I_GRID_SETTINGS                   =
    *    IS_LAYOUT                         = ALV_LAYOUT
         IT_FIELDCAT                       = I_FIELDCAT[]
    *    IT_EXCLUDING                      =
    *    IT_SPECIAL_GROUPS                 =
    *    IT_SORT                           =
    *    IT_FILTER                         =
    *    IS_SEL_HIDE                       =
    *    I_DEFAULT                         = 'ZLAY1'
         I_SAVE                            = 'A'
    *    IS_VARIANT                        = I_VARIANT
         IT_EVENTS                         = V_EVENTS
       TABLES
         T_OUTTAB                          = IT_EKKO
       EXCEPTIONS
         PROGRAM_ERROR                     = 1
         OTHERS                            = 2.
      IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDFORM.                    "display_alv_report
    *&      Form  TOP_OF_PAGE
    *      text
    FORM TOP_OF_PAGE.
      CALL FUNCTION 'REUSE_ALV_COMMENTARY_WRITE'
        EXPORTING
          IT_LIST_COMMENTARY       = IT_LISTHEADER.
    *     I_LOGO                   =
    *     I_END_OF_LIST_GRID       =
    ENDFORM.                    "TOP_OF_PAGE
    *&      Form  USER_COMMAND
    *      text
    *      -->R_UCOMM      text
    *      -->RS_SELFIELD  text
    FORM USER_COMMAND USING R_UCOMM LIKE SY-UCOMM
    RS_SELFIELD TYPE SLIS_SELFIELD.
      CASE R_UCOMM.
        WHEN '&IC1'.
          READ TABLE IT_EKKO INTO WA_EKKO INDEX RS_SELFIELD-TABINDEX.
          PERFORM BUILD_FIELDCATLOG_EKPO.
          PERFORM EVENT_CALL_EKPO.
          PERFORM POPULATE_EVENT_EKPO.
          PERFORM DATA_RETRIEVAL_EKPO.
          PERFORM BUILD_LISTHEADER_EKPO USING IT_LISTHEADER.
          PERFORM DISPLAY_ALV_EKPO.
      ENDCASE.
    ENDFORM.                    "user_command
    *&      Form  BUILD_FIELDCATLOG_EKPO
    *      text
    FORM BUILD_FIELDCATLOG_EKPO.
      REFRESH I_FIELDCAT.  " start with an empty field catalog for the second list
      WA_FIELDCAT-TABNAME = 'IT_EKPO'.
      WA_FIELDCAT-FIELDNAME = 'EBELN'.
      WA_FIELDCAT-SELTEXT_M = 'PO NO.'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
      WA_FIELDCAT-TABNAME = 'IT_EKPO'.
      WA_FIELDCAT-FIELDNAME = 'EBELP'.
      WA_FIELDCAT-SELTEXT_M = 'LINE NO'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
      WA_FIELDCAT-TABNAME = 'IT_EKPO'.
      WA_FIELDCAT-FIELDNAME = 'MATNR'.
      WA_FIELDCAT-SELTEXT_M = 'MATERIAL NO.'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
    WA_FIELDCAT-TABNAME = 'IT_EKPO'.
      WA_FIELDCAT-FIELDNAME = 'MENGE'.
      WA_FIELDCAT-SELTEXT_M = 'QUANTITY'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
    WA_FIELDCAT-TABNAME = 'IT_EKPO'.
      WA_FIELDCAT-FIELDNAME = 'MEINS'.
      WA_FIELDCAT-SELTEXT_M = 'UOM'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
    WA_FIELDCAT-TABNAME = 'IT_EKPO'.
      WA_FIELDCAT-FIELDNAME = 'NETPR'.
      WA_FIELDCAT-SELTEXT_M = 'PRICE'.
      APPEND WA_FIELDCAT TO I_FIELDCAT.
      CLEAR WA_FIELDCAT.
    ENDFORM.                    "BUILD_FIELDCATLOG_EKPO
    *&      Form  event_call_ekpo
    *  we get all events - TOP OF PAGE or USER COMMAND in table v_events
    FORM EVENT_CALL_EKPO.
      CALL FUNCTION 'REUSE_ALV_EVENTS_GET'
       EXPORTING
         I_LIST_TYPE           = 0
       IMPORTING
         ET_EVENTS             = V_EVENTS
     EXCEPTIONS
       LIST_TYPE_WRONG       = 1
       OTHERS                = 2.
      IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDFORM.                    "event_call_ekpo
    *&      Form  POPULATE_EVENT
    *       Events populated for TOP OF PAGE & USER COMMAND
    FORM POPULATE_EVENT_EKPO.
      READ TABLE V_EVENTS INTO WA_EVENT WITH KEY NAME = 'TOP_OF_PAGE'.
      IF SY-SUBRC EQ 0.
        WA_EVENT-FORM = 'TOP_OF_PAGE'.
        MODIFY V_EVENTS FROM WA_EVENT TRANSPORTING FORM WHERE NAME =
    WA_EVENT-FORM.
      ENDIF.
      ENDFORM.                    "POPULATE_EVENT
    *&      Form  TOP_OF_PAGE
    *      text
    FORM F_TOP_OF_PAGE.
      CALL FUNCTION 'REUSE_ALV_COMMENTARY_WRITE'
        EXPORTING
          IT_LIST_COMMENTARY       = IT_LISTHEADER.
    *     I_LOGO                   =
    *     I_END_OF_LIST_GRID       =
    ENDFORM.                    "TOP_OF_PAGE
    *&      Form  USER_COMMAND
    *      text
    *      -->R_UCOMM      text
    *      -->RS_SELFIELD  text
    *retreiving values from the database table ekko
    FORM DATA_RETRIEVAL_EKPO.
      SELECT EBELN EBELP MATNR MENGE MEINS NETPR
        FROM EKPO
        INTO TABLE IT_EKPO
        WHERE EBELN = WA_EKKO-EBELN.  " only the items of the selected PO
    ENDFORM.                    "data_retrieval_ekpo
    FORM BUILD_LISTHEADER_EKPO USING I_LISTHEADER TYPE SLIS_T_LISTHEADER.
      DATA: HLINE1 TYPE SLIS_LISTHEADER.
      HLINE1-TYP  = 'H'.
      HLINE1-INFO = 'CHECKING PGM'.
      APPEND HLINE1 TO I_LISTHEADER.  " without APPEND the list header stays empty
    ENDFORM.                    "build_listheader_ekpo
    FORM DISPLAY_ALV_EKPO.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
        EXPORTING
    *     I_INTERFACE_CHECK                 = ' '
    *     I_BYPASSING_BUFFER                = ' '
    *     I_BUFFER_ACTIVE                   = ' '
          I_CALLBACK_PROGRAM                = V_REPID
    *     I_CALLBACK_PF_STATUS_SET          = ' '
    *     I_CALLBACK_USER_COMMAND           = 'F_USER_COMMAND'
          I_CALLBACK_TOP_OF_PAGE            = 'TOP_OF_PAGE'
    *     I_CALLBACK_HTML_TOP_OF_PAGE       = ' '
    *     I_CALLBACK_HTML_END_OF_LIST       = ' '
    *     I_STRUCTURE_NAME                  =
    *     I_BACKGROUND_ID                   = ' '
          I_GRID_TITLE                      = I_TITLE_EKPO
    *     I_GRID_SETTINGS                   =
    *     IS_LAYOUT                         =
          IT_FIELDCAT                       = I_FIELDCAT[]
    *     IT_EXCLUDING                      =
    *     IT_SPECIAL_GROUPS                 =
    *     IT_SORT                           =
    *     IT_FILTER                         =
    *     IS_SEL_HIDE                       =
    *     I_DEFAULT                         =
          I_SAVE                            = 'A'
    *     IS_VARIANT                        =
          IT_EVENTS                         = V_EVENTS
        TABLES
          T_OUTTAB                          = IT_EKPO
        EXCEPTIONS
          PROGRAM_ERROR                     = 1
          OTHERS                            = 2.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDFORM.                    "display_alv_ekpo
    reward if helpful
    regards
    Shashi

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
    The issue that I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
    The query is like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
    Now if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have worked around the issue by creating a collection, but it is not good practice.
    I would like to get ideas from people on how to resolve or speed up the query.
    Any idea how to use sub-factoring for the above scenario? Or other methods (creating a view or materialized view is not an option)?
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results then do something with them afterwards. It's a handy trick when dealing with remote sites as sometimes you want the remote database to do the work. The reason that I ask you to use the MATERIALIZE hint for testing is just to force this, in 99.99% of cases this can be removed later. Using the WITH statement is also handled differently to inline view like SELECT * FROM (SELECT... but the same result can be mimicked with a NO_MERGE hint.
    Looking at your case I would be interested to see what the explain plans and results would be for something like the following two statements (sorry - you're going to have to check them, it's late!):
     WITH a AS
       (SELECT /*+ MATERIALIZE */ *
        FROM table_one),
     b AS
       (SELECT /*+ MATERIALIZE */ *
        FROM table_two),
     sourceqry AS
       (SELECT  b.col3 x
              , a.col1 y
              , a.col2 z
        FROM a
           , b
        WHERE a.col3 = 12345
        AND   a.col4 = 100
        AND   b.col5 = a.col5)
     SELECT apex_item.checkbox(1, x), y, z
     FROM sourceqry;

     WITH a AS
       (SELECT /*+ MATERIALIZE */ *
        FROM table_one),
     b AS
       (SELECT /*+ MATERIALIZE */ *
        FROM table_two)
     SELECT apex_item.checkbox(1, b.col3), a.col1, a.col2
     FROM a
        , b
     WHERE a.col3 = 12345
     AND   a.col4 = 100
     AND   b.col5 = a.col5;

    If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results but different to the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This normally hinders tuning, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Database selection with invalid cursor !

    hi experts,
    When I execute a SAP BW process chain, a system error occurs (SM21):
    Database selection with invalid cursor
    The database interface was called by a cursor (in a FETCH or CLOSE
    cursor operation) that is not flagged as opened. This can occur if a
    COMMIT or ROLLBACK was executed within a SELECT loop (which closes all
    opened cursors), followed by another attempt to access the cursor (for
    example, the next time the loop is executed).
    This error occurs since applying BW support package 19.
    The solution in SAP Note 1118584 is: import Support Package 17. But my support package level is already 19.
    How can I solve this error?
    thanks,
    xwu.

    I am only assuming things, but it might be worth looking closely at whether an ORA- error occurred during the execution. This could have caused a rollback and thus closed the cursor. Please check the job log, the work process trace (dev_wX file), the system log (SM21) and ST22 as well.
    Besides that, check the Oracle alert log and the user trace destination.
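    For illustration, the classic way to produce this dump is a commit inside an open SELECT loop: any COMMIT WORK (or an implicit commit, e.g. from a synchronous RFC call) closes all open database cursors, and the next loop pass then fetches from a dead cursor. A schematic sketch (the table is just an example):
    DATA ls_flight TYPE sflight.
    SELECT * FROM sflight INTO ls_flight.
      " ... process the row ...
      COMMIT WORK.  " closes all open database cursors
    ENDSELECT.      " next pass fetches on the closed cursor
                    " -> DBIF_RSQL_INVALID_CURSOR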
    Best regards, Michael

  • Error: Invalid interruption of a database selection

    Hi,
    When i execute the below code, the output is displayed without any problem.
    <b>tables lfa1.
    select * from lfa1 order by lifnr.
    write / lfa1-lifnr.
    endselect.</b>
    But however when i debug this piece of code, i get an error as below.
    ShrtText - Invalid interruption of a database selection
    Runtime Errors - DBIF_RSQL_INVALID_CURSOR
    Exceptn - CX_SY_OPEN_SQL_DB
    Can anybody help me with the reason for the occurence of this problem.

    Hi Vijay,
    Just go through the following link:
    Re: DBIF_RSQL_INVALID_CURSOR dump during debugging
    As Sooness pointed out, SELECT-ENDSELECT dumps only the first time in the debugger. Try re-executing it and it will work fine.
    Meanwhile, it is always better to use SELECT ... INTO TABLE instead of SELECT-ENDSELECT, as this minimizes database round trips.
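    For example, the LFA1 read above could be rewritten as follows (a minimal sketch; the whole result set is fetched in one array operation, so no database cursor stays open across debugger stops):
    DATA: lt_lfa1 TYPE STANDARD TABLE OF lfa1,
          ls_lfa1 TYPE lfa1.
    SELECT * FROM lfa1
      INTO TABLE lt_lfa1
      ORDER BY lifnr.
    LOOP AT lt_lfa1 INTO ls_lfa1.
      WRITE / ls_lfa1-lifnr.
    ENDLOOP.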
    Regards
    Anil Madhavan

  • DBIF_RSQL_INVALID_CURSOR: Invalid interruption of a database selection.

    I have created an external program that extracts a very large table from SAP 7.0 through RFC using the NetWeaver SDK APIs. This program repeatedly calls an RFC-enabled function module written in ABAP. On the first call the function does an "OPEN CURSOR WITH HOLD" and a "FETCH"; on each subsequent call it does only a "FETCH". The cursor is declared with "STATICS".
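    For reference, the body of the function module follows roughly this pattern (a schematic sketch only; Z_FETCH_NEXT_PACKAGE, ET_ROWS and the table LFA1 are placeholder names, not my actual code):
    FUNCTION z_fetch_next_package.
      STATICS: s_cursor  TYPE cursor,
               s_open(1) TYPE c.
      IF s_open IS INITIAL.
        OPEN CURSOR WITH HOLD s_cursor FOR
          SELECT * FROM lfa1.
        s_open = 'X'.
      ENDIF.
      " WITH HOLD survives the implicit database commit at the end of
      " each RFC call, but not a rollback, and the cursor is lost if the
      " session context is rolled out between calls
      FETCH NEXT CURSOR s_cursor
        INTO TABLE et_rows
        PACKAGE SIZE 1000.
      IF sy-subrc <> 0.
        CLOSE CURSOR s_cursor.
        CLEAR s_open.
      ENDIF.
    ENDFUNCTION.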
    When I run the program I am getting the following error:
    DBIF_RSQL_INVALID_CURSOR: Invalid interruption of a database selection.
    This error manifests itself in two ways.
    1) When the time interval between RfcInvoke() calls is about 0.5 sec., it takes 30 to 90 sec. for the error to come out.
    2) When the time interval between RfcInvoke() calls is about 0.2 sec. or lower, it takes 8 to 17 min. for the error to come out, but sometimes the program completes successfully in 25 min.
    1) What is causing this error?
    2) If it has something to do with SAP configuration, how can it be modified to extract this particular table?
    3) Is it possible in general to extract a table of any size (on the order of gigabytes) using such an approach, that is, a "pull"?
    4) If the answer is negative, what approach do you think is appropriate?
    Any help will be appreciated.

    Hi Andrew,
    This error sounds like the problem is in the way you are calling the RFC function module (n times). It looks like the FETCH on the static cursor is failing during the subsequent calls; you need to check that logic.
    Thanks,
    Greetson

  • Performance Score Card Database Configuration failed

    Hi Experts,
    Yesterday I started installing Hyperion 11.1.1.3.0. I ran into some problems; one of them is that the Performance Scorecard database configuration failed. I supplied the following credentials during configuration: [Click here |http://www.imageping.com/show.php/60465_PerformanceScoreCard.jpg.html]. This is my second attempt: for the first attempt I gave hyperion_ss as the database user name, and when I used the same user name the second time it alerted me with "Invalid Database Schema for the Product Performance Scorecard", so I created a new user, hyperion_ps. For both users I granted the CREATE SESSION, CONNECT, and RESOURCE privileges. After I got the Database Configuration Failed window I went to SQL*Plus and counted the number of tables; it gives 215. My database is Oracle 11g on Windows 2003 Server. Please help me in solving this issue.
    Thanks in advance,
    -N-

    You are getting the message because:
    In the process it started creating the tables and indexes in the schema but was unable to complete that.
    Solution:
    Delete the DB schema and then do the configuration once again.
    Provide DBA access to the schema.
    Also check whether the data file for the tablespace in the Oracle DB has reached its size limit.
    Regards,
    Bish

  • How to improve the write performance of the database

    Our application is write-intensive; it may write 2 MB/second of data to the database. How can we improve the performance of the database? We mainly write to 5 tables of the database.
    Currently the database gives no response and the CPU is 100% used.
    How should we tune this? Thanks in advance.

    Your post says more by what is not provided than by what is provided. The following is the minimum list of information needed to even begin to help you.
    1. What hardware (server, CPU, RAM, and NIC and HBA cards if any pointing to storage).
    2. Storage solution (DAS, iSCSCI, SAN, NAS). Provide manufacturer and model.
    3. If RAID which implementation of RAID and on how many disks.
    4. If NAS or SAN how is the read-write cache configured.
    5. What version of Oracle software ... all decimal points ... for example 11.1.0.6. If you are not fully patched then patch it and try again before asking for help.
    6. What, in addition to the Oracle database, is running on the server?
    2MB/sec. is very little. That is equivalent to inserting 500 VARCHAR2(4000)s per second. If I couldn't do 500 inserts per second on my laptop I'd trade it in.
    SQL> create table t (
      2  testcol varchar2(4000));
    Table created.
    SQL> set timing on
    SQL> BEGIN
      2    FOR i IN 1..500 LOOP
      3      INSERT INTO t SELECT RPAD('X', 3999, 'X') FROM dual;
      4    END LOOP;
      5  END;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.07
    SQL>
    Now what to do with the remaining 0.93 seconds. <g> And this was on a T61 Lenovo with a slow little 7500RPM drive and 4GB RAM running Oracle Database 11.2.0.1. But I will gladly repeat it using any currently supported version of the product.

  • System crash during database extension

    Hello!
    Unfortunately we experienced a system crash during a database extension. The new data file (DISKD0013) was only partly created (1.4 GB of 9 GB) when the server went down.
    These are the error messages we get when trying to start the database:
    2008-01-22 13:59:26  7246 ERR 11000 page0io_ Illegal file format
    2008-01-22 13:59:26  7246 ERR 11000 page0io_ blocksize wanted: 8192, blocksize found: 0
    2008-01-22 13:59:26  7214 ERR 11000 vattach  dev0_vattach returned FALSE
    2008-01-22 13:59:26  7214 ERR 20027 IOMan    Attach error on data volume 13: Could not open volume
    2008-01-22 13:59:26  7214 ERR     3 Admin    Database state: OFFLINE
    2008-01-22 13:59:26  7214 ERR     6 KernelCo  +   Internal errorcode, Errorcode 9050 "disk_not_accessible"
    2008-01-22 13:59:26  7214 ERR 20017 Admin     +   RestartFilesystem failed with 'I/O error'
    2008-01-22 13:59:28     0 ERR 12009 DBCRASH  Kernel exited due to signal 0(Killed after timeout with state SERVER_KILL)
    2008-01-22 13:59:28                          ___ Stopping GMT 2008-01-22 12:59:28           7.6.01   Build 012-123-145-275
    Is there a manual way to correct this problem by removing the "corrupted" DISKD0013?
    Regards,
    Fredrik

    Hi Fredrik,
    you can either:
    a) remove all the relevant parameters from the parameter file using the corresponding dbmcli command (param_directdel), or
    b) perform a parameter file recovery using dbmcli (param_restore).
    You're okay as long as nothing was written to the new volume yet.
    When using a), you need to make sure you delete all the parameter entries belonging to the data volume (size, type, etc.).
    When restoring an old parameter file, you need to make sure the parameter file's only difference is the data volume extension as the last change; otherwise you'd possibly reset more than you'd like. To check the parameter history, you can for example take a look at the <DBSID>.pah file in the /sapdb/data/config directory.
    If you're unsure what to do, i.e. if it's a productive system and you're an SAP customer: open an OSS message.
    Regards,
    Roland

  • How To Print Field Value in TOP-OF-PAGE During Line Selection.

    How do I print a field value in TOP-OF-PAGE DURING LINE-SELECTION when double-clicking on a field?

    (If my memory serves me well - I have not used this for a long time.)
    Assign values to the system fields sy-tvar0 to sy-tvar9; they replace the placeholders "&0" through "&9" in the list headers and column headers maintained in the text elements. The assignment has to happen before the detail list is produced, for example:
    AT LINE-SELECTION.
      WRITE record-vbeln TO sy-tvar3.  " fills &3 in the detail list header
    TOP-OF-PAGE DURING LINE-SELECTION then displays the header with &3 replaced by the document number.
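    Alternatively, skip sy-tvar altogether: save the picked-up value in a global variable and WRITE it yourself. A minimal sketch (gv_vbeln is a placeholder name, and record-vbeln is assumed to have been stored with HIDE while the basic list was written):
    DATA gv_vbeln TYPE vbeln.
    AT LINE-SELECTION.
      gv_vbeln = record-vbeln.   " value of the double-clicked line
      WRITE / 'detail list'.     " first WRITE triggers TOP-OF-PAGE below
    TOP-OF-PAGE DURING LINE-SELECTION.
      WRITE: / 'Details for document', gv_vbeln.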
    Regards,
    Raymond

  • Get all Persons via Database select

    Hello all,
    if you ever need to get all Persons and their string values in an easy way directly from your database, much like the XPath select "/Person", use the following database select against your FIMService database:
    --determine dynamically the columns for pivot
    DECLARE @cols AS NVARCHAR(MAX),
    @query  AS NVARCHAR(MAX)
    select @cols = STUFF((SELECT ',' + QUOTENAME(Name)
                        FROM [fim].[ObjectValueString] AS [o]
                        INNER JOIN [fim].[AttributeInternal] AS [ai]
                        ON [ai].[Key] = [o].[AttributeKey]
                        WHERE            
                        [o].[ObjectTypeKey] = 24   
                        group by Name FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),1,1,'')
    set @query = N'SELECT ' + @cols + N' from
                 (SELECT        
                    [ai].[Name],
                    CAST ([o].[ValueString] AS NVARCHAR(MAX)) as Value,
                    ROW_NUMBER() over(partition by [ai].Name order by [o].[ValueString]) rn
                    FROM [fim].[ObjectValueString] AS [o]
                    INNER JOIN [fim].[AttributeInternal] AS [ai]
                    ON [ai].[Key] = [o].[AttributeKey]
                    WHERE       
                    [o].[ObjectTypeKey] = 24 --Type for Persons
                 ) x
             pivot (
                Max(Value)
                for Name in (' + @cols + N')
             ) p '
    exec sp_executesql @query;

    You could create new stored procedures which do not use the additional search criteria.
    For example: sp_get_all_room_names()
    or: sp_get_all_room_numbers()
    Alternatively, when searching by integers, your stored procedure could accept a max and a min value. Your Java class which calls this stored procedure could then intelligently search for one value (both min/max the same) or all values (pick decent min/max values).
    For example: sp_get_room_numbers(min, max)
    I hope this gives you a few ideas to run with.

  • Database selection with invalid cursor with MaxDB database

    Hi Experts,
    I encountered this error:
    "Database selection with invalid cursor
    The database interface was called by a cursor (in a FETCH or CLOSE
    cursor operation) that is not flagged as opened. This can occur if a
    COMMIT or ROLLBACK was executed within a SELECT loop (which closes all
    opened cursors), followed by another attempt to access the cursor (for
    example, the next time the loop is executed)."
    We applied BW support package 19 early this month. It was previously working fine, but this problem has occurred over the last 2 days.
    We are using a MaxDB database.
    I'd really appreciate any speedy responses.

    Hi,
    We finally resolved the issue.
    The solution:
    We ran the RFC connection test in SM59 and found a connection error.
    The error was related to the J2EE_ADMIN user.
    So we reset the J2EE_ADMIN ID in SU01.
    The problem went away.
    Many thanks
