Pin the table in memory

Dear Hussein,
I need to pin the table (MTL_LOT_NUMBERS) in memory. Please let me know how to do this in an 11g database.
Regards,
Bilal

Sandeep,
I have 4764 records in MTL_LOT_NUMBERS. The problem is that when I generate the lot number, the system hangs. I generated a trace file for that session, and in the trace file I found:
call     count    cpu     elapsed  disk  query    current  rows
-------  -------  ------  -------  ----  -------  -------  ------
Parse        173    0.03     0.03     0        1        0       0
Execute   733595  105.46   103.49     0       22       37       8
Fetch     733889    9.24     9.44   192  1729873        0  733808
-------  -------  ------  -------  ----  -------  -------  ------
total    1467657  114.73   112.97   192  1729896       37  733816

Misses in library cache during parse: 65
Misses in library cache during execute: 53
so I decided to pin MTL_LOT_NUMBERS in memory. Please suggest how.
Regards,
Bilal

Similar Messages

  • Table in memory

    Hi all,
    Is there a possibility in 11g to load tables from disk into memory at database start time, so that all further read access to these tables is handled in memory?
    best regards
    georg

    It is possible to keep tables, especially small ones, in memory by putting them into the keep pool.
    The steps to take are:
    - Determine the size of the table.
    - Adjust the size of the keep pool so the table fits into it, using:
    alter system set db_keep_cache_size=<size of table + 20%> scope=both;
    After that, 'pin' the table into the pool using:
    ALTER TABLE <table> STORAGE ( BUFFER_POOL KEEP );
    MAKE SURE that your SGA is large enough for the keep pool to be created. It is preferable to increase the SGA by the size the keep pool will take; in that case no other memory buffers will be reduced in size.
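    For illustration, here is a minimal sketch of these steps for the table from the original question (the 16M figure is only an example; derive the real value from DBA_SEGMENTS first):
    -- Check the size of the table segment (requires access to DBA_SEGMENTS)
    SELECT ROUND(bytes/1024/1024, 1) AS size_mb
      FROM dba_segments
     WHERE segment_name = 'MTL_LOT_NUMBERS';
    -- Size the keep pool with some headroom, then assign the table to it
    ALTER SYSTEM SET db_keep_cache_size = 16M SCOPE=BOTH;
    ALTER TABLE mtl_lot_numbers STORAGE (BUFFER_POOL KEEP);
    -- Verify the buffer pool assignment
    SELECT table_name, buffer_pool
      FROM dba_tables
     WHERE table_name = 'MTL_LOT_NUMBERS';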
    HTH
    FJFranken

  • Export a table to memory ID and import it in the Workflow.

    Hi,
    For a particular requirement,
    We are triggering Workflow from the User Exit MV45AFZZ.
    Before triggering this WF, I am trying to export an internal table to a memory ID and to import it back from the same memory ID into an internal table in the workflow task. This is failing,
    even though the names of the internal tables and the memory IDs are identical.
    Can we do it this way at all? If yes, please let me know how.
    Thanks & Regards,
    Mallika Maktala

    Initially,
    I populated the values of the internal table into a database table, immediately called the workflow, and in a task tried to retrieve the data from the database table.
    But though there is some data in the table, it didn't retrieve any data, as at that point the table may not have been updated yet.
    So now I have used this import/export concept instead.
    I think it is in the same session, but it is still not working.
    Any solution?
    Mallika Maktala
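    A note on why this fails: ABAP memory (EXPORT ... TO MEMORY ID) is only visible within the same internal session, and a workflow task normally runs in a separate session, so the IMPORT finds nothing. A cross-session alternative is the shared buffer; a minimal sketch, where the area 'zw', the key 'ZWF_ITEMS' and the line type VBAP are only placeholders:
    * In the user exit (MV45AFZZ):
    DATA lt_items TYPE STANDARD TABLE OF vbap.
    EXPORT lt_items = lt_items TO SHARED BUFFER indx(zw) ID 'ZWF_ITEMS'.
    * In the workflow task:
    DATA lt_items TYPE STANDARD TABLE OF vbap.
    IMPORT lt_items = lt_items FROM SHARED BUFFER indx(zw) ID 'ZWF_ITEMS'.
    IF sy-subrc <> 0.
    * entry was displaced, or was written on another application server
    ENDIF.
    The shared buffer is kept per application server and entries can be displaced under memory pressure; EXPORT ... TO DATABASE indx(zw) ID ... followed by COMMIT WORK is the more robust variant if the workflow may run elsewhere.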

  • It cannot reference the dynamic internal table in memory as an object.

    Hi,
    I am getting a syntax error in the second SELECT. I guess it cannot reference the dynamic internal table in memory as an object.
    The internal table contains different fields from multiple tables, and it gets created OK. The first SELECT within the loop then executes OK, allowing me to read the multiple tables listed in ITABLES.
    * Create dynamic internal table and assign to FS
      call method cl_alv_table_create=>create_dynamic_table
        EXPORTING
          it_fieldcatalog = ifc
        IMPORTING
          ep_table        = dy_table.
    ***OK, the dynamic table is created here
      assign dy_table->* to <dyn_table>.
    * Create dynamic work area and assign to FS
      create data dy_line like line of <dyn_table>.
      assign dy_line->* to <dyn_wa>.
    loop at ITABLES.
    ***OK, no syntax errors in this select here
      select * appending corresponding fields of table <dyn_table>
                 from (ITABLES-TABNAME).
    endloop.
    data: ikonp like konp occurs 0 with header line.
    ***NOT OK, there are syntax errors here
      select * into table ikonp
      from KONP for all entries in <dyn_table>
      where knumh = <dyn_table>-knumh.
    Some of the tables in ITABLES are pooled tables, which do not allow me to use INNER JOINs. Therefore I need a second SELECT in order to read the pricing table KONP.
    Thanks in advance for any hint.

    Hi Abel,
    You must be getting the syntax error that <dyn_table> does not contain the field knumh.
    Try putting the entire WHERE clause in a character-type variable, say wa_char, and then using it in your query as
    where (wa_char) ... it may work:
    concatenate 'knumh' '=' '<dyn_table>-knumh' INTO wa_char SEPARATED BY space.
    SELECT ... from konp...
    where (wa_char).
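    Spelled out, that workaround might look like this (an untested sketch; it assumes the dynamic table really contains a KNUMH column, and guards against an empty table, since FOR ALL ENTRIES with an empty table selects everything):
    DATA: wa_char TYPE string.
    DATA: ikonp LIKE konp OCCURS 0 WITH HEADER LINE.
    CONCATENATE 'knumh' '=' '<dyn_table>-knumh'
      INTO wa_char SEPARATED BY space.
    IF NOT <dyn_table> IS INITIAL.
      SELECT * INTO TABLE ikonp
        FROM konp
        FOR ALL ENTRIES IN <dyn_table>
        WHERE (wa_char).
    ENDIF.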
    Revert if it doesn't work.

  • SAP paging overflow when storing data in the ABAP/4 memory.

    I am trying to create a DataSource in BI 7.0 in the Data Warehousing Workbench, but during the process, when I need to select a view, I get an error, detailed in the following extract from the error file. Please go through it and assist.
    Runtime Errors         MEMORY_NO_MORE_PAGING
    Date and Time          06.06.2009 14:21:35
    Short text
    SAP paging overflow when storing data in the ABAP/4 memory.
    What happened?
    The current program requested storage space from the SAP paging area,
    but this request could not be fulfilled.
    It may be possible to increase the size of this area in the SAP system profile.
    What can you do?
    Note which actions and input led to the error.
    For further help in handling the problem, contact your SAP administrator
    You can use the ABAP dump analysis transaction ST22 to view and manage
    termination messages, in particular for long term reference.
    Error analysis
    The ABAP/4 runtime system and the ABAP/4 compiler use a common
    interface to store different types of data in different parts of
    the SAP paging area. This data includes the
    ABAP/4 memory (EXPORT TO MEMORY), the SUBMIT REPORT parameters,
    CALL DIALOG and CALL TRANSACTION USING, as well as internally defined
    macros (specified with DEFINE).
    To store further data in the SAP paging area, you attempted to
    allocate a new SAP paging block, but no more blocks were
    available.
    When the SAP paging overflow occurred, the ABAP/4 memory contained
    entries for 20 different IDs.
    Please note:
    To facilitate error handling, the ABAP/4 memory was
    deleted.
    How to correct the error
    The amount of storage space (in bytes) filled at termination time was:
    Roll area...................... 8176
    Extended memory (EM)........... 13587912
    Assigned memory (HEAP)......... 0
    Short area..................... " "
    Paging area.................... 40960
    Maximum address space.......... " "
    By calling Transaction SM04 and choosing 'Goto' -> 'Block list',
    you can display an overview of the current roll and paging memory
    levels resulting from active users and their transactions. Try to
    decide from this whether another program requires a lot of memory
    space (perhaps too much).
    The system log contains more detailed information about the
    termination. Check for any unwanted recursion.
    Determine whether the error also occurs with small volumes of
    data. Check the profile (parameter "rdisp/PG_MAXFS", see
    Installation Guidelines).
    Is the disk or the file system that contains the paging file
    full to the extent that it cannot be increased, although it has
    not yet reached the size defined in the profile? Is the
    operating system configured to accommodate files of such a
    size?
    The ABAP processor stores different types of data in the SAP
    paging area. These include:
    (1) Data clusters (EXPORT ... TO MEMORY ...)
    (2) Parameters for calling programs (SUBMIT REPORT ...),
    Dialog modules (CALL DIALOG ...) and transactions
    (CALL TRANSACTION USING ...)
    (3) Internally defined program macros (DEFINE ...)
    Accordingly, you should check the relevant statements in a program
    that results in an overflow of the SAP paging area.
    It is critical when many internal tables, possibly with
    different IDs, are written to memory (EXPORT).
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following
    keywords:
    "MEMORY_NO_MORE_PAGING" " "
    "SAPLWDTM" or "LWDTMU20"
    "TABC_ACTIVATE_AND_UPDATE"
    If you cannot solve the problem yourself and want to send an error
    notification to SAP, include the following information:
    1. The description of the current problem (short dump)
    To save the description, choose "System->List->Save->Local File
    (Unconverted)".
    2. Corresponding system log
    Display the system log by calling transaction SM21.
    Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a program of your own or a modified SAP
    program: The source code of the program
    In the editor, choose "Utilities->More
    Utilities->Upload/Download->Download".
    4. Details about the conditions under which the error occurred or which
    actions and input led to the error.

    Hi Huggins,
    Maintenance of the Paging File is owned by your basis team.
    They should increase this in order for your transaction to process successfully.
    Just for your reference, in case the OS used is Windows Server 2003, the paging file value can be checked as follows:
    Right-click My Computer > Properties.
    Then go to the Advanced tab;
    then, in the Performance section, click Settings;
    then the Advanced tab again. The paging file settings can be seen there
    (and can be adjusted from there as well).
    In general, the appropriate paging file size depends on the amount of RAM available in the hardware.
    Hope this helps. Thanks a lot.
    - Jeff
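    For reference, the SAP-side limit that the dump points at lives in the instance profile. A sketch of the parameters a basis administrator would typically review (the values are examples only; both are measured in 8 KB blocks):
    # maximum size of the SAP paging file
    rdisp/PG_MAXFS = 32768
    # portion of the paging buffer held in shared memory
    rdisp/PG_SHM = 8192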

  • How to download the data which is in the table?

    How can I download the data that is in a table?
    I want to download the data of every field in the table, and once the download is finished I have to set a flag field in the table to 'download is finished'.
    Can anyone help me with this?
    Phani.

    One way is to download the data directly from the database table using the path SE11 -> give the table name -> Execute -> System -> List -> Save -> Local File.
    There you can download the data.
    The other way is to use code.
    The following code should be helpful to you:
    DATA: ITAB     TYPE TRUXS_T_TEXT_DATA,
          FILENAME TYPE STRING.
    CONSTANTS: C_ASC TYPE CHAR10 VALUE 'ASC',
               C_X   TYPE C     VALUE 'X'.
    DATA: L_STATUS  TYPE C,
          L_MESSAGE TYPE PMST_RAW_MESSAGE,
          L_SUBJECT TYPE SO_OBJ_DES.
    DATA: L_FILELENGTH TYPE I.
      PERFORM download_to_pc
                  TABLES
                     itab
                  USING
                     filename
                     c_asc
                     c_x
                  CHANGING
                     l_status
                     l_message
                     l_filelength.
    FORM DOWNLOAD_TO_PC TABLES   DOWNLOADTAB
                        USING    FILENAME
                                 FILETYPE TYPE CHAR10
                                 DELIMITED
                        CHANGING STATUS
                                 MESSAGE TYPE PMST_RAW_MESSAGE
                                 FILELENGTH TYPE I.
      DATA: L_FILE TYPE STRING,
            L_SEP.
      L_FILE = FILENAME.
      IF NOT DELIMITED IS INITIAL.
        L_SEP = 'X'.
      ENDIF.
      STATUS = 'S'.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          FILENAME                = L_FILE
          FILETYPE                = FILETYPE
          WRITE_FIELD_SEPARATOR   = L_SEP
        IMPORTING
          FILELENGTH              = FILELENGTH
        TABLES
          DATA_TAB                = DOWNLOADTAB
        EXCEPTIONS
          FILE_WRITE_ERROR        = 1
          NO_BATCH                = 2
          GUI_REFUSE_FILETRANSFER = 3
          INVALID_TYPE            = 4
          NO_AUTHORITY            = 5
          UNKNOWN_ERROR           = 6
          HEADER_NOT_ALLOWED      = 7
          SEPARATOR_NOT_ALLOWED   = 8
          FILESIZE_NOT_ALLOWED    = 9
          HEADER_TOO_LONG         = 10
          DP_ERROR_CREATE         = 11
          DP_ERROR_SEND           = 12
          DP_ERROR_WRITE          = 13
          UNKNOWN_DP_ERROR        = 14
          ACCESS_DENIED           = 15
          DP_OUT_OF_MEMORY        = 16
          DISK_FULL               = 17
          DP_TIMEOUT              = 18
          FILE_NOT_FOUND          = 19
          DATAPROVIDER_EXCEPTION  = 20
          CONTROL_FLUSH_ERROR     = 21
          OTHERS                  = 22.
      IF SY-SUBRC <> 0.
        STATUS = 'E'.
        CASE SY-SUBRC.
          WHEN 1.
            MESSAGE = 'gui_download::file write error'.
          WHEN 2.
            MESSAGE = 'gui_download::no batch'.
          WHEN 3.
            MESSAGE = 'gui_download::gui refuse file transfer'.
          WHEN 4.
            MESSAGE = 'gui_download::invalid type'.
          WHEN 5.
            MESSAGE = 'gui_download::no authority'.
          WHEN 6.
            MESSAGE = 'gui_download::unknown error'.
          WHEN 7.
            MESSAGE = 'gui_download::header not allowed'.
          WHEN 8.
            MESSAGE = 'gui_download::separator not allowed'.
          WHEN 9.
            MESSAGE = 'gui_download::filesize not allowed'.
          WHEN 10.
            MESSAGE = 'gui_download::header too long'.
          WHEN 11.
            MESSAGE = 'gui_download::dp error create'.
          WHEN 12.
            MESSAGE = 'gui_download::dp error send'.
          WHEN 13.
            MESSAGE = 'gui_download::dp error write'.
          WHEN 14.
            MESSAGE = 'gui_download::unknown dp error'.
          WHEN 15.
            MESSAGE = 'gui_download::access denied'.
          WHEN 16.
            MESSAGE = 'gui_download::dp out of memory'.
          WHEN 17.
            MESSAGE = 'gui_download::disk full'.
          WHEN 18.
            MESSAGE = 'gui_download::dp timeout'.
          WHEN 19.
            MESSAGE = 'gui_download::file not found'.
          WHEN 20.
            MESSAGE = 'gui_download::dataprovider exception'.
          WHEN 21.
            MESSAGE = 'gui_download::control flush error'.
          WHEN 22.
            MESSAGE = 'gui_download::Error'.
        ENDCASE.
      ENDIF.
    ENDFORM.             "download_to_pc
    At the end, please reward points if this helps.
    Thanks ,
    Rahul
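    The second half of the original question, setting a 'download is finished' flag, is not covered above. A minimal sketch, where ZMYTAB and its DOWNLOADED field are hypothetical names:
    IF l_status = 'S'.
    * mark the rows as downloaded only after GUI_DOWNLOAD succeeded
      UPDATE zmytab SET downloaded = 'X' WHERE downloaded = space.
      COMMIT WORK.
    ENDIF.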

  • How to select the data efficiently from the table

    hi everyone,
      I need some help in selecting data from the FAGLFLEXA table. I have to select many amounts from different groups of G/L accounts
    (the groups are predefined here, each containing a set of G/L account numbers).
    If I run a separate SELECT for each group, it will be a performance issue. To avoid that, what should I do? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select once and keep the data in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES or a ranges table; a sketch of one possible implementation follows below.
    Check the performance notes below for more detail.
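    One possible implementation for the account-group case, sketched under hypothetical names (lt_groups maps each G/L account to its group; RACCT and HSL are FAGLFLEXA fields; add fiscal year/ledger restrictions as needed):
    DATA: BEGIN OF ls_group,
            grpid TYPE char10,
            racct TYPE faglflexa-racct,
          END OF ls_group,
          lt_groups LIKE STANDARD TABLE OF ls_group,
          ls_grp LIKE ls_group.
    DATA: lr_racct TYPE RANGE OF faglflexa-racct,
          ls_racct LIKE LINE OF lr_racct.
    DATA: BEGIN OF ls_amt,
            racct TYPE faglflexa-racct,
            hsl   TYPE faglflexa-hsl,
          END OF ls_amt,
          lt_amt LIKE STANDARD TABLE OF ls_amt.
    * build one ranges table covering the accounts of every group
    LOOP AT lt_groups INTO ls_grp.
      ls_racct-sign   = 'I'.
      ls_racct-option = 'EQ'.
      ls_racct-low    = ls_grp-racct.
      APPEND ls_racct TO lr_racct.
    ENDLOOP.
    * one single SELECT instead of one per group (guard against an empty range)
    IF NOT lr_racct IS INITIAL.
      SELECT racct hsl FROM faglflexa
        INTO TABLE lt_amt
        WHERE racct IN lr_racct.
    ENDIF.
    * then loop over lt_amt, look up each RACCT's group in lt_groups,
    * and COLLECT the per-group totals.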
    Performance Notes
    1.Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
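    For instance, a small sketch of both additions, using the SFLIGHT demo table:
    DATA: BEGIN OF ls_flight,
            carrid TYPE sflight-carrid,
            connid TYPE sflight-connid,
          END OF ls_flight,
          lt_flights LIKE STANDARD TABLE OF ls_flight,
          lt_carriers TYPE STANDARD TABLE OF sflight-carrid.
    * transfer at most ten rows from the database
    SELECT carrid connid
      INTO TABLE lt_flights
      FROM sflight
      UP TO 10 ROWS
      WHERE carrid = 'LH'.
    * let the database remove duplicates before transferring
    SELECT DISTINCT carrid
      INTO TABLE lt_carriers
      FROM sflight.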
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
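    A small sketch, using the SBOOK demo table, of letting the database do the arithmetic:
    DATA: lv_cnt TYPE i,
          lv_sum TYPE sbook-loccuram.
    * only the two result values travel over the network
    SELECT COUNT( * ) SUM( loccuram )
      INTO (lv_cnt, lv_sum)
      FROM sbook
      WHERE carrid = 'LH'.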
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
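    A minimal sketch of such a join on the SPFLI/SFLIGHT demo tables, listing only the required columns:
    DATA: BEGIN OF ls_conn,
            cityfrom TYPE spfli-cityfrom,
            cityto   TYPE spfli-cityto,
            fldate   TYPE sflight-fldate,
          END OF ls_conn,
          lt_conn LIKE STANDARD TABLE OF ls_conn.
    * join fields are transferred only once; just three columns are read
    SELECT p~cityfrom p~cityto f~fldate
      INTO TABLE lt_conn
      FROM spfli AS p INNER JOIN sflight AS f
        ON p~carrid = f~carrid
       AND p~connid = f~connid
      WHERE p~carrid = 'LH'.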
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
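    For example (sketch): selecting only the connections that actually have occupied flights, with the subquery evaluated entirely in the database:
    DATA: BEGIN OF ls_conn2,
            carrid TYPE spfli-carrid,
            connid TYPE spfli-connid,
          END OF ls_conn2,
          lt_conn2 LIKE STANDARD TABLE OF ls_conn2.
    SELECT carrid connid
      INTO TABLE lt_conn2
      FROM spfli
      WHERE EXISTS ( SELECT * FROM sflight
                       WHERE carrid = spfli~carrid
                         AND connid = spfli~connid
                         AND seatsocc > 0 ).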
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
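    A sketch of the FOR ALL ENTRIES pattern (note the guard: with an empty driver table, FOR ALL ENTRIES selects all rows):
    DATA: lt_spfli   TYPE STANDARD TABLE OF spfli,
          lt_sflight TYPE STANDARD TABLE OF sflight.
    SELECT * FROM spfli
      INTO TABLE lt_spfli
      WHERE carrid = 'LH'.
    IF NOT lt_spfli IS INITIAL.
      SELECT * FROM sflight
        INTO TABLE lt_sflight
        FOR ALL ENTRIES IN lt_spfli
        WHERE carrid = lt_spfli-carrid
          AND connid = lt_spfli-connid.
    ENDIF.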
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
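    A bare-bones sketch of this decoupled variant:
    DATA: ls_spfli  TYPE spfli,
          lv_cursor TYPE cursor.
    OPEN CURSOR lv_cursor FOR
      SELECT * FROM spfli
        WHERE carrid = 'LH'
        ORDER BY PRIMARY KEY.
    DO.
      FETCH NEXT CURSOR lv_cursor INTO ls_spfli.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
    * process ls_spfli line by line here
    ENDDO.
    CLOSE CURSOR lv_cursor.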
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The index search only supports conditions that describe the result in positive terms, for example, EQ or LIKE. Negative expressions like NE or NOT LIKE cannot be supported by a database index.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this is OR expressions at the outermost level of the condition. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
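    For example (sketch): both statements below read the buffered country table T005, but only the first can be served from the table buffer; the aggregate in the second forces a database access:
    DATA: ls_t005 TYPE t005,
          lv_cnt  TYPE i.
    * can be served from the table buffer
    SELECT SINGLE * FROM t005 INTO ls_t005 WHERE land1 = 'DE'.
    * bypasses the buffer: aggregate expressions always go to the database
    SELECT COUNT( * ) INTO lv_cnt FROM t005.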
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database systems other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundle the database operations resulting from the dialog in a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can run an application. Work processes are components that are able to execute an application (that is, one dialog step each). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program. This needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes, and is not restricted by the R/3 system architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
    Happy Reading...
    shibu

  • The danger of memory target in Oracle 11g - request for discussion.

    Hello, everyone.
    This is not a question, but kind of request for discussion.
    I believe that many of you heard something about automatic memory management in Oracle 11g.
    The concept is that Oracle manages the target size of SGA and PGA. Yes, believe it or not, all we have to do is just to tell Oracle how much memory it can use.
    But I have a big concern on this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
    So what would happen when Oracle dynamically changes the target size of PGA? Following is a simple demonstration of my concern.
    UKJA@ukja116> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    -- Configuration
    *.memory_target=350m
    *.memory_max_target=350m
    create table t1(c1 int, c2 char(100));
    create table t2(c1 int, c2 char(100));
    insert into t1 select level, level from dual connect by level <= 10000;
    insert into t2 select level, level from dual connect by level <= 10000;
    -- First 10053 trace
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2;
    alter session set events '10053 trace name context off';
    -- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
      vc       sys_refcursor;
      vs        varchar2(1000);
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 10000000 loop
        execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
              into va;
        if mod(idx, 1000) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it!');
            exit;
          end if;
        end if;
      end loop;
    end;
    -- As to alert log file,
    25000th execution
    26000th execution
    27000th execution
    28000th execution
    29000th execution
    30000th execution
    yep, I got it! <-- the pga target changed with 30000th hard parse
    -- Second 10053 trace for same query
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2;
    alter session set events '10053 trace name context off';
    With the above test case, I found that:
    1. Oracle invalidates the query when internal pga aggregate size changes, which is quite natural.
    2. With changed pga aggregate size, Oracle recalculates the cost. These are excerpts from the both of the 10053 trace files.
    -- First 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 11468 KB
    _smm_px_max_size                    = 28672 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    -- Second 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 13107 KB
    _smm_px_max_size                    = 32768 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    Bug Fix Control Environment
    The 10053 trace file clearly says that Oracle recalculates the cost of the query when the internal pga aggregate target size changes. So, there is a great danger of an unexpected plan change while Oracle dynamically controls the memory segments.
    I believe that this is a designed behavior, but the negative side effect is not negligible.
    I would just like to hear your opinions on this behavior.
    Do you think that this is acceptable? Or is this another great feature that nobody wants to use, like the automatic tuning advisor?
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

    I made a slight modification to my test case to get a mixed workload of hard parses and logical reads.
    *.memory_target=200m
    *.memory_max_target=200m
    create table t3(c1 int, c2 char(1000));
    insert into t3 select level, level from dual connect by level <= 50000;
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 1000000 loop
        -- try many patterns here!
        execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
        if mod(idx, 100) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          for p in (select ksppinm, ksppstvl
              from sys.xm$ksppi i, sys.xm$ksppcv v
              where i.indx = v.indx
              and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
              sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
          end loop;
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
            exit;
          end if;
        end if;
      end loop;
    end;
    /
    This test case showed the expected, reasonable result:
    100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    300th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    400th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    500th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 37748736
    __pga_aggregate_target = 58720256
    yep, I got it! pat1=83886080, pat2=58720256
    Oracle kept bouncing between the shared pool and buffer cache sizes, and at about the 1200th execution it suddenly stole some memory from the PGA target area to increase the db cache size.
    (I'm still in the dark on 11g's automatic memory target management. More research needed!)
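    Instead of polling x$ksppcv, the resize activity can also be watched directly through the 11g view v$memory_resize_ops; a sketch:
    select component, oper_type, oper_mode,
           initial_size/1024/1024 as initial_mb,
           final_size/1024/1024   as final_mb,
           end_time
    from   v$memory_resize_ops
    where  component in ('DEFAULT buffer cache', 'shared pool', 'PGA Target')
    order  by end_time desc;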
    I think this is very clear and natural behavior. I just want to point out that it could result in unwanted trouble in special cases, especially in combination with logic holes and bugs.
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

  • Export internal table to memory in User Exit FM

    Hi all,
    My scenario here is to export an internal table in one user exit FM and import it back in another user exit FM.
    I was trying to use
    Export lt_table to memory id 'LABEL'.
    then
    Import lt_table from memory id 'LABEL'.
    But then I hit an error in the IMPORT statement. How can I rectify this?
    Thanks. Answer will be rewarded.

    Refer to the below related threads
    Export an internal table to memory and import from memory into an internal
    http://help.sap.com/saphelp_erp2005/helpdata/en/fc/eb3bf8358411d1829f0000e829fbfe/frameset.htm
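    A minimal sketch of the round trip, assuming both user exits run in the same session (the usual reasons the IMPORT fails are a different session or a mismatch in the table declarations; the line type mara below is only an illustration):
    * In the exporting user exit FM:
    DATA lt_table TYPE STANDARD TABLE OF mara.   " illustrative line type
    EXPORT lt_table FROM lt_table TO MEMORY ID 'LABEL'.
    * In the importing user exit FM (identical declaration, identical ID):
    DATA lt_table TYPE STANDARD TABLE OF mara.
    IMPORT lt_table TO lt_table FROM MEMORY ID 'LABEL'.
    IF sy-subrc <> 0.
      " nothing (or an incompatible object) was exported under 'LABEL'
    ENDIF.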
    Regards,
    Santosh

  • Nothing added to the table of contents

    I'm using the "Biography" template (portrait only) in iBooks Author. According to Help,
    "By default, the table of contents includes all paragraphs in the book that use the Heading 1 and Heading 2 style, but you can include whichever styles you like."
    So I paste some text into the first two pages of my new book. I select a line of text, show the Styles drawer, and click Heading 1 or Heading 2.
    I choose Table of Contents from the sidebar, and there's nothing added to the table of contents. I preview it on my iPad. Same problem.
    Am I doing something wrong?

    Hello Nick,
    jQuery may help you create your own TOC and add custom links to it. Here are some references:
    http://tommdaly.wordpress.com/2013/06/27/customize-table-of-contents-remove-libraries-navigation-headers/
    http://sharepointsharpener.wordpress.com/2011/09/27/quick-fix-remove-libraries-lists-from-toc/
    You may also create a custom list and add links to it, then take the link from that list and place it in a web part wherever you want.
    Hope it helps.
    Hemendra: Yesterday is just a memory, tomorrow we may never see.
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

  • Why do the table borders in MS Word appear thicker in PDF?

    Hi,
    I am trying to convert an MS Word document to PDF using Adobe Acrobat 6.0 Professional. The Word document contains tables with thick (1.5 pt) outer borders and thin (1 pt) cell borders. When I convert it to PDF, all the table borders come out the same thickness, and thicker than they are in Word.
    I request your kind help in figuring out how to convert it to PDF so that the tables retain the original formatting done in Word.
    System info:
    OS: Windows XP Professional, Version 2002, Service Pack 2
    Processor: Intel Pentium M (1.70 Ghz)
    Ram: 512 MB
    With many thanks,
    Vinay

    One thing you might want to try is changing the printer resolution to 300 or 600 dpi, rather than the default 1200 dpi. I am working from memory, but I think it is under the Advanced tab, on a screen that looks like a list of items; one of them is the print resolution. The default of 1200 dpi is often the source of this problem.
    It may also be that Word and Acrobat extrapolate to the screen differently, so what you see in Word may not be what is actually produced.
    Not sure I have much more help to give, but see if that leads you somewhere. If you can't find it, let me know and I will look on my other machine.

  • PL/SQL cursor question (possible to pass in the table name?)

    I want to pass the name of a table to a cursor and select from it.
    Is this possible in PL/SQL?
    CURSOR get_rows(table_name IN VARCHAR2) IS
    SELECT * FROM table_name;
    The actual SQL statement that will be used is more complex.
    I have 18 tables for which the same query would work, varying only in the name of the table.
    I get an error when I try to do the above.

    >
    I am not quite sure how you can derive anything from the design given the example that has all other details stripped out
    >
    What Justin said is pretty clear
    >
    If you are really doing a SELECT * and you have 18 different tables, it seems unlikely that all tables would always have exactly the same set of columns. If they do, that is almost certainly an indication that the data model is incorrect.
    >
    A data model that includes 18 tables with the same set of columns has something wrong with it. You explained that by clarifying that it is only TWO columns in each of the 18 tables that is the same. Perhaps the tables have one-to-many relationships and the two columns are the key columns. That is different.
    >
    The 'select *' will actually pull back only two columns, both of which are primary keys (long). I need to select all rows into a data structure that is global that I can iterate through at a later time. So given that I can use a ref cursor and have a dynamic table name, how would I select into a data structure that is of two primary keys and make that global to be used in another method at a later time?
    >
    Why not use an actual table to store the column data? An index-organized table would be perfect for that use and won't use up valuable user memory.
    And by global do you mean the data needs to be accessible by a different session than the one that collected the data?
    And what do you mean by 'later'? Tomorrow or next week?
    The use case you describe sounds like you are having to update primary key values in tables that have parent-child foreign key constraints. Is that what you are doing? Can you do this offline and defer or disable the constraints, perform an update (rather than insert/delete) and then reenable the constraints?
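    For reference, a minimal sketch of the ref-cursor approach discussed above; the table and column names are assumptions for illustration, and dbms_assert guards the concatenated identifier:
    declare
      type t_ids is table of number index by pls_integer;
      l_key1  t_ids;
      l_key2  t_ids;
      l_cur   sys_refcursor;
      l_table varchar2(30) := 'MY_TABLE_01';   -- hypothetical table name
    begin
      -- sanitize the identifier before concatenating it into the SQL text
      open l_cur for
        'select key1, key2 from ' || dbms_assert.simple_sql_name(l_table);
      fetch l_cur bulk collect into l_key1, l_key2;
      close l_cur;
      dbms_output.put_line(l_key1.count || ' rows fetched');
    end;
    /
    Note that the collections live only for the duration of the call, which is why the reply suggests an index-organized table when the keys must be reused "later".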

  • Exporting an internal table to memory

    Hi
    I want to call a program from another program and use, in the caller, the values stored in an internal table of the called program.
    How can I export an internal table to a memory ID and then import it?
    Regards
    Arun

    So to be clear, for your requirement it would be:
    * Towards the end of the called program
      EXPORT it_itab TO MEMORY ID 'ZZ_MEM_ID'.
    * And then in the calling program, after the SUBMIT zzzz AND RETURN statement
      IMPORT it_itab FROM MEMORY ID 'ZZ_MEM_ID'.
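    A fuller round trip might look like this (a sketch; zzzz is the report name from the thread, sflight is just an illustrative line type, and the declaration must match on both sides):
    * Calling program:
    DATA it_itab TYPE STANDARD TABLE OF sflight.   " illustrative line type
    SUBMIT zzzz AND RETURN.
    IMPORT it_itab FROM MEMORY ID 'ZZ_MEM_ID'.
    IF sy-subrc = 0.
      FREE MEMORY ID 'ZZ_MEM_ID'.   " release the slot once the data is copied
    ENDIF.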
    Hope that helps.
    Brad

  • Save Internal table to memory - problem

    Hi everybody ,
    I need to extract one internal table to memory every 1st of the month.
    Then every day I will read this table for my report (in order to avoid running the same database selections every day).
    My original idea was EXPORT TO MEMORY ID and IMPORT,
    but it doesn't work the next day... with EXPORT the data is kept only until the end of the session...
    Is there any way other than storing the data in a Z database table?
    Thanks in advance

    Hi,
    You can use the following code to export to the database (EXPORT TO MEMORY is tied to the logon session, which is why it did not work the next day):
    * data variable required for background processing
    data: wa_indx type indx.
    * EXPORT internal table TO DATABASE cluster INDX, area XY, ID 'XYZ'
      export tab = <your table> to database indx(xy) from wa_indx client
      sy-mandt id 'XYZ'.
    The following code will import from the database and then clear the entry:
    * data variable required for background processing
    data: wa_indx type indx.
    * imports from the database the table saved by the extraction program
    IMPORT tab = <your table> FROM DATABASE indx(xy) TO wa_indx CLIENT sy-mandt
    ID 'XYZ'.
    * deletes the data to avoid wasting storage
    DELETE FROM DATABASE indx(xy)
      CLIENT sy-mandt
      ID 'XYZ'.
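    Since INDX is a transparent database table, the exported snapshot survives across sessions and days. A sketch of the daily check (assuming the standard INDX structure, whose AEDAT field is filled on export):
    * Daily report side: read the monthly snapshot and fall back if absent.
    import tab = <your table> from database indx(xy) to wa_indx client
      sy-mandt id 'XYZ'.
    if sy-subrc = 0.
      write: / 'Snapshot written on', wa_indx-aedat.
    else.
      " no snapshot yet - run the monthly extraction first
    endif.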
    Regards,
    Samson Rodrigues.

  • Pass Internal Table to Memory

    Dear Experts.
    I have the following problem.
    I am using a user exit for Travel that is called from a standard program via transaction TRIP. The standard program uses a structure (p_t_req_head) that is no longer available when I enter the user exit, and I need to use this structure there.
    I have read about SAP and ABAP memory, but how can I pass an internal table via SAP or ABAP memory from the standard program to the user exit?
    When I debug the standard program and it calls the user exit, I can enter ()namestructure, for example ()p_t_req_head, and see the data of the structure. But how can I do the same in ABAP source code?
    Regards

    Hi,
    You can try the EXPORT and IMPORT statements as follows:
    DATA : t_itab LIKE TABLE OF spfli.
    EXPORT t_itab TO MEMORY ID 'ABCD'.
    After sending the internal table to ABAP memory, you retrieve it with IMPORT in the called session:
    IMPORT t_itab FROM MEMORY ID 'ABCD'.
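    EXPORT/IMPORT only helps if you can add the EXPORT to the standard program (e.g. via an enhancement). Otherwise, what the debugger trick ()p_t_req_head does can be approximated inside the user exit with a dynamic ASSIGN. This is a sketch: it only reaches global data of the calling program, and SAPMxxxx is a placeholder for the actual main program behind TRIP (check its name in the debugger):
    FIELD-SYMBOLS <fs_head> TYPE any.
    * Reach into the calling program's global data area by name.
    ASSIGN ('(SAPMxxxx)P_T_REQ_HEAD') TO <fs_head>.
    IF sy-subrc = 0.
      " <fs_head> now points at the standard program's structure
    ENDIF.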
