Sys tables from flashback data archive?

Hi,
I am prototyping flashback data archive and I have been following some tutorials.
When I query
user_flashback_archive_tables
I can see the table for which I have configured FBDA. I can see also the name of the History table, like SYS_FBA_HIST_xxxxx (xxxxxx being the object_id of the table).
But I just cannot find the table in USER_TABLES or USER_TAB_PARTITIONS.
The history seems to be getting stored... The sql-developer display the history in the "flashback" tab. But why aren't the SYS_FBA_HIST_xxxx tables getting created?
(I am using 11gR2 running on EL5 Linux)

The SYS_FBA_xxx belongs to SYS user. To be able to see them you must query DBA_TABLES and have SELECT privilege on DBA_TABLES or role SELECT_CATALOG_ROLE.You can also just try to connect as SYS:
SQL> select table_name from dba_tables where table_name like 'SYS_FBA%';
TABLE_NAME
SYS_FBA_HIST_73563
SYS_FBA_TCRV_73563
SYS_FBA_DDL_COLMAP_73563
SYS_FBA_DL
SYS_FBA_USERS
SYS_FBA_PARTITIONS
SYS_FBA_TRACKEDTABLES
SYS_FBA_BARRIERSCN
SYS_FBA_TSFA
SYS_FBA_FA
10 rows selected.Edited by: P. Forstmann on 7 oct. 2010 20:52

Similar Messages

  • How to backup flashback data archive enabled tables -oracle 11g?

    1) Whether the flashback data archive can be backed up using the following options in oracle? Any pointer from the documentation would be much appreciated.
         RMAN
         Data pump
    2) How to move the flashback data archive table from one database to another? The reason why I am asking this question is that the history data for flashback data archive table is managed by undo tablespace. In this case, if I simply export the flashback data archive enabled tables to another database, will I able to view the history information?
    Regards,
    Richard

    Hi Morgan,
    Thanks for your response. Our intention is that to configure the total recall feature in the reporting instance for historical data view. Let's say , we are mining the changed data from the OLTP instance to reporting instance via logical standby. In this case, the history data will be maintained in the reporting instance?
    Regards,
    Richard

  • Can I add multple tables to a single Flashback Data archive

    Hi ,
    I want to add mutilple tables of a schema to a single Flashback Data archive . Is this possible ?
    I dont want to create multiple Flash back archives for each table
    Also can i add an entire schema or a tablespace to a flash back archive created ?
    Thanks,
    Sachin K

    Do adding multiple tables to a single flashback archive feasible in terms of space ?Yes, you can use. Multiple tables can share the same policies for data retention and purging. Moreover, since an FBDA consists of one or more tablespaces (or subparts thereof), multiple FBDAs can be constructed, each with a different but specific retention period. This means it’s possible to construct FBDAs that support different retention needs. Here are a few common examples:
    90 days for common short-term historical inquiries
    One full year for common longer-term historical inquiries
    Seven years for U.S. Federal tax purposes
    20 years for legal purposes
    How is the space for archiving data to a flashback archive generally estimated ?[url http://docs.oracle.com/cd/E18283_01/server.112/e16508/process.htm#BABGGHEI]Flashback Data Archiver Process (FBDA)
    FBDA automatically manages the flashback data archive for space, organization, and retention. Additionally, the process keeps track of how far the archiving of tracked transactions has occurred.
    It (FBDA) wakes up every 5 minutes by default (*1), checks Tablespace quotas every hour, creates history tables only when it has to and rests otherwise if not called to work. The process adjusts the wakeup interval dynamically based on the workload.
    (*1) *1 Contradicting observations to MOS Note 1302393.1 were made. The note says that FBDA wakes up every 6 seconds as of Oracle 11.2.0.2, but a trace on 11.2.0.3 showed:
    WAIT #140555800502008: nam='fbar timer' ela= 300004936 sleep time=300 failed=0 p3=0 obj#=-1 tim=1334308846055308
    5.1 Space management
    To ease the space management for FDA we recommend following these rules:
    - Create an archive for different retention periods
    o to optimally use space and take advantage of automatic purging when retention expires
    - Group tables with same retention policy (share FDAs)
    o to reduce maintenance effort of multiple FDAs with same characteristics
    - Dedicate Tablespaces to one archive and do not set quotas
    o to reduce maintenance/monitoring effort, quotas are only checked every hour by FBDA
    http://www.trivadis.com/uploads/tx_cabagdownloadarea/Flashback-Data-Archives-Rechecked-v1.4_final.pdf
    Regards
    Girish Sharma

  • How to get internal table from SAP Data Provider C#

    Hello.
    ABAP:
       DATA: lt_t001 TYPE TABLE OF t001.
       DATA: url(1000) TYPE c.
      SELECT * INTO TABLE lt_t001 FROM t001.
      CALL FUNCTION 'DP_CREATE_URL'
        EXPORTING
          type                 = 'APPLICATION'
          subtype           = 'X-R3TABLE'
        TABLES
          data                 = lt_t001
        CHANGING
          url                    = url
        EXCEPTIONS
          OTHERS           = 4.
    C#:
    using SAPDataProvider;
    using SAPTableFactoryCtrl;
    public void SetDataFromUrl(string url)
                SAPDataProviderClass p = new SAPDataProviderClass();
                p.SetDataFromURL("APPLICATION", "X-R3TABLE", url);
                ISapDPR3Table tbl = p.GetDataAsR3Table("APPLICATION", "X-R3TABLE");
                SAPTableFactoryClass tf = new SAPTableFactoryClass();
                Table tb = (Table)tf.NewTable();
                tb.ISAPrfcITab = tbl.DataTable; // Exception !!!!!!
    How to get internal table from SAP Data Provider ?

    Hi Sergey,
    I'm trying to do the same, have you found a solution to solved it?
    thanks for your help.
    Regards.
    Jonathan

  • Union tables from multiply data srouces in OBIEE

    Hi Pros,
    Recenetly I am doing a POC about union the same structure table from multiply data sources through OBIEE, uptill now I still can't make that out, I have tried below ways :
    Precondition: I have configured those two tables under one physical folder with two connection pool
    1. When do the union in physical level, can not implement since either can not create a view union two tables or using SQL.
    2. When do the union in Logical level, can not implement also
    I know that this kind of data integration should be done before we import into OBIEE model, but there is some policy that we can't integrate, so I just wondering whether OBIEE has this functionlity that can union data from different data source. Please advise.
    thanks

    Hi Srinivas
    After a second thought, I am wondering what you mean in the post, but the situation is that there are two tables in different data sources, Table-A and Table-B in the Physical layer, they are not yet integrated into one table(let's say Table-C), you mean add a data source indictor column into a table in BMM layer, I want to know how can we integrate data (UNION), because from my perspective, if TABLE-A have bleow column ID,NAME,ADDRESS, while the TABLE-B have the same, the difference is the data bwtween them, even we create a logicl table in BMM, drag those those columns, we only get ID,NAME,ADDRESS,ID#,NAME#,ADDRESS#, but the table structure is different.
    The problem is how to create a logic table like TABLE-D (ID,NAME,ADDRESS) which have the data both from TABLE-A and TABLE-B in Physical Layer. While no need to filter by prompt about the data sources. We need to show the data altoghter.
    Please advise if I missing you point.
    thanks
    Edited by: Henry He on Dec 28, 2010 3:21 PM

  • Recovering table from Flashback getting error

    Recovering table from Flashback getting error what is the reson
    1* FLASHBACK TABLE "BIN$e7TIAEKgSYO2WJ0L+hNuTA==$0" TO BEFORE DROP
    SQL> /
    FLASHBACK TABLE "BIN$e7TIAEKgSYO2WJ0L+hNuTA==$0" TO BEFORE DROP
    ERROR at line 1:
    ORA-38312: original name is used by an existing object

    This droped object original name is now taken by some other object in database so u can give this cmd like
    FLASHBACK TABLE TEST TO BEFORE DROP RENAME TO TEST2;
    it recover the table test and give the name as test2 because test table is already exist and that is created while this table is in recyclebin.

  • HELP ME! Creating a table from a data file

    Hi
    I'm writing an application for data visualization. The user can press the "open file" button and a FileChooser window will come up where the user can select any data file. I would like to take that data file and display it as a table with rows and columns. The user needs to be able to select the coliumns to create a graph. I have tried many ways to create a table, but nothing seems to work! Can anyone help me?! I just want to read from the data file and create a spreadsheet type table... I won't know how many rows and columns I'll need in advance, so the table needs to be dynamic!
    If you have ANY tips, I'd REALLY appreciated.....

    I won't know how many rows and columns I'll need in advance, so the table needs to be dynamic!You may use a List (ArrayList, LinkedList or Vector) for that.
    Lists allow you (dynamically) add elements.

  • Create table from external data with dates

    I have a CSV that looks somewhat like this:
    abcuser,12345,5/12/2012,5,250.55
    xyzuser,67890,5/1/2012,1,50
    ghjuser,52523,1/1/1900,0,0
    When I create the external table, then query it I get a date error:
    CREATE TABLE xtern_ipay
    userid VARCHAR2(50),
    acctnbr NUMBER(20, 0),
    datelastused DATE,
    number_rtxns NUMBER(12, 0),
    amtused NUMBER(12, 0)
    organization external ( TYPE oracle_loader DEFAULT directory "XTERN_DATA_DIR"
    ACCESS parameters (
    records delimited BY newline fields terminated BY "," )
    location ('SubscriberStatistics.csv') ) reject limit UNLIMITED;
    Error I see in the reject log:
    Field Definitions for table XTERN_IPAY
    Record format DELIMITED BY NEWLINE
    Data in file has same endianness as the platform
    Rows with all null fields are accepted
    Fields in Data Source:
    USERID CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
    ACCTNBR CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
    DATELASTUSED CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
    NUMBER_RTXNS CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
    AMTUSED CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
    error processing column DATELASTUSED in row 1 for datafile g:\externaltables\SubscriberStatistics.csv
    ORA-01858: a non-numeric character was found where a numeric was expected
    error processing column DATELASTUSED in row 2 for datafile g:\externaltables\SubscriberStatistics.csv
    ORA-01858: a non-numeric character was found where a numeric was expected
    error processing column DATELASTUSED in row 3 for datafile g:\externaltables\SubscriberStatistics.csv
    ORA-01858: a non-numeric character was found where a numeric was expected
    Any ideas on this? I know I need to tell oracle the format of the date on the external file, but I can't figure it out.

    Try this:
    CREATE TABLE xtern_ipay
       userid         VARCHAR2 (50)
    , acctnbr        NUMBER (20, 0)
    , datelastused   DATE
    , number_rtxns   NUMBER (12, 0)
    , amtused        NUMBER (12, 2)
    ORGANIZATION EXTERNAL
        ( TYPE oracle_loader DEFAULT DIRECTORY "XTERN_DATA_DIR"
    ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY "," MISSING FIELD VALUES ARE NULL
    (   userid
      , acctnbr
      , datelastused DATE 'mm/dd/yyyy'
      , number_rtxns
      , amtused)
    location ('SubscriberStatistics.csv') ) reject LIMIT unlimited;
    select * from xtern_ipay;
    USERID                                                ACCTNBR DATELASTU NUMBER_RTXNS    AMTUSED
    abcuser                                                 12345 12-MAY-12            5     250.55
    xyzuser                                                 67890 01-MAY-12            1         50
    ghjuser                                                 52523 01-JAN-00            0          0
    {code}
    Sorry I had to correct the previous statement again for the date format and for the column amtused that was defined without decimals.
    Regards
    Al
    Edited by: Alberto Faenza on May 31, 2012 6:34 PM
    wrong date format mm/dd/yy instead of dd/mm/yy
    Edited by: Alberto Faenza on May 31, 2012 6:40 PM
    Fixed again the date format and the amtused defined with 2 decimals                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   

  • Problem accessing sys table from user's schema

    Hi,
    I had a store procedure whose owner is say user1.Inside of that store procedure i am executing the following select query.
    select * from sys.all_coloumns where owner = 'USER1';
    This is resulting me "no data found".
    When i am executing the same select query from SQL*plus it is giving me the proper output.
    I think this is happening because of less systeme privileges/role assigned to schema USER1.
    Can you please suggest which priviledges i need to grant, so that my proc. will work fine.

    User1 needs select rights on sys.all_columns directly.
    Not via the DBA role.

  • How to find tables from extracted data using lo cockpit

    I have extracted data form 2lis_02_scl using Lo cockpit and replicated in sap bw.
    Now i want to know from which tables the data was extracted .pls giv info.
    Thanks in advance.
    Regards,
    Hari Reddy

    One quick way - search Help.
    Tables are EKKO, EKPO, EKET , EKPA
    http://help.sap.com/saphelp_nw70/helpdata/en/8d/bc383fe58d5900e10000000a114084/content.htm

  • Create Table from XML Data

    XML info: D:\XMLDATA\mytable.xml
    create directory XML_dir as 'D:\XMLDATA';
    <INFO>
    <Column_Name>Col1</Column_Name>
    <Data_Type>Char(1)</Data_Type>
    </INFO>
    <INFO>
    <Column_Name>Col2</Column_Name>
    <Data_Type>VARCHAR2(50)</Data_Type>
    </INFO>
    I need to create a table based on the XML data.
    Create Table mytable
    Col1 Char(1),
    Col2 Varchar2(50)
    How to read and execute the xml data to create a table.
    Thanks!    

    Something like this :
    SQL> declare
      2 
      3    v_xmlinfo      clob;
      4    v_ddl          varchar2(32767) := 'CREATE TABLE mytable ( #COLUMN_LIST# )';
      5    v_column_list  varchar2(4000);
      6 
      7  begin
      8 
      9    v_xmlinfo := dbms_xslprocessor.read2clob('TEST_DIR', 'info.xml');
    10 
    11    select column_list
    12    into v_column_list
    13    from xmltable(
    14           'string-join(
    15              for $i in /INFO
    16              return concat($i/Column_Name, " ", $i/Data_Type)
    17            , ", "
    18            )'
    19           passing xmlparse(content v_xmlinfo)
    20           columns column_list varchar2(4000) path '.'
    21         ) ;
    22 
    23 
    24    v_ddl := replace(v_ddl, '#COLUMN_LIST#', v_column_list);
    25    --dbms_output.put_line(v_ddl);
    26 
    27    execute immediate v_ddl;
    28 
    29  end;
    30  /
    PL/SQL procedure successfully completed
    SQL> select * from mytable;
    COL1 COL2

  • Excel Inplace - Update SAP R/3 tables from Excel Data

    Hi,
    I developed a report displaying using Excel inplace.  After user finished editing the Excel workingsheet, I called method get_ranges_data of interface I_OI_SPREADSHEET to get data contents.  But this method or almost all the methods of this interface requires user hit a enter key after edit a cell.  Without hit a enter key or click different cell, this method will not be able to get any data contents. 
    Is there a way I can force a user click or enter event?
    An example is apprieciated.

    I have not tested this, but you might try the following.
    The inline excel should be created from a template. And in the excel VB code should exists which call a customer event in Workbook_SheetChange. This code is executed each time a change occurs on the worksheet.
    so in you r worksheet you should have the following code:
    Private Sub Workbook_SheetChange(ByVal Sh As Object, _
            ByVal Source As Range)
        ' runs when a sheet is changed
      Call ThisWorkbook.Container.SendCustomEvent("DATA_CHANGE")
    End Sub
    this event you can catch in SAP, and reread the data from excel.
    so now each change in excel triggers an event in SAP.
    Not sure it works, but worth a try.
    Joris.

  • Update Table from view data

    I want o update the Table using the view of a table of different schema. A dblink is created between two schema's ?
    Kindly suggest the method to implement this ?
    What Form type should i use?
    Its is possible to use collections?
    Sanjay

    user12957777 wrote:
    I want o update the Table using the view of a table of different schema. A dblink is created between two schema's ?
    Kindly suggest the method to implement this ?
    What Form type should i use?
    What do you mean by "connection"?
    If you use a single database then just add schema's preffix and that's all.
    under table owner:
    insert into <table_name> select * from <schema>.<view>under view owner:
    insert into <schema>.<table_name> select * from <view>But make sure your users get necessary grants.

  • Data archival and purging for OLTP database

    Hi All,
    Need your suggestion regarding data archival and purging solution for OLTP db.
    currently, we are planning to generate flat files from table..before purging the inactive data and move them to tapes/disks for archiving then purge the data from system. we have many retention requirements and conditions before archival of data. so partition alone is not sufficient.
    Is there any better approach for archival and purging other than this flat file approach..
    thank you.
    regards,
    vara

    user11261773 wrote:
    Hi All,
    Need your suggestion regarding data archival and purging solution for OLTP db.
    currently, we are planning to generate flat files from table..before purging the inactive data and move them to tapes/disks for archiving then purge the data from system. we have many retention requirements and conditions before archival of data. so partition alone is not sufficient.
    Is there any better approach for archival and purging other than this flat file approach..
    thank you.
    regards,
    varaFBDA is the better option option .Check the below link :
    http://www.oracle.com/pls/db111/search?remark=quick_search&word=flashback+data+archive
    Good luck
    --neeraj                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           

  • Importing internal table from one program to another program

    Hi everybody,
    i have one small doubt.
    i am using submit statement and passing the values from this program to another program selection screen. in that program logic is written.In that program one internal table values are being exported to the memory id of that program. now i have to import that internal table values into my program by using import statement. i am using the following syntax
    import itab from menory id 'program name'.
    but i am getting an error saying program name is unknown.
    what is the exat syntax for this .
    thanking you,
    giri.

    hi,
    check these statements.
    IMPORT - Get data
    Variants:
    1. IMPORT obj1 ... objn FROM DATA BUFFER f.
    2. IMPORT obj1 ... objn FROM INTERNAL TABLE itab.
    2. IMPORT obj1 ... objn FROM MEMORY.
    3. IMPORT obj1 ... objn FROM SHARED MEMORY itab(ar) ID key.
    4. IMPORT obj1 ... objn FROM SHARED BUFFER itab(ar) ID key.
    5. IMPORT obj1 ... objn FROM DATABASE dbtab(ar) ID key.
    6. IMPORT obj1 ... objn FROM DATASET dsn(ar) ID key.
    7. IMPORT obj1 ... objn FROM LOGFILE ID key.
    8. IMPORT DIRECTORY INTO itab FROM DATABASE dbtab(ar) ID key.
    9. IMPORT (itab) FROM ... .
    In some cases, the syntax rules that apply to Unicode programs are different than those for non-Unicode programs. For more details, see Storing Cluster Tables.
    Variant 1
    IMPORT obj1 ... objn FROM DATA BUFFER f.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ACCEPTING PADDING
    4. ... ACCEPTING TRUNCATION
    5. ... IGNORING STRUCTURE BOUNDARIES
    6. ... IGNORING CONVERSION ERRORS
    7. ... REPLACEMENT CHARACTER c
    8. ... IN CHAR-TO-HEX MODE
    9. ... CODE PAGE INTO f1
    10. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See You Cannot Use Implicit Field Names in Clusters.
    Effect
    Imports the data objects obj1 ... objn from the data buffer declared. The data buffer must be of type XSTRING . The data objects obj1 ... objn can be fields, structures, complex structures, or tables. The system imports all the data that has been stored in the data buffer f using the EXPORT ... TO DATA BUFFER statement and is listed here. It also checks that the structure used in the IMPORT statement matches the one in the EXPORT statement.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged. (In some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. The contents of all the objects remain unchanged.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is stored in the field f.
    Addition 3
    ... ACCEPTING PADDING
    Effect
    This addition allows you to append new fields to the end
    of structures, sub-structures, and internal tables. The IMPORT statement fills the additional fields with initial values; make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or to map byte-type fields to XSTRING-type fields.
    Addition 4
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR
    fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 5
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is
    relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 3 (enlarge structure) or addition 4 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Addition 6
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition prevents the system from triggering a
    runtime error, if an error occurs when the character set is converted. '#' is used as a replacement character.
    Addition 7
    ... REPLACEMENT CHARACTER c
    Effect
    The replacement character is used if a particular
    character cannot be converted when the character set is converted.
    This addition can only be used in conjunction with addition 6.
    Addition 8
    ... IN CHAR-TO-HEX MODE
    Effect
    Not all character-type fields are converted. To convert
    a field, you must create a field (or structure) that is identical to the exported field or structure, except that all its character-type components must be replaced with hexadecimal fields.
    You can only use this addition in Unicode programs, to allow you to import camouflaged binary data as single-byte characters.
    Moreover, you cannot use this addition in conjunction with the additions 3, 4, 5, 6, or 7.
    Addition 9
    ... CODE PAGE INTO f1
    Effect
    The code page of the exported data is stored in the
    character-type field f1 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition.
    Addition 10
    ... ENDIAN INTO f2
    Effect
    The byte order (LITTLE or BIG) of the
    exported data is stored in the field f2 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition. The field f2 must have the type ABAP_ENDIAN, which is defined in the type group ABAP. For this reason, the type group ABAP must be included in the ABAP program using a TYPE-POOLS statement.
    Variant 2
    IMPORT obj1 ... objn FROM INTERNAL TABLE itab.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ACCEPTING PADDING
    4. ... ACCEPTING TRUNCATION
    5. ... IGNORING STRUCTURE BOUNDARIES
    6. ... IGNORING CONVERSION ERRORS
    7. ... REPLACEMENT CHARACTER c
    8. ... IN CHAR-TO-HEX MODE
    9. ... CODE PAGE INTO f1
    10. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See No implicit field names in cluster.
    Effect
    Imports the data objects obj1 ... objn (fields, structures, complex structures, or tables) from the specified internal table itab. The first column in the internal table must be of the predefined type INT2 and the second must be type X. To define the first column you must refer to a data element in the ABAP Dictionary that has the predefined type INT2.
    All data that was stored in the internal table itab using EXPORT ... TO INTERNAL TABLE and listed, is imported. The system checks that the EXPORT and IMPORT structures match.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the specified data cluster were imported, the rest remain unchanged (it is possible that no data object was imported).
    SY-SUBRC = 4:
    The data objects could not be imported.
    The contents of all listed objects remain unchanged
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    Places the object in the field f.
    Addition 3
    ... ACCEPTING PADDING
    Effect
    This addition allows you to add new fields to the ends
    of structures, even to substructures and internal tables (the additional fields are filled with initial value during the IMPORT). It also allows you to increase the size of existing fields (C, N, X, P, I1, and I2) and to map Char fields to STRING type fields or byte fields to XSTRING type fields.
    Addition 4
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR
    field or omit the last component on the highest level (till Release 4.6 this was possible without specifying an addition).
    Addition 5
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the page order is
    relevant, that is any substructures match. With this addition, the system also ignores alignment changes arising from the Unicode conversion (for example, due to subsequent insertion of named includes).
    This addition rules out any subsequent structural enhancements (addition 3) or structural shortening (addition 4) because with this addition it is the structural limits and include limits that are to be ignored.
    As from Release 6.10, the include information will also be stored in the dataset, so that it is possible to also check whether the includes match, that is substructures and includes (named or unnamed) are treated the same. When importing data that was exported in a Release lower than 6.10, the includes are not checked.
    Addition 6
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition has the effect that an error in the
    character set conversion does not cause a runtime error. The system uses "#" as a replacement character.
    Addition 7
    ... REPLACEMENT CHARACTER c
    Effect
    The system uses the specified replacement character if a
    character cannot be converted during a character set conversion. If this addition is not specified, the system uses "#" as a replacement character.
    This addition can only be used in conjunction with addition 6.
    Addition 8
    ... IN CHAR-TO-HEX MODE
    Effect
    No character type fields are converted. For this you
    must create a field or structure that is identical to the exported field or exported structure, except that all character type fields must be replaced with hexadecimal fields.
    This addition, which is only allowed in programs with a set Unicode flag, allows you to import binary data disguised as single byte characters. This addition cannot be used in conjunction with additions 3, 4, 5, 6, and 7.
    Addition 9
    ... CODE PAGE INTO f1
    Effect
    The codepage of the exported data is stored in the
    character-type field f1 (for example, to be able to analyze the data imported with the addition IN CHAR-TO-HEX MODE).
    Addition 10
    ... ENDIAN INTO f2
    Effect
    The byte order (LITTLE or BIG) of the
    exported data is stored in the field f2 (for example, to be able analyze the data imported using the addition IN CHAR-TO-HEX MODE). The field f2 must be of type ABAP_ENDIAN, defined in type group ABAP. You must therefore include the type group ABAP in the ABAP program with a TYPE-POOLS statement.
    Variant 3
    IMPORT obj1 ... objn FROM MEMORY.
    Extras:
    1. ... = f (for each object to be imported) 2. ... TO f (for each object to be imported)
    3. ... ID key
    4. ... ACCEPTING PADDING
    5. ... ACCEPTING TRUNCATION
    6. ... IGNORING STRUCTURE BOUNDARIES
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See You Must Enter Identification and Cannot Use Implicit Field Names inClusters
    Effect
    Imports data objects obj1 ... objn (fields, structures, complex structures or tables) from a data cluster in the ABAP memory (see EXPORT). Reads in all data without an ID that was exported to memory with "EXPORT ... TO MEMORY.". In contrast to the variant IMPORT FROM DATABASE, it does not check that the structure matches in EXPORT and IMPORT.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because the ABAP memory was empty.
    The contents of all objects remain unchanged.
    Note
    You should always use the addition 3 (... ID key) with the statement. Otherwise, the effect of the variant is not certain (EXPORT statements in different parts of a program overwrite each other in the ABAP memory), since it exists only for reasons of compatibility with R/2.
    Additional methods for selecting and deleting data clusters in the ABAP memory are provided by the system class CL_ABAP_EXPIMP_MEM.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Addition 3
    ... ID key
    Effect
    Imports only data stored in ABAP memory under the ID key.
    Notes
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Addition 4
    ... ACCEPTING PADDING
    Effect
    This addition allows you to append new fields to the end of structures, sub-structures, and internal tables. The IMPORT statement fills the additional fields with initial values; make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or to map byte-type fields to XSTRING-type fields.
    Addition 5
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR field, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 6
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 3 (enlarge structure) or addition 4 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Related
    EXPORT TO MEMORY, DELETE FROM MEMORY, FREE MEMORY
    Variant 4
    IMPORT obj1 ... objn FROM SHARED MEMORY itab(ar) ID key.
    Extras:
    1. ... = f (for each object to be exported) 2. ... TO f (for each object to be exported)
    3. ... CLIENT g (before ID key)
    4. ... TO wa (after itab(ar) or ID key )
    5. ... ACCEPTING PADDING
    6. ... ACCEPTING TRUNCATION
    7. ... IGNORING STRUCTURE BOUNDARIES
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See You Cannot Use Implicit Field Names in Clusters and You Cannot Use Table Work Areas.
    Effect
    Imports the data objects obj1 ... objn (fields, structures, complex structures, or tables) from shared memory. The data objects are read using the ID key from the area ar in the table itab - c.f. EXPORT TO SHARED MEMORY). You must use itab to specify a database table although the system reads from a memory table with the appropriate structure.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged. (In some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. You may have used the wrong ID. The contents of all the objects remain unchanged.
    Notes
    The table dbtab named according to SHARED MEMORY must be declared using TABLES (except in addition 2).
    The structure of fields (field symbols and internal tables) to be imported must match the structure of the objects exported in the dataset. The objects must be imported under the same names as those under which they were exported. Otherwise, they will not be imported.
    The key length consists of: the client (3 digits, but only if tab is client-specific); area (2 characters); ID; and line number (4 bytes). It must not exceed 64 bytes - that is, the ID must not be longer than 55 characters, if the table is client- specific.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the shared memory are provided by the system class CL_ABAP_EXPIMP_SHMEM.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is stored in the field f.
    Addition 3
    ... CLIENT g (before ID key)
    Effect
    The data is imported from client g (provided the import/export table is tab client-specific). The client, g must be a character-type data object (but not a string).
    Addition 4
    ... TO wa (after itab(ar) or ID key)
    Effect
    You need to use this addition if user data fields have been stored in the application buffer and are to be read from there. The work area wa is used instead of the table work area. The target area must correspond to the structure of the called table tab.
    Addition 5
    ... ACCEPTING PADDING
    Effect
    This addition allows you to: append new fields to the end of structures, sub-structures, and internal tables. The IMPORT statement fills the additional fields with initial values; make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or to map byte-type fields to XSTRING-type fields.
    Addition 6
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 7
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 4 (enlarge structure) or addition 5 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Related
    EXPORT TO SHARED MEMORY, DELETE FROM SHARED MEMORY
    Variant 5
    IMPORT obj1 ... objn FROM SHARED BUFFER itab(ar) ID key.
    Extras:
    1. ... = f (for each object to be exported) 2. ... TO f (for each object to be exported)
    3. ... CLIENT g (before ID key)
    4. ... TO wa (last addition or after itab(ar))
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See Cannot Use Implicit Fieldnames in Clusters und Cannot Use Table Work Areas.
    Effect
    Imports data objects obj1 ... objn (fields or
    tables) from the cross-transaction application buffer. The data objects are read in the application buffer using the ID key of the area ar of the buffer area for the table itab (see EXPORT TO SHARED BUFFER). You must use dbtab to specify a database table although the system reads from a memory table with an appropriate structure.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this means that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Example
    Import two fields and an internal table from the application buffer with the structure INDX:
    TYPES: BEGIN OF ITAB3_LINE,
             CONT(4),
           END OF ITAB3_LINE.
    DATA: INDXKEY LIKE INDX-SRTFD VALUE 'KEYVALUE',
          F1(4),
          F2(8) TYPE P DECIMALS 0,
          ITAB3 TYPE STANDARD TABLE OF ITAB3_LINE,
          INDX_WA TYPE INDX.
    Import data.
    IMPORT F1 = F1 F2 = F2 ITAB3 = ITAB3
           FROM SHARED BUFFER INDX(ST) ID INDXKEY TO INDX_WA.
    After import, the data fields INDX-AEDAT and
    INDX-USERA in front of CLUSTR are filled with
    the values in the fields before the EXPORT
    statement.
    Notes
    You must declare the table dbtab, named after DATABASE using a TABLES statement.
    The structure of the fields, structures, and internal tables to be imported must match the structure of the objects exported to the dataset. Moreover, the objects must be imported with the same name used to export them. Otherwise, the import is not performed.
    The maximum total key length is 64 bytes. It must include: a client if the table is client-specific (3 characters); an area (2 characters); identification; and line counter (4 bytes). This means that the number of characters available for the identification of a client-specific table is 55 characters.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the cross-transaction application buffer are provided by the system class CL_ABAP_EXPIMP_SHBUF.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in the field f
    Addition 3
    ... CLIENT g (after dbtab(ar))
    Effect
    Takes the data from the client g (if the import/export table dbtab is client-specific). The client g must be a character-type data object (but not a string).
    Addition 4
    ... TO wa (as the last addition or after itab(ar))
    Effect
    You need to use this addition if you want to save user data fields in the application buffer and then read them from there later. The system uses a work area wa instead of a table work area. The target area must have the same structure as the table tab.
    Example
    DATA: INDX_WA TYPE INDX,
          F1.
    IMPORT F1 = F1 FROM SHARED BUFFER INDX(AR)
                   CLIENT '001' ID 'TEST'
                   TO INDX_WA.
    WRITE: / 'AEDAT:', INDX_WA-AEDAT,
           / 'USERA:', INDX_WA-USERA,
           / 'PGMID:', INDX_WA-PGMID.
    Variant 6
    IMPORT obj1 ... objn FROM DATABASE dbtab(ar) ID key.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... CLIENT g (before ID key )
    4. ... USING form
    5. ... TO wa (last addition or after dbtab(ar))
    6. ... MAJOR-ID id1 (instead of ID key)
    7. ... MINOR-ID id2 (with MAJOR-ID id1 )
    8. ... ACCEPTING PADDING
    9. ... ACCEPTING TRUNCATION
    10. ... IGNORING STRUCTURE BOUNDARIES
    11. ... IGNORING CONVERSION ERRORS
    12. ... REPLACEMENT CHARACTER c
    13. ... IN CHAR-TO-HEX MODE
    14. ... CODE PAGE INTO f1
    15. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Implicit Fieldnames in Clusters and Cannot Use Table Work Areas.
    Effect
    Imports data objects obj1 ... objn (fields, structures, complex structures, or tables) from the data cluster with ID key in area ar of the database table dbtab (see EXPORT TO DATABASE).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that not data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Example
    Import two fields and an internal table:
    TYPES: BEGIN OF TAB3_TYPE,
              CONT(4),
           END OF TAB3_TYPE.
    DATA: INDXKEY LIKE INDX-SRTFD,
          F1(4), F2 TYPE P,
          TAB3 TYPE STANDARD TABLE OF TAB3_TYPE WITH
                    NON-UNIQUE DEFAULT KEY,
          WA_INDX TYPE INDX.
    INDXKEY = 'INDXKEY'.
    IMPORT F1   = F1
           F2   = F2
           TAB3 = TAB3 FROM DATABASE INDX(ST) ID INDXKEY
           TO WA_INDX.
    Notes
    You must declare the table dbtab, named after DATABASE, using the TABLES statement (except in addition 5).
    The structure of fields, field strings and internal tables to be imported must match the structure of the objects exported to the dataset. In addition, the objects must be imported under the same name used to export them. If this is not the case, either a runtime error occurs or no import takes place.
    Exception: You can lengthen or shorten the last field if it is of type CHAR, or add/omit CHAR fields at the end of the structure.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the database table specified are provided by the system class CL_ABAP_EXPIMP_DB.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Addition 3
    ... CLIENT g (before the ID key)
    Effect
    Data is taken from the client g (in client-specific import/export databases only). Client g must be a character-type data object (but not a string).
    Example
    DATA: F1,
          WA_INDX TYPE INDX.
    IMPORT F1 = F1 FROM DATABASE INDX(AR) CLIENT '002' ID 'TEST'
                   TO WA_INDX.
    Addition 4
    ... USING form
    Note
    This statement is for internal use only.
    Incompatible changes or further developments may occur at any time without warning or notice.
    Effect
    Does not read the data from the database. Instead, calls the FORM routine form for each record read from the database without this addition. This routine can take the data key of the data to be retrieved from the database table work area and write the retrieved data to this work area. The name of the routine has the format <name of database table>_<name of form>; it has one parameter which describes the operation (READ, UPDATE or INSERT). The routine must set the field SY-SUBRC in order to show whether the function was successfully performed.
    Addition 5
    ... TO wa (after key or after dbtab(ar))
    Effect
    You need to use this addition if you want to save user data fields in the cluster database and then read from there. The system uses the work area wa instead of a table work area. The target area entered must have the same structure as the table dbtab.
    Example
    DATA WA LIKE INDX.
    DATA F1.
    IMPORT F1 = F1 FROM DATABASE INDX(AR)
                   CLIENT '002' ID 'TEST'
                   TO WA.
    WRITE: / 'AEDAT:', WA-AEDAT,
           / 'USERA:', WA-USERA,
           / 'PGMID:', WA-PGMID.
    Addition 6
    ... MAJOR-ID id1 (instead of the ID key).
    Addition 7
    ... MINOR-ID id2 (with MAJOR-ID id1)
    This addition is not allowed in an ABAP Objects context. See Cannot Use Generic Identification.
    Effect
    Searches for a record the first part of whose ID (length of id1) matches id1 and whose second part - if MINOR-ID id2 is also declared - is greater than or equal to id2.
    Addition 8
    ... ACCEPTING PADDING
    Effect
    This addition allows you to append new fields to the end of structures, sub-structures, and internal tables. The IMPORT statement fills the additional fields with initial values; make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or to map byte-type fields to XSTRING-type fields.
    Addition 9
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 10
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 8 (enlarge structure) or addition 9 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Addition 11
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition prevents the system from triggering a runtime error, if an error occurs when the character set is converted. '#' is used as a replacement character.
    Addition 12
    ... REPLACEMENT CHARACTER c
    Effect
    The replacement character is used if a particular character cannot be converted when the character set is converted. If you do not use this addition, '#' is used as a replacement character.
    This addition can only be used in conjunction with addition 11.
    Addition 13
    ... IN CHAR-TO-HEX MODE
    Effect
    All character-type fields are not converted. To convert a field, you must create a field (or structure) that is identical to the exported field or structure, except that all its character-type components must be replaced with hexadecimal fields.
    You can only use this addition in Unicode programs, to allow you to import camouflaged binary data as single-byte characters. Moreover, you cannot use this addition in conjunction with the additions 8, 9, 10, 11, and 12.
    Addition 14
    ... CODE PAGE INTO f1
    Effect
    The code page of the exported data is stored in the character-type field f1 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition.
    Addition 15
    ... ENDIAN INTO f2
    Effect
    The byte order(LITTLE or BIG) of the exported data is stored in the field f2 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition. The field f2 must have the type ABAP_ENDIAN, which is defined in the type group ABAP. For this reason, the type group ABAP must be included in the ABAP program using a TYPE-POOLS statement.
    Variant 7
    IMPORT obj1 ... objn FROM DATASET dsn(ar) ID key.
    This variant is not allowed in an ABAP Objects context. See Cannot Use Clusters in Files
    Note
    This variant is no longer supported and cannot be used.
    Variant 8
    IMPORT obj1 ... objn FROM LOGFILE ID key.
    Note
    This statement is for internal use only.
    Incompatible changes or further developments may occur at any time without warning or notice.
    Extras:
    1. ... = f (for each field f to be imported) 2. ... TO f (for each field f to be imported)
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Implicit Field Names in Clusters
    Effect
    Imports data objects (fields, field strings or internal tables) from the update data. You must specify the update key assigned by the system (with current request number) as the key.
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. An incorrect ID may have been used.
    The contents of all objects remain unchanged.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Variant 9
    IMPORT DIRECTORY INTO itab FROM DATABASE dbtab(ar) ID key.
    Extras:
    1. ... CLIENT g (after dbtab(ar)) 2. ... TO wa (last addition or after dbtab(ar))
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Table Work Areas.
    Effect
    Imports an object directory stored under the specified ID with EXPORT TO DATABASE into the table itab. The internal table itab may not have the type HASHED TABLE or ANY TABLE.
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The directory was successfully imported.
    SY-SUBRC = 4:
    The directory could not be imported, probably because an incorrect ID was used.
    The internal table itab must have the same structure as the Dictionary structure CDIR (INCLUDE STRUCTURE).
    Addition 1
    ... CLIENT g (before ID key)
    Effect
    Takes data from the client g (only with client-specific import/export databases). Client g must be a character-type data object (but not a string).
    Addition 2
    ... TO wa (last addition or after dbtab(ar))
    Effect
    Uses the work area wa instead of the table work area. When you use this addition, you do not need to declare the table dbtab, named after DATABASE using a TABLES statement. The work area entered must have the same structure as the table dbtab.
    Example
    Directory of a cluster consisting of two fields and an internal table:
    TYPES: BEGIN OF TAB3_LINE,
             CONT(4),
           END OF TAB3_LINE,
           BEGIN OF DIRTAB_LINE.
             INCLUDE STRUCTURE CDIR.
    TYPES  END OF DIRTAB_LINE.
    DATA: INDXKEY LIKE INDX-SRTFD,
          F1(4),
          F2(8)   TYPE P decimals 0,
          TAB3    TYPE STANDARD TABLE OF TAB3_LINE,
          DIRTAB  TYPE STANDARD TABLE OF DIRTAB_LINE,
          INDX_WA TYPE INDX.
    INDXKEY = 'INDXKEY'.
    EXPORT F1 = F1
           F2 = F2
           TAB3 = TAB3
           TO DATABASE INDX(ST) ID INDXKEY " TAB3 has 17 entries
           FROM INDX_WA.
    IMPORT DIRECTORY INTO DIRTAB FROM DATABASE INDX(ST) ID INDXKEY
           TO INDX_WA.
    Then, the table DIRTAB contains the following:
    NAME     OTYPE  FTYPE  TFILL  FLENG
    F1         F      C      0      4
    F2         F      P      0      8
    TAB3       T      C      17     4
    The meaning of the individual fields is as follows:
    NAME:
    Name of stored object
    OTYPE:
    Object type (F: Field, R: Field string / Dictionary struc

Maybe you are looking for

  • VL10a - Dialog vs. Background

    Hi All, I need input regarding this scenario with tcode VL10a. I have a Sales Order and I create delivery for it through VL10a. My SO has 3 line items. When I process the SO in Dialog mode, a delivery is created for all three line items. When I use t

  • SAP HR: Cluster Key greater than 40 characters

    Hi All, I have a requirement where I need to store data in cluster PCL4 under a new RELID. The key length for this cluster data is coming out to be greater than 40 characters whereas the length of key field SRTFD is 40. Is there any way I can store d

  • Computer freezes upon playing itunes with other programs open!!!

    When I run itunes with only a few other programs open, such as internet explorer, my computer completely freezes and will not even allow the ctrl/alt/delete shutdown. This problem also occurs when I have been running itunes by itself for a few songs.

  • Terminated employees, uncertain end dating

    I dont have much knowledge in HRMS. Could you please help me on my issue. In our test instance for one employee, we have two assignments one is 'Active Assignment' and other one is 'Terminate assignment'. Assignment effective start date effective end

  • Detect proxy settings and ports

    Hello forum, I have a project to find out which servers or applications/services are going through ISA 2006 proxy. The servers comprise of WIndows 2003 and 2008 R2 boxes. I tried proxcfg.exe (pre 2008) and netsh winhttp show proxy (post 2008) but I g