ABAP code for Hierarchy Loading from Flat File

Hi,
Can anyone give me some ideas / ABAP code for generating parent-child relationships (NODEIDs) from a flat file and loading them into BW?
Any insight into this development is highly appreciated.
Best regards
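Hi,
here is a minimal sketch of the NODEID generation as a starting point - not a finished load program. It assumes a server-side CSV with one "parent;child" pair per line; the file path, the semicolon separator, and the field lengths are made-up assumptions, and the resulting table (NODEID / NODENAME / PARENTID) still has to be mapped to whatever hierarchy structure your DataSource expects:

REPORT zhier_nodeids.

TYPES: BEGIN OF ty_node,
         nodeid       TYPE i,        " sequential node number
         nodename(32) TYPE c,        " node value read from the file
         parentid     TYPE i,        " NODEID of the parent, 0 = root
       END OF ty_node.

DATA: gt_nodes  TYPE TABLE OF ty_node,
      gs_node   TYPE ty_node,
      gv_nextid TYPE i.

FIELD-SYMBOLS: <ls_node> TYPE ty_node.

START-OF-SELECTION.
  DATA: lv_line       TYPE string,
        lv_parent(32) TYPE c,
        lv_child(32)  TYPE c,
        lv_pid        TYPE i,
        lv_cid        TYPE i.

  OPEN DATASET '/tmp/hier.csv' FOR INPUT IN TEXT MODE ENCODING DEFAULT.
  WHILE sy-subrc = 0.
    READ DATASET '/tmp/hier.csv' INTO lv_line.
    IF sy-subrc <> 0. EXIT. ENDIF.
    SPLIT lv_line AT ';' INTO lv_parent lv_child.
    IF lv_parent IS INITIAL.
      lv_pid = 0.                              " no parent -> root node
    ELSE.
      PERFORM get_node_id USING lv_parent CHANGING lv_pid.
    ENDIF.
    PERFORM get_node_id USING lv_child CHANGING lv_cid.
*   wire the child to its parent
    READ TABLE gt_nodes ASSIGNING <ls_node> WITH KEY nodeid = lv_cid.
    <ls_node>-parentid = lv_pid.
  ENDWHILE.
  CLOSE DATASET '/tmp/hier.csv'.

* Assign a NODEID on first sight of a node name, reuse it afterwards.
FORM get_node_id USING p_name TYPE clike CHANGING p_id TYPE i.
  READ TABLE gt_nodes INTO gs_node WITH KEY nodename = p_name.
  IF sy-subrc <> 0.
    gv_nextid = gv_nextid + 1.
    CLEAR gs_node.
    gs_node-nodeid   = gv_nextid.
    gs_node-nodename = p_name.
    APPEND gs_node TO gt_nodes.
  ENDIF.
  p_id = gs_node-nodeid.
ENDFORM.

Afterwards gt_nodes holds one row per node with its NODEID and a PARENTID pointing at the parent's NODEID, which is the parent-child shape a hierarchy flat file needs.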

Hi,
Also have a look at this how-to guide for information about the file structure:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/0403a990-0201-0010-38b3-e1fc442848cb
/manfred

Similar Messages

  • Data loading from flat file to cube using bw3.5

    Hi Experts,
                   Kindly give me the detailed steps, with screens, for loading data from a flat file to a cube using BW 3.5. Please.

    Hi,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                            c.      Specify whether you want to generate the PSA for the DataSource in character format. If the PSA is not typed, it is not generated in a typed structure; it is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files, you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which, if required, marks the separator as a component of the value. After specifying these characters, you have to use them in the file.
ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
You can also create a routine that determines the name of your file (see the sketch at the end of this reply). If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
If the data separator character is part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
You chose the ; (semicolon) character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45".
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
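Regarding the file-name routine mentioned in step 4e: the system generates the surrounding FORM when you choose the routine option next to the File Name field, so only the body is yours to write. A minimal sketch of such a body, assuming a date-stamped file on the application server; the path, the naming pattern, and the generated parameter names (p_filename, p_subrc) are assumptions and may differ in your release:

* body of a generated file-name routine - parameter names are assumptions
  DATA: lv_name TYPE string.
  CONCATENATE '/usr/sap/trans/data/costs_' sy-datum '.csv' INTO lv_name.
  p_filename = lv_name.   " file to be loaded for this run
  p_subrc    = 0.         " 0 tells the framework the name is valid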

  • Cube creation & Data loading from Flat file

    Hi All,
    I am new to BI 7. Trying to create a cube and load data from a flat file.
    Successfully created the infosource and Cube (used infosource as a template for cube)
    But I got stuck at that point.
    I need help on how to create transfer rules/update rules and then load data into it.
    Thanks,
    Praveen.

    Hi
    Right-click on the InfoSource -> Additional Functions -> Create Transfer Rules.
    In the window, insert the fields you want to load from the flat file -> activate it.
    Now right-click on the cube -> Additional Functions -> Create Update Rules -> activate it.
    Click on the small arrow on the left, and when you reach the last node (DS),
    right-click on it -> Create InfoPackage -> External Data tab -> give your flat file path and select CSV format -> Schedule tab -> click on Start.
    hope it helps..
    cheers.

  • Tracking history while loading from flat file

    Dear Experts
    I have a scenario where I have to load data from a flat file at regular intervals and also want to track the changes made to the data.
    The data looks like the following; it keeps changing, and the key is P1,A1,C1.
    DAY 1---P1,A1,C1,100,120,100
    DAY 2---P1,A1,C1,125,123,190
    DAY 3---P1,A1,C1,134,111,135
    DAY 4---P1,A1,C1,888,234,129
    I am planning to load the data into an ODS and then into an InfoCube for reporting. What will be the result, and how can I track history - for example, if I want to see both the Day 1 data and the current data for P1,A1,C1?
    Just to mention: as I am loading from a flat file, I am not mapping RECORDMODE in the ODS from the flat file.
    Thanks and regards
    Neel

    Hi
    You don't mention your BI release level, so I will assume you are on the current release, SAP NetWeaver BI 2004s.
    Consider loading to a write-optimized DataStore object to store the data. That way, you automatically have a unique technical key for each record, and it is kept for historical purposes. In addition, load to a standard DataStore object, which will track the changes in its change log. Then load the cube from the change log (this avoids your summarization concern), as the changes will be updates (after-images) in the standard DataStore object.
    Thanks for any points you choose to assign
    Best Regards -
    Ron Silberstein
    SAP

  • CUNIT error in data loading from flat file after r/3 extraction

    Hi all
    After the R/3 Business Content extraction, when I load data from a flat file to an InfoCube, I am getting a conversion exit CUNIT error. What might be the reason? The data in the flat file's 0UNIT column is accurate, and the mapping rules are also correct, but I am still getting the CUNIT error.

    Check your unit: if you are loading amounts or quantities, check what mapping you have and what values you are loading from the flat file.
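    For reference, the error comes from the conversion exit that every 0UNIT value runs through during the load. A small test sketch to reproduce the check ('KG' is just sample input); any value in the file that makes this call fail, for example a unit not maintained in T006, fails the load with the same CUNIT error:

    * reproduce the CUNIT check the load performs on each 0UNIT value
    DATA: lv_unit TYPE msehi.           " internal unit of measure
    CALL FUNCTION 'CONVERSION_EXIT_CUNIT_INPUT'
      EXPORTING
        input          = 'KG'           " sample external value from the file
      IMPORTING
        output         = lv_unit
      EXCEPTIONS
        unit_not_found = 1
        OTHERS         = 2.
    IF sy-subrc <> 0.
      WRITE: / 'Unit not found - this is the situation that raises CUNIT.'.
    ENDIF.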
    BK

  • Vendor master data load from flat file

    Hi Experts,
    I am trying to load data from a flat file, and I am confused about which InfoObject I should use. My requirement is for InfoCube 0BBP_C01.
    As I found, only two InfoObjects are available for this InfoCube: 0BBP_VENDOR and 0VENDOR. But my flat file has the following fields: Vendor Code, Vendor Name, Ultimate Parent, City, Country, Minority Status, Vendor Payment Terms - and they are not all present together in either InfoObject.
    Please suggest which InfoObject I should use, or should I create a Z InfoObject for my requirement?
    Thanks in advance.
    Regards,
    Niranjan Chechani

    Hi Kiran,
    I am loading data through a flat file. I do not get your point about how to get attributes present in the R/3 system, which I am not using in this case.
    Hi rvc,
    I have created two characteristic Z InfoObjects, i.e. ZU_PARENT for Ultimate Parent and ZVEN_PAY for Vendor Payment Terms.
    When I try to add these two as navigational attributes, I am getting the error "Characteristic 0VENDOR: The attributes SID table(s) could not be filled" (message no. R7586).
    Please suggest: where am I going wrong?
    Thanks and Regards,
    Niranjan Chechani

  • Help Required regding: Validation on Data Loading from Flat File

    Hi Experts,
    I need your help with the following issue.
    I need to validate the transactional data being loaded into the GL cube from a flat file:
    1) The transactional data should be loaded into the cube only if a master data record exists for the 0GL_ACCOUNT InfoObject.
    2) If the master data record does not exist, the record must be skipped from the load, and after the load the system should issue a message saying how many records were skipped (if there are any skipped records).
    I would really appreciate your help and suggestions on solving this issue.
    Regds
    Hari

    Hi, write a start routine in the transfer rules like this:
      DATA: l_s_datapak_line TYPE transfer_structure,
            l_s_errorlog     TYPE rssm_s_errorlog_int,
            l_s_glaccount    TYPE /bi0/pglaccount,
            new_datapak      TYPE tab_transtru.

      REFRESH new_datapak.
      LOOP AT datapak INTO l_s_datapak_line.
    *   keep a record only if active 0GL_ACCOUNT master data exists
        SELECT SINGLE * FROM /bi0/pglaccount INTO l_s_glaccount
          WHERE chrt_accts = l_s_datapak_line-<field for CHRT_ACCTS>  "your transfer structure field
            AND gl_account = l_s_datapak_line-<field for GL_ACCOUNT>  "your transfer structure field
            AND objvers    = 'A'.
        IF sy-subrc = 0.
          APPEND l_s_datapak_line TO new_datapak.
        ENDIF.
      ENDLOOP.
      datapak = new_datapak.

    * abort <> 0 means: skip the whole data package
      IF datapak[] IS INITIAL.
        abort = 4.
      ELSE.
        abort = 0.
      ENDIF.
    I have already made some modifications, but you can slightly change it to suit your needs.
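    To also cover requirement (2) - reporting how many records were skipped - you can count before overwriting datapak and raise a warning in the monitor. A hedged sketch to place just before the datapak = new_datapak line: the name of the error-log table exposed by the generated start-routine interface (often g_t_errorlog in 3.x systems) and the message fields are assumptions, so check the FORM signature in your system:

      DATA: l_before  TYPE i,
            l_skipped TYPE i.
      DESCRIBE TABLE datapak     LINES l_before.
      DESCRIBE TABLE new_datapak LINES l_skipped.
      l_skipped = l_before - l_skipped.
      IF l_skipped > 0.
        CLEAR l_s_errorlog.                  " work area declared above
        l_s_errorlog-msgty = 'W'.            " warning only - the load continues
        l_s_errorlog-msgv1 = l_skipped.      " number of skipped records
    *   APPEND l_s_errorlog TO g_t_errorlog. " enable once the table name is confirmed
      ENDIF.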
    regards
    Emil

  • Loading from flat file to dso using process chains

    hi,
    I am using BI 7.0 and I am new to process chains.
    Can anyone explain how to load data from a flat file to a DSO using process chains (I have already created all the objects)? Preferably explained with an example.

    You can find a lot of info if you search SDN.
    Metachain
    Steps for a metachain:
    1. Start (in this variant, set your schedule times for the metachain)
    2. Local Process Chain 1 (say it is a master data process chain - get into the start variant of this chain (a subchain, like any other chain) and check the second radio button, "Start using metachain or API")
    3. Local Process Chain 2 (say it is a transaction data process chain - do the same as in step 2)
    Steps for Process Chains in BI 7.0 for a Cube.
    1. Start
    2. Execute Infopackage
    3. Delete Indexes for Cube
    4.Execute DTP
    5. Create Indexes for Cube
    For DSO
    1. Start
    2. Execute Infopackage
    3. Execute DTP
    4. Activate DSO
    For an IO
    1. Start
    2.Execute infopackage
    3.Execute DTP
    4.Attribute Change Run
    Data to Cube thru a DSO
    1. Start
    2. Execute Infopackage ( loads till psa )
    3.Execute DTP ( to load DSO frm PSA )
    4.Activate DSO
    5.Delete Indexes for Cube
    6.Execute DTP ( to load Cube frm DSO )
    7.Create Indexes for Cube
    3.X
    Master loading ( Attr, Text, Hierarchies )
    Steps :
    1.Start
    2. Execute Infopackage ( say if you are loading 2 IO's just have them all parallel )
    3.You might want to load in seq - Attributes - Texts - Hierarchies
    4.And ( Connecting all Infopackages )
    5.Attribute Change Run ( add all relevant IO's ).
    Start
    -> Infopackage1A (Attr) | Infopackage2A (Attr)
    -> Infopackage1B (Txts) | Infopackage2B (Txts)
    -> Infopackage1C (Txts)
    -> AND processor (connect Infopackage1C & Infopackage2B)
    -> Attribute Change Run (add InfoObject 1 & InfoObject 2 to this variant)
    For a Cube
    1. Start
    2. Delete Indexes for Cube
    3. Execute Infopackage
    4.Create Indexes for Cube
    For DSO
    1. Start
    2. Execute Infopackage
    3. Activate DSO
    For an IO
    1.Start
    2.Execute infopackage
    3.Attribute Change Run
    Data to Cube thru a DSO
    1. Start
    2. Execute Infopackage
    3.Activate DSO
    4.Delete Indexes for Cube
    5.Execute Infopackage
    6.Create Indexes for Cube
    Some Links
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c
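    Once a chain like this is built, it is normally triggered by its start variant or the metachain; if you ever need to start it from ABAP instead (for a test, or from another program), there is an API function module for that. A minimal sketch - the chain name ZPC_FLATFILE_DSO is a made-up placeholder for your own chain's technical name:

    * start a process chain from ABAP
    DATA: lv_logid TYPE rspc_logid.
    CALL FUNCTION 'RSPC_API_CHAIN_START'
      EXPORTING
        i_chain = 'ZPC_FLATFILE_DSO'    " placeholder - your chain's name
      IMPORTING
        e_logid = lv_logid
      EXCEPTIONS
        OTHERS  = 1.
    IF sy-subrc = 0.
      WRITE: / 'Chain started, log ID:', lv_logid.
    ENDIF.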

  • Data is not updated to Infocube (BI 7.0) when i load from flat file

    Hi Experts,
                   When I try to load data from a flat file to an InfoCube (BI 7.0) with full or delta, I am getting the following errors:
    1) Error while updating to data target, message no. RSBK241
    2) Processed with errors, message no. RSBK257
    Everything is successful up to the transformations, but updating the data to the cube is not successful.
    What do the message numbers indicate? Can I find a solution anywhere with respect to the message numbers?
    Anybody please help me..
    Regards
    Anil

    Hi Bhaskar,
                       There is no problem with cube activation. But when I try to activate the cube using
    the function module RSDG_CUBE_ACTIVATE, it goes to a short dump. Anyhow, I want to know
    in what scenario we use this function module to activate the cube.
             There are no special characters in the file. I deleted the requests in the PSA and the cube and re-ran the
    InfoPackage & DTP, but I am getting the same error.
    The following errors are found in the updating menu
    of the Details tab (Display Messages option):
    1) Error while updating to data target, message no. RSBK241
    2) Processed with errors, message no. RSBK257
                     Thank you
    Regards
    Anil

  • Data Source creation for Master Data from Flat File to BW

    Hi,
    I need to upload master data from a flat file. Can anybody tell me the steps, right from the beginning of creating the DataSource up to loading into the master data InfoObject?
    Does anybody have a document?
    Regards,
    Chakri.

    Hi,
    This is the procedure.
    1. Create the master data InfoObject, with or without attributes.
    2. Create an InfoSource:
        a) with flexible update
             or
        b) with direct update
    3. Create transfer rules, assign the names of the master data and attributes on the "Transfer Rules" tab, and transfer them to the communication structure.
    4. Create the flat file with the same structure as the communication structure.
    5. If you chose direct update, create an InfoPackage, assign the name of the flat file, and schedule it.
    6. If you chose flexible update, create update rules with the name of the InfoSource assigned, and then schedule it by creating an InfoPackage.
    Hope this helps. If you still have problems, let me know.
    Follow this link also.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/b2/e50138fede083de10000009b38f8cf/frameset.htm
    Assign points if helpful.
    Vinod.

  • Wrong century for the dates from flat file

    I am in the process of uploading data from a flat file (a CSV) into Oracle via an external table.
    All the dates have a year of 20xx. My understanding was that 51-99 would be prefixed with 19 (such as 1951-1999) and 00-50
    would be prefixed with 20 (such as 2004 ...).
    The column below (startdate) is of type timestamp.
    Is my understanding wrong?
    SQL> select startdate , to_date(startdate , 'dd/mm/yyyy' ) newdate from tab;
    NEW_DATE STARTDATE
    09/18/2090 18-SEP-90 12.00.00.000000000 AM
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for 64-bit Windows: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    SQL> show parameter nls
    NAME TYPE VALUE
    nls_calendar string
    nls_comp string BINARY
    nls_currency string
    nls_date_format string
    nls_date_language string
    nls_dual_currency string
    nls_iso_currency string
    nls_language string AMERICAN
    nls_length_semantics string BYTE
    nls_nchar_conv_excp string FALSE
    nls_numeric_characters string
    nls_sort string
    nls_territory string AMERICA
    nls_time_format string
    nls_time_tz_format string
    nls_timestamp_format string
    nls_timestamp_tz_format string

    Offense? None taken...
    I literally typed the SQL (as I did not copy & paste the SQL / result sets). I did use to_char in my original SQL. I don't want to expose the real table; that's why I was giving a made-up test case. If you compare the SQL and the corresponding result sets carefully, you would have noticed.
    Here is the SQL used (of course, I have changed the table name):
    select to_char(startdate , 'mm/dd/yyyy') sdate, startdate from t
    SDATE STARTDATE
    09/18/2090 18-SEP-90 12.00.00.000000 AM
    The flat file had the value 091890 as the data. That explains the century: the 50-year windowing you describe applies to the RR format element, not to YY. With YY, Oracle assumes the current century, so 90 became 2090; loading the column with a mask such as 'mm/dd/rr' would have produced 1990.

  • Data loading from flat file into bw

    Hi Experts,
    When we load data from a flat file into BW, should the sequence of fields in the flat file match the transfer structure or the communication structure?
    With thanks,
    Arijit

    Arijit,
    The sequence of fields in a flat file must be the same as the one in the transfer structure.
    The communication and transfer structures are the same right after the first creation of the DataSource from the flat file source system. After that, someone could change the sequence in the communication structure -- if the mapping of fields in the transfer rules remains unchanged, the load will still be successful.
    If the sequence in the transfer structure is changed, you'll get either an error or a wrong load (values meant for some InfoObjects will go to others).
    Best regards,
    Eugene

  • Regarding master data ,transactional data loading from flat file

    Hi friends,
    Please tell me how to load master data and transactional data from flat file ....
    Thanks in advance ,
    Regards,
    ramnaresh.

    Hi,
    Please use the 'search forum' functionality and search the BI Forum with say 'flat file loading'.  You would get plenty of links of previous threads.
    BR/
    Mathew.

  • Settings for loading from FLat File in BI7

    Hi guys
    I am trying to load data from a CSV file that I created with MS Excel. Now, what settings do I need in BI 7 to load the data? I need info like "Separator" and "Replacement for Blank".
    I have already tried ; and , as the separator, but the preview is all messed up.
    thanks

    Hi Adnan,
    First, create a source system for flat file loading.
    Do this in RSA1 - Source Systems. Via the context menu, create it in the node "File".
    Then go to the RSA1 - DataSources view. Choose the correct source system.
    Create a DataSource and assign fields.
    Use your DataSource to load data into an InfoObject or DataStore, using a transformation between the DataSource and the DataStore.
    Create and schedule an InfoPackage to load to the PSA.
    Create and schedule a DTP to load into the target.
    Settings for flat file loading are in SPRO:
    - Thousand separator
    - Decimal point separator
    - Field separator
    - Field delimiter
    In the InfoPackage, on the External Data tab:
    - the file path and where the file exists (whether on the application server or the client workstation)
    - file type
    - data separator
    - escape sign
    - thousand separator
    - how many header rows are to be ignored
    Hope this helps.
    for more info
    Pls refer to the link below
    [http://help.sap.com/saphelp_nw04s/helpdata/en/43/03450525ee517be10000000a1553f6/frameset.htm ]
    [http://help.sap.com/saphelp_nw04s/helpdata/en/fc/1251421705be30e10000000a155106/content.htm]
    [http://help.sap.com/saphelp_nw04/helpdata/en/8e/dbe92341c84242be2c7d3917f1c197/content.htm]
    [https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/g-i/how%20to%20load%20a%20flat%20file%20into%20bw-bps%20using%20sapgui.pdf]
    Regards,
    NR
    Assign points if useful...

  • Scheduling Problem for uploading Data from Flat file to SAP

    Hi guys,
    I am facing a weird problem in uploading some leave records into a Z table. The code works fine if we run it through SE38 after selecting the file from a shared location on the production server, which has all the access rights.
    This folder lies in the \usr folder of SAP Production.
    I have kept all the flat files in the shared path
    \\Tis-mum-iz-s1\migration\SAP-INT\leave\ ...
    To give you the exact directory structure:
    Tis-mum-iz-s1 is the server name
    usr is the SAP system folder used for uploads and downloads
    usr
      -> Migration
           -> SAP-INT
                -> leave -> (flat files)
    The Migration folder is shared with all rights.
    Obviously, we cannot give the shared drive as the variant in the scheduler.
    So I use the system path, i.e. \usr\sap\tmp\migration\sap-int\leave\, as the variant.
    All my other download programs work fine with this path as a variant,
    but this particular upload program does not work with it.
    Here is my code:
    *----------------------------------------------------------------------*
    * TATA INTERACTIVE SYSTEMS (A Division of TATA INDUSTRIES LIMITED)
    * REPORT      : ZMIGRATE_ZLEAVE
    * DESCRIPTION : To upload the leave data (ZLEAVE)
    * CREATED BY  : Abhishek Bachhawat
    * CREATED ON  : 01.09.2005
    * CONSULTANT  : ANAND
    *----------------------------------------------------------------------*
    REPORT zmigrate_zleave.

    TABLES: zleave.

    DATA: BEGIN OF wtab,
            mandt(3),
            zlvid(8),
            pernr(8),
            zstdt(8),
            zendt(8),
            zdays(4),
            aedat(8),
            erdat(8),
          END OF wtab,
          itab LIKE wtab OCCURS 0 WITH HEADER LINE.

    DATA: temp LIKE zleave OCCURS 0 WITH HEADER LINE.

    SELECTION-SCREEN BEGIN OF BLOCK file WITH FRAME TITLE text-005.
    PARAMETERS: file LIKE rlgrap-filename OBLIGATORY.
    SELECTION-SCREEN END OF BLOCK file.

    AT SELECTION-SCREEN ON VALUE-REQUEST FOR file.
    * WS_FILENAME_GET is a frontend (GUI) function - dialog mode only
      CALL FUNCTION 'WS_FILENAME_GET'
        IMPORTING
          filename = file
        EXCEPTIONS
          OTHERS   = 1.
      IF sy-subrc <> 0.
    *   no file chosen - keep whatever is on the screen
      ENDIF.

    START-OF-SELECTION.
    * Append the dated file name to the path entered on the screen.
    * (In the original source this CONCATENATE sat inside the
    * SELECTION-SCREEN block; stray executable statements are assigned
    * to START-OF-SELECTION anyway, so it is made explicit here.)
      CONCATENATE file sy-datum '_Leave.txt' INTO file.

      IF file NE space.
    *   WS_UPLOAD also needs a SAPgui - this is why the program fails
    *   when it is scheduled as a background job (see the reply below)
        CALL FUNCTION 'WS_UPLOAD'
          EXPORTING
            filename = file
            filetype = 'DAT'
          TABLES
            data_tab = itab.
      ELSE.
        MESSAGE e000(zps) WITH 'Specify a file'.
      ENDIF.

      SORT itab BY zlvid.

      LOOP AT itab.
        REFRESH temp.
        CLEAR temp.
        temp-mandt = sy-mandt.
        temp-zlvid = itab-zlvid.
        temp-pernr = itab-pernr.
        temp-zstdt = itab-zstdt.
        temp-zendt = itab-zendt.
        temp-zdays = itab-zdays.
        temp-aedat = itab-aedat.
        temp-erdat = itab-erdat.  " duplicate ERDAT = SY-DATUM removed: it was overwritten here anyway
        APPEND temp.
    *   update the record if it already exists, otherwise insert it
        SELECT SINGLE *
               FROM   zleave
               WHERE  zlvid = temp-zlvid
               AND    pernr = temp-pernr.
        IF sy-subrc = 0.
          UPDATE zleave SET zstdt = temp-zstdt
                            zendt = temp-zendt
                            zdays = temp-zdays
                            aedat = temp-aedat
                            erdat = temp-erdat
                 WHERE zlvid = temp-zlvid
                 AND   pernr = temp-pernr.
        ELSE.
          INSERT zleave FROM TABLE temp.
          COMMIT WORK.
        ENDIF.
      ENDLOOP.

    Hi,
    WS_UPLOAD and WS_FILENAME_GET are frontend function modules - they need a SAPgui connection and therefore fail when the program runs as a background job. Read the file from the application server with OPEN DATASET instead:

    DATA: lv_line TYPE string.

    OPEN DATASET file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    CHECK sy-subrc = 0.
    WHILE sy-subrc = 0.
      READ DATASET file INTO lv_line.
      IF sy-subrc = 0.
    *   the file is tab-delimited (it was written with FILETYPE 'DAT')
        SPLIT lv_line AT cl_abap_char_utilities=>horizontal_tab
              INTO wtab-mandt wtab-zlvid wtab-pernr wtab-zstdt
                   wtab-zendt wtab-zdays wtab-aedat wtab-erdat.
        APPEND wtab TO itab.
      ELSE.
        EXIT.
      ENDIF.
    ENDWHILE.
    CLOSE DATASET file.
    regards
    Siggi
    PS: Check also the F1 help for the OPEN DATASET, READ DATASET, and CLOSE DATASET statements!
