Parse tab delimited file after upload

I could use some advice. I have a requirement to parse an uploaded, tab-delimited file and insert its contents using APEX. Directly off the file system I would do this with an external table, but here I am uploading the file into a CLOB column. Does PL/SQL have any native functions to parse delimited files?
The user will not have access to the data-load part of APEX, although I have seen numerous requests on this forum for that functionality to be added to the user GUI. Thoughts?

j,
I wrote this a while ago...
http://www.danielmcghan.us/2009/02/easy-csv-uploads-yes-we-can.html
I've since improved the code a little bit. If you like the solution, I could do a new post with the latest code.
Regards,
Dan
Blog: http://DanielMcGhan.us/
Work: http://SkillBuilders.com/

Similar Messages

  • Upload tab-delimited file from the application server to an internal table

    Hello SAPients.
    I'm using OPEN DATASET..., READ DATASET..., CLOSE DATASET to upload a file from the application server (SunOS). I'm working with SAP 4.6C. I'm trying to upload a tab-delimited file to an internal table, but when I try to load it the fields are not correctly separated; in fact, they are all misplaced and the table shows '#' where supposedly there was a tab.
    I tried to SPLIT the line using as separator a variable with reference to CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB but for some reason that class doesn't exist in my system.
    Do you know what I'm doing wrong? Or do you know a better method to upload a tab-delimited file into an internal table?
    Thank you in advance for your help.

    Try:
    REPORT ztest MESSAGE-ID 00.
    PARAMETER: p_file LIKE rlgrap-filename   OBLIGATORY.
    DATA: BEGIN OF data_tab OCCURS 0,
          data(4096),
          END   OF data_tab.
    DATA: BEGIN OF vendor_file_x OCCURS 0.
    * LFA1 Data
    DATA: mandt  LIKE bgr00-mandt,
          lifnr  LIKE blf00-lifnr,
          anred  LIKE blfa1-anred,
          bahns  LIKE blfa1-bahns,
          bbbnr  LIKE blfa1-bbbnr,
          bbsnr  LIKE blfa1-bbsnr,
          begru  LIKE blfa1-begru,
          brsch  LIKE blfa1-brsch,
          bubkz  LIKE blfa1-bubkz,
          datlt  LIKE blfa1-datlt,
          dtams  LIKE blfa1-dtams,
          dtaws  LIKE blfa1-dtaws,
          erdat  LIKE  lfa1-erdat,
          ernam  LIKE  lfa1-ernam,
          esrnr  LIKE blfa1-esrnr,
          konzs  LIKE blfa1-konzs,
          ktokk  LIKE  lfa1-ktokk,
          kunnr  LIKE blfa1-kunnr,
          land1  LIKE blfa1-land1,
          lnrza  LIKE blfa1-lnrza,
          loevm  LIKE blfa1-loevm,
          name1  LIKE blfa1-name1,
          name2  LIKE blfa1-name2,
          name3  LIKE blfa1-name3,
          name4  LIKE blfa1-name4,
          ort01  LIKE blfa1-ort01,
          ort02  LIKE blfa1-ort02,
          pfach  LIKE blfa1-pfach,
          pstl2  LIKE blfa1-pstl2,
          pstlz  LIKE blfa1-pstlz,
          regio  LIKE blfa1-regio,
          sortl  LIKE blfa1-sortl,
          sperr  LIKE blfa1-sperr,
          sperm  LIKE blfa1-sperm,
          spras  LIKE blfa1-spras,
          stcd1  LIKE blfa1-stcd1,
          stcd2  LIKE blfa1-stcd2,
          stkza  LIKE blfa1-stkza,
          stkzu  LIKE blfa1-stkzu,
          stras  LIKE blfa1-stras,
          telbx  LIKE blfa1-telbx,
          telf1  LIKE blfa1-telf1,
          telf2  LIKE blfa1-telf2,
          telfx  LIKE blfa1-telfx,
          teltx  LIKE blfa1-teltx,
          telx1  LIKE blfa1-telx1,
          xcpdk  LIKE  lfa1-xcpdk,
          xzemp  LIKE blfa1-xzemp,
          vbund  LIKE blfa1-vbund,
          fiskn  LIKE blfa1-fiskn,
          stceg  LIKE blfa1-stceg,
          stkzn  LIKE blfa1-stkzn,
          sperq  LIKE blfa1-sperq,
          adrnr  LIKE  lfa1-adrnr,
          mcod1  LIKE  lfa1-mcod1,
          mcod2  LIKE  lfa1-mcod2,
          mcod3  LIKE  lfa1-mcod3,
          gbort  LIKE blfa1-gbort,
          gbdat  LIKE blfa1-gbdat,
          sexkz  LIKE blfa1-sexkz,
          kraus  LIKE blfa1-kraus,
          revdb  LIKE blfa1-revdb,
          qssys  LIKE blfa1-qssys,
          ktock  LIKE blfa1-ktock,
          pfort  LIKE blfa1-pfort,
          werks  LIKE blfa1-werks,
          ltsna  LIKE blfa1-ltsna,
          werkr  LIKE blfa1-werkr,
          plkal  LIKE  lfa1-plkal,
          duefl  LIKE  lfa1-duefl,
          txjcd  LIKE blfa1-txjcd,
          sperz  LIKE  lfa1-sperz,
          scacd  LIKE blfa1-scacd,
          sfrgr  LIKE blfa1-sfrgr,
          lzone  LIKE blfa1-lzone,
          xlfza  LIKE  lfa1-xlfza,
          dlgrp  LIKE blfa1-dlgrp,
          fityp  LIKE blfa1-fityp,
          stcdt  LIKE blfa1-stcdt,
          regss  LIKE blfa1-regss,
          actss  LIKE blfa1-actss,
          stcd3  LIKE blfa1-stcd3,
          stcd4  LIKE blfa1-stcd4,
          ipisp  LIKE blfa1-ipisp,
          taxbs  LIKE blfa1-taxbs,
          profs  LIKE blfa1-profs,
          stgdl  LIKE blfa1-stgdl,
          emnfr  LIKE blfa1-emnfr,
          lfurl  LIKE blfa1-lfurl,
          j_1kfrepre  LIKE blfa1-j_1kfrepre,
          j_1kftbus   LIKE blfa1-j_1kftbus,
          j_1kftind   LIKE blfa1-j_1kftind,
          confs  LIKE  lfa1-confs,
          updat  LIKE  lfa1-updat,
          uptim  LIKE  lfa1-uptim,
          nodel  LIKE blfa1-nodel.
    DATA: END   OF vendor_file_x.
    FIELD-SYMBOLS:  <field>,
                    <field_1>.
    DATA: delim          TYPE x        VALUE '09'.
    DATA: fld_chk(4096),
          last_char,
          quote_1     TYPE i,
          quote_2     TYPE i,
          fld_lth     TYPE i,
          columns     TYPE i,
          field_end   TYPE i,
          outp_rec    TYPE i,
          extras(3)   TYPE c        VALUE '.,"',
          mixed_no(14) TYPE c        VALUE '1234567890-.,"'.
    OPEN DATASET p_file FOR INPUT.
    DO.
      READ DATASET p_file INTO data_tab-data.
      IF sy-subrc = 0.
        APPEND data_tab.
      ELSE.
        EXIT.
      ENDIF.
    ENDDO.
    * count columns in output structure
    DO.
      ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      columns = sy-index.
    ENDDO.
    * Assign elements of input file to internal table
    CLEAR vendor_file_x.
    IF columns > 0.
      LOOP AT data_tab.
        DO columns TIMES.
          ASSIGN space TO <field>.
          ASSIGN space TO <field_1>.
          ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
          SEARCH data_tab-data FOR delim.
          IF sy-fdpos > 0.
            field_end = sy-fdpos + 1.
            ASSIGN data_tab-data(sy-fdpos) TO <field_1>.
    * Check that numeric fields don't contain any embedded " or ,
            IF <field_1> CO mixed_no AND
               <field_1> CA extras.
              TRANSLATE <field_1> USING '" , '.
              CONDENSE <field_1> NO-GAPS.
            ENDIF.
    * If first and last characters are '"', remove both.
            fld_chk = <field_1>.
            IF NOT fld_chk IS INITIAL.
              fld_lth = strlen( fld_chk ) - 1.
              MOVE fld_chk+fld_lth(1) TO last_char.
              IF fld_chk(1) = '"' AND
                 last_char = '"'.
                MOVE space TO fld_chk+fld_lth(1).
                SHIFT fld_chk.
                MOVE fld_chk TO <field_1>.
              ENDIF.       " for if fld_chk(1)=" & last_char="
            ENDIF.         " for if not fld_chk is initial
    * Replace "" with "
            DO.
              IF fld_chk CS '""'.
                quote_1 = sy-fdpos.
                quote_2 = sy-fdpos + 1.
                MOVE fld_chk+quote_2 TO fld_chk+quote_1.
              ELSE.
                MOVE fld_chk TO <field_1>.
                EXIT.
              ENDIF.
            ENDDO.
            <field> = <field_1>.
          ELSE.
            field_end = 1.
          ENDIF.
          SHIFT data_tab-data LEFT BY field_end PLACES.
        ENDDO.
        APPEND vendor_file_x.
        CLEAR vendor_file_x.
      ENDLOOP.
    ENDIF.
    CLEAR   data_tab.
    REFRESH data_tab.
    FREE    data_tab.
    Rob

  • UPLOADING tab delimited file onto FTP server

    Hello all
    Can I upload a tab-delimited file onto the FTP server? If yes, then how?
    Points guaranteed if answered!

    Hi,
    Yes, you can do this; have a look at the standard program 'RSEPSFTP'.
    REPORT ZFTPSAP LINE-SIZE 132.
    DATA: BEGIN OF MTAB_DATA OCCURS 0,
    LINE(132) TYPE C,
    END OF MTAB_DATA.
    DATA: MC_PASSWORD(20) TYPE C,
    MI_KEY TYPE I VALUE 26101957,
    MI_PWD_LEN TYPE I,
    MI_HANDLE TYPE I.
    START-OF-SELECTION.
    *-- Your SAP-UNIX FTP password (case sensitive)
    MC_PASSWORD = 'password'.
    DESCRIBE FIELD MC_PASSWORD LENGTH MI_PWD_LEN.
    *-- FTP_CONNECT requires an encrypted password to work
    CALL 'AB_RFC_X_SCRAMBLE_STRING'
         ID 'SOURCE' FIELD MC_PASSWORD ID 'KEY' FIELD MI_KEY
         ID 'SCR' FIELD 'X' ID 'DESTINATION' FIELD MC_PASSWORD
         ID 'DSTLEN' FIELD MI_PWD_LEN.
    CALL FUNCTION 'FTP_CONNECT'
         EXPORTING
    *-- Your SAP-UNIX FTP user name (case sensitive)
           USER            = 'userid'
           PASSWORD        = MC_PASSWORD
    *-- Your SAP-UNIX server host name (case sensitive)
           HOST            = 'unix-host'
           RFC_DESTINATION = 'SAPFTP'
         IMPORTING
           HANDLE          = MI_HANDLE
         EXCEPTIONS
           NOT_CONNECTED   = 1
           OTHERS          = 2.
    CHECK SY-SUBRC = 0.
    CALL FUNCTION 'FTP_COMMAND'
         EXPORTING
           HANDLE = MI_HANDLE
           COMMAND = 'dir'
         TABLES
           DATA = MTAB_DATA
         EXCEPTIONS
           TCPIP_ERROR = 1
           COMMAND_ERROR = 2
           DATA_ERROR = 3
           OTHERS = 4.
    IF SY-SUBRC = 0.
      LOOP AT MTAB_DATA.
        WRITE: / MTAB_DATA.
      ENDLOOP.
    ELSE.
    * do some error checking.
      WRITE: / 'Error in FTP Command'.
    ENDIF.
    CALL FUNCTION 'FTP_DISCONNECT'
         EXPORTING
           HANDLE = MI_HANDLE
         EXCEPTIONS
           OTHERS = 1.  
    Regards
    Sudheer

  • Detecting line-breaks within a column of an uploaded tab-delimited file.

    Suppose you upload a tab-delimited file from your laptop and split each row of the file into some structure that you append to an itab.
    Is there a way inside ABAP to detect that a field of the uploaded file has a CR or CRLF in it? And if so, where it is?
    Thanks in advance ...

    You can use any of the following for those characters.
    DATA: head_crnl(2)  TYPE c VALUE cl_abap_char_utilities=>cr_lf,   "cr_lf is two characters
          top_crnl(2)   TYPE c VALUE cl_abap_char_utilities=>cr_lf,
          end_crnl(2)   TYPE c VALUE cl_abap_char_utilities=>cr_lf,
          blank_crnl(2) TYPE c VALUE cl_abap_char_utilities=>cr_lf,
          final_crnl(2) TYPE c VALUE cl_abap_char_utilities=>cr_lf,
          first_pgbr(1) TYPE c VALUE cl_abap_char_utilities=>form_feed.
    Declare the above variables and check if they occur in the file. Hope this helps.
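    For example, a minimal sketch of such a check (lv_field here is just a stand-in for one field split out of the uploaded row, and the WRITE output is only for illustration):
    DATA: lv_field   TYPE string,
          lv_crlf(2) TYPE c VALUE cl_abap_char_utilities=>cr_lf,
          lv_cr(1)   TYPE c.
    lv_cr = lv_crlf(1).                     "first character of CR_LF = carriage return
    IF lv_field CS lv_crlf.
      WRITE: / 'CRLF found at offset', sy-fdpos.
    ELSEIF lv_field CS lv_cr.
      WRITE: / 'Lone CR found at offset', sy-fdpos.
    ENDIF.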

  • Functions to upload UNIX tab-delimited file

    Please tell me which functions to use to upload a UNIX tab-delimited file into a database table.

    HI,
    data : itab type standard table of ZCBU with header line.
    ld_file = p_infile.
    OPEN DATASET ld_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc NE 0.
    ELSE.
      DO.
        CLEAR: wa_string, wa_uploadtxt.
        READ DATASET ld_file INTO wa_string.
        IF sy-subrc NE 0.
          EXIT.
        ELSE.
          SPLIT wa_string AT con_tab INTO wa_uploadtxt-name1
                                          wa_uploadtxt-name2
                                          wa_uploadtxt-age.
          MOVE-CORRESPONDING wa_uploadtxt TO wa_upload.
          APPEND wa_upload TO it_record.
        ENDIF.
      ENDDO.
      CLOSE DATASET ld_file.
    ENDIF.
    loop at it_record.
    itab-field1 = it_record-field1.
    itab-field2 = it_record-field2.
    append itab.
    endloop.
    *-- Now update the table
    modify ZCBU from table itab.

  • Upload a tab delimited file

    Does anybody know what the FM is to upload a tab-delimited file?
    I tried the FM GUI_UPLOAD but it's not working.

    I have used HAS_FIELD_SEPARATOR = 'X', but only the first column is being transferred.
    data : p_file2 type string.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        FILENAME                      = p_file2
        FILETYPE                      = 'ASC'
        HAS_FIELD_SEPARATOR           = 'X'
        HEADER_LENGTH                 = 0
        READ_BY_LINE                  = 'X'
        DAT_MODE                      = ' '
        CODEPAGE                      = ' '
        IGNORE_CERR                   = ABAP_TRUE
        REPLACEMENT                   = '#'
        CHECK_BOM                     = ' '
    *   VIRUS_SCAN_PROFILE            =
        NO_AUTH_CHECK                 = ' '
    * IMPORTING
    *   FILELENGTH                    =
    *   HEADER                        =
      TABLES
        DATA_TAB                      = gi_linepost
      EXCEPTIONS
        FILE_OPEN_ERROR               = 1
        FILE_READ_ERROR               = 2
        NO_BATCH                      = 3
        GUI_REFUSE_FILETRANSFER       = 4
        INVALID_TYPE                  = 5
        NO_AUTHORITY                  = 6
        UNKNOWN_ERROR                 = 7
        BAD_DATA_FORMAT               = 8
        HEADER_NOT_ALLOWED            = 9
        SEPARATOR_NOT_ALLOWED         = 10
        HEADER_TOO_LONG               = 11
        UNKNOWN_DP_ERROR              = 12
        ACCESS_DENIED                 = 13
        DP_OUT_OF_MEMORY              = 14
        DISK_FULL                     = 15
        DP_TIMEOUT                    = 16
        OTHERS                        = 17.
    IF SY-SUBRC <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
              WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.

  • Program to upload data from a tab-delimited file ...

    I have to upload data from a tab-delimited file with following fields into database table(ZCBU) with same fields:
    CBU (parent)
    KUNNR (child)
    ERDAT (effective from)
    MANDT (client)
    SFID (salesforce ID)
    AEDAT (effective to)
    AENAM (assigned by).
    This file can be of type PC (txt) or UNIX.
    Please tell me how to do this for both types of files.

    Hi,
    DATA: bdcdata LIKE bdcdata    OCCURS 0 WITH HEADER LINE.
    DATA: xfile TYPE string.
    DATA: BEGIN OF itab OCCURS 0,
            empno TYPE zmemp-empno,
            name  TYPE zmemp-first_name,
            last  TYPE zmemp-last_name,
            comp  TYPE zmemp-comp,
            place TYPE zmemp-place,
            END OF itab.
    PARAMETER : p_file TYPE rlgrap-filename OBLIGATORY.
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
    * Form to get the file path of the legacy data stored on the presentation server
      PERFORM get_file_path.
    START-OF-SELECTION.
      MOVE p_file TO xfile.
    * to get the data from the excel sheet into an internal table
      PERFORM get_data.
      LOOP AT itab .
        REFRESH bdcdata.
        PERFORM bdc_dynpro      USING 'ZM_EMPLOYEE' '9001'.
        PERFORM bdc_field       USING 'BDC_CURSOR'
                                      'S9001_EMPNO'.
        PERFORM bdc_field       USING 'BDC_OKCODE'
                                      '=CREA'.
        PERFORM bdc_field       USING 'S9001_EMPNO'
                                      itab-empno.
        PERFORM bdc_dynpro      USING 'ZM_EMPLOYEE' '9002'.
        PERFORM bdc_field       USING 'BDC_CURSOR'
                                      'S9002_PLACE'.
        PERFORM bdc_field       USING 'BDC_OKCODE'
                                      '=SAVE'.
        PERFORM bdc_field       USING 'S9002_EMPNO'
                                      itab-empno.
        PERFORM bdc_field       USING 'S9002_FIRST_NAME'
                                      itab-name.
        PERFORM bdc_field       USING 'S9002_LAST_NAME'
                                      itab-last.
        PERFORM bdc_field       USING 'S9002_COMP'
                                      itab-comp.
        PERFORM bdc_field       USING 'S9002_PLACE'
                                      itab-place.
        PERFORM bdc_dynpro      USING 'ZM_EMPLOYEE' '9001'.
        PERFORM bdc_field       USING 'BDC_CURSOR'
                                      'S9001_EMPNO'.
        PERFORM bdc_field       USING 'BDC_OKCODE'
                                      '=BACK'.
        CALL TRANSACTION 'ZMEMP'
        USING bdcdata
        UPDATE 'A'
        MODE   'N'.
      ENDLOOP.
    *      Start new screen                                             *
    FORM bdc_dynpro USING program dynpro.
      CLEAR bdcdata.
      bdcdata-program  = program.
      bdcdata-dynpro   = dynpro.
      bdcdata-dynbegin = 'X'.
      APPEND bdcdata.
    ENDFORM.                    "BDC_DYNPRO
    *      Insert field                                                 *
    FORM bdc_field USING fnam fval.
      CLEAR bdcdata.
      bdcdata-fnam = fnam.
      bdcdata-fval = fval.
      APPEND bdcdata.
    ENDFORM.                    "BDC_FIELD
    *&      Form  get_file_path
    FORM get_file_path .
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
        CHANGING
          file_name = p_file.
    ENDFORM.                    " get_file_path
    *&      Form  get_data
    FORM get_data .
      DATA : lines1 TYPE i.
      MOVE p_file TO xfile.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename            = xfile
          filetype            = 'ASC'
          has_field_separator = 'X'
        TABLES
          data_tab            = itab.
      DESCRIBE TABLE itab LINES lines1.
      WRITE : / lines1, 'Records uploaded'.
    ENDFORM.                    " get_data
    Regards,
    Nihar Swain,

  • Large line to internal tables from  tab delimited file

    Dear All
    I am trying to upload a large file of tab-delimited data into an SAP internal table. I am basically stuck on the fact that there are multiple lines and multiple columns in the tab-delimited file. There are around 300 columns which are separated by tabs.
    For example (* indicates a tab):
    1material*****************1**9888**********5**********34*********3*********346************************-->upto 5000 columns
    1material*****************1**99338************4***********************************6************7************-->upto 5000 columns
    1material*****************1**22888********************5*********7*********************6*****7**************-->upto 5000 columns
    1material*****************1**44844************************5***5*********************************************-->upto 5000 columns
    1material***********34****1**54*******33********33*****33**************************************************-->upto 5000 columns
    1material*****************1**99888*****************************************************************************-->upto 5000 columns
    and so on, up to 500 rows or more.
    I want to read this file into a columnar internal table.
    I have tried several ways. I have the file on the application server; however, the line breaks after 1024 characters and continues on another line.
    Currently I am not able to load it into a single line of the internal table. The structure of the file is dynamic, not static.
    Amol

    Hi Amolsonaikar,
    you may try like this:
    TYPES:
      begin of line,
        t_field type standard table of string with default key,
      end of line,
      t_line type standard table of line with default key.
    DATA:
      lt_line type t_line,
      ls_line type line,
      lv_line type string.
    open dataset 'XYZ' for input in text mode encoding default.
    while sy-subrc = 0.
      read dataset 'XYZ' into lv_line.
      if sy-subrc <> 0.
        exit.
      endif.
      " split each row at the tab character into a table of fields
      split lv_line at cl_abap_char_utilities=>horizontal_tab into table ls_line-t_field.
      append ls_line to lt_line.
    endwhile.
    close dataset 'XYZ'.
    Regards,
    Clemens

  • Read Tab delimited File from Application server

    Hi Experts,
    I am facing problem while reading file from Application server.
    File in Application server is stored as follows, The below file is a tab delimited file.
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    I have downloaded this file from the application server using transaction CG3Y. The downloaded file is a tab-delimited file and I could not see '#' in the file.
    The code is as below.
    CONSTANTS: c_split TYPE abap_char1 VALUE cl_abap_char_utilities=>horizontal_tab.
    Here I am using IGNORING CONVERSION ERRORS in order to avoid a conversion-error short dump.
    OPEN DATASET wa_filename-file FOR INPUT IN TEXT MODE ENCODING DEFAULT IGNORING CONVERSION ERRORS.
          IF sy-subrc = 0.
            WRITE : /,'...Processing file - ', wa_filename-file.   
           DO.
    *     Read the contents of the file
              READ DATASET wa_filename-file INTO wa_file-data.
              IF sy-subrc = 0.
                SPLIT wa_file-data AT c_split INTO wa_adrc_2-kunnr
                                                   wa_adrc_2-title
                                                   wa_adrc_2-name1
                                                   wa_adrc_2-name2
                                                   wa_adrc_2-name3
                                                   wa_adrc_2-name4
                                                   wa_adrc_2-name_co
                                                   wa_adrc_2-city1
                                                   wa_adrc_2-city2
                                                   wa_adrc_2-regiogroup
                                                   wa_adrc_2-post_code1
                                                   wa_adrc_2-post_code2
                                                   wa_adrc_2-po_box
                                                   wa_adrc_2-po_box_loc
                                                   wa_adrc_2-transpzone
                                                   wa_adrc_2-street
                                                   wa_adrc_2-house_num1
                                                   wa_adrc_2-house_num2
                                                   wa_adrc_2-str_suppl1
                                                   wa_adrc_2-str_suppl2
                                                   wa_adrc_2-country
                                                   wa_adrc_2-langu
                                                   wa_adrc_2-region
                                                   wa_adrc_2-sort1
                                                   wa_adrc_2-sort2
                                                   wa_adrc_2-deflt_comm
                                                   wa_adrc_2-tel_number
                                                   wa_adrc_2-tel_extens
                                                   wa_adrc_2-fax_number
                                                   wa_adrc_2-fax_extens
                                                   wa_adrc_2-taxjurcode.
    WA_FILE-DATA has the values below:
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    And this is split at the tab delimiter and moved to the other variables as shown above.
    Please guide me on how to read the contents from the file without the '#'.
    I have tried all possible ways and am unable to get a solution.
    Thanks,
    Shrikanth

    Hi,
    In ECC 6, if all the Unicode patches are applied, then UTF-16 will definitely work.
    Moreover, I would suggest you first replace the '#' with some other character, such as '*' or ',', and then check in the debugger whether any further '#' appears.
    If no '#' appears, then try to split.
    If the '#' still appears after the REPLACE statement, then try to find out what exactly it is (whether it is a horizontal tab, etc.), then replace it again and split.
    Please follow this process until all the '#' are replaced.
    This should work for you.
    Let me know if you face any further issues.
    Regards
    Satish Boguda
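    As a rough sketch of that replace-then-split idea (wa_file-data is the line already read in the code above; the use of cl_abap_char_utilities=>minchar is only an example for the case where the '#' turns out to be a null character, and lv_line / lt_fields are names invented for this sketch, not from the original reply):
    DATA: lv_line   TYPE string,
          lv_empty  TYPE string,                     "stays initial = replace with nothing
          lt_fields TYPE TABLE OF string.
    lv_line = wa_file-data.
    * example: strip the offending character once you know what it really is
    REPLACE ALL OCCURRENCES OF cl_abap_char_utilities=>minchar IN lv_line WITH lv_empty.
    SPLIT lv_line AT cl_abap_char_utilities=>horizontal_tab INTO TABLE lt_fields.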

  • Text Tab delimited file-Strcuture-validation

    Hi Folks,
    1. Is there a way to check programmatically whether the file is text tab-delimited or not?
    2. Can we check whether the data in the text tab-delimited file is in line with the structure of the internal table into which it is going to be uploaded?
    Thanks,
    K.Kiran.

    Hi, try this:
    DO.
    READ DATASET p_ufile INTO in_file.
    IF sy-subrc <> 0.
    EXIT.
    ENDIF.
    APPEND in_file.
    CLEAR in_file.
    ENDDO.
    CLOSE DATASET p_ufile.
    LOOP AT in_file.
    SPLIT in_file AT c_tab INTO
    wa_citm_b-type
    wa_citm_b-vbeln
    wa_citm_b-posnr
    wa_citm_b-uepos
    wa_citm_b-matnr
    lv_menge
    wa_citm_b-arktx
    wa_citm_b-vbegdat
    wa_citm_b-venddat
    wa_citm_b-prctr
    wa_citm_b-zterm
    wa_citm_b-faksp
    wa_citm_b-taxm1
    wa_citm_b-vlaufz
    wa_citm_b-vlauez
    wa_citm_b-vlaufk
    wa_citm_b-vkuegru
    wa_citm_b-bstkd
    wa_citm_b-bstdk
    wa_citm_b-posex
    wa_citm_b-bstkd_e
    wa_citm_b-bstdk_e
    wa_citm_b-period.
    IF NOT wa_citm_b-posnr CA sy-abcde.
    APPEND wa_citm_b TO lt_citm_b.
    ENDIF.
    ENDLOOP.
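    The snippet above only reads and splits the file; for the structure check asked about in the question, a rough sketch might compare the number of tab-separated fields per line with the number of components of the target work area (an illustration only; lo_struct, lt_comp, lt_fields and the counters are names invented for this sketch, not from the original reply):
    TYPE-POOLS abap.                 "needed on older releases
    DATA: lo_struct  TYPE REF TO cl_abap_structdescr,
          lt_comp    TYPE abap_compdescr_tab,
          lt_fields  TYPE TABLE OF string,
          lv_line    TYPE string,
          lv_fldcnt  TYPE i,
          lv_compcnt TYPE i.
    lo_struct ?= cl_abap_typedescr=>describe_by_data( wa_citm_b ).
    lt_comp = lo_struct->components.
    DESCRIBE TABLE lt_comp LINES lv_compcnt.
    LOOP AT in_file.
      lv_line = in_file.
      SPLIT lv_line AT cl_abap_char_utilities=>horizontal_tab INTO TABLE lt_fields.
      DESCRIBE TABLE lt_fields LINES lv_fldcnt.
      IF lv_fldcnt <> lv_compcnt.
        WRITE: / 'Line', sy-tabix, 'does not match the structure of wa_citm_b'.
      ENDIF.
    ENDLOOP.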

  • Tab Delimited File Using Receiver FCC

    Hi Experts,
    I need to generate a tab-delimited file using a receiver FCC. I have achieved this by copying a tab character from Notepad and using it as the name.fieldSeparator parameter.
    The other values for name.fieldSeparator mentioned below didn't produce the expected result:
    name.fieldSeparator='ht'
    name.fieldSeparator='\t'
    name.fieldSeparator=' 0x09'
    So I just need to know whether my approach for achieving this is fine or not.
    Thanking you in advance.
    Aditya.

    Verma,
    name.fieldSeparator=' 0x09'
    I see a space after the opening quote. Try removing it and give it a try:
    '0x09' with no spaces.
    Regards,
    ---Satish

  • Trouble importing contacts from a tab-delimited file into Contacts on 10.9.5

    I am having Trouble importing contacts from a tab-delimited file into Contacts on MacBook Pro, OS 10.9.5, Intel 2.4 GHz Core 2 Duo., 400GB drive, 4GB memory DDR3
    So far I have:
    - Followed closely the help screen in Contacts app
    - Modified my source document meticulously - first in MS Word, then copied and tweaked in Apple's "Text Edit" app
    - The problem arises when I go to access the document for import.  Specifically, the target document, and all others in that file, are "greyed out" and thus can't be "opened" to facilitate the import process
    - Tried changing the extension of the document name to ".txt", ".rtf", and ".rtd". No change or improvement.
    - Searched Apple.com/help and found nothing relevant
    Can anyone offer some advice or tell me what I may be overlooking in this process?
    Any help will be greatly appreciated!
    Thanks,
    <Email Edited By Host>

    Hi Rammohan,
    Thanks for the effort!
    But I don't need to use GUI upload, because my functionality does not require fetching data from the presentation server.
    Moreover, the SPLIT command you advised lists separate fields...f1, f2, f3... and I cannot use it because I have 164 fields. I would have to split into 164 fields and assign the values back to 164 fields in the work area/header line.
    Moreover, I have about 10 such work areas, so the effort would be ten times as much! I want to avoid this! Please help!
    I would be very grateful if you could provide an alternative solution.
    Thanks once again,
    Best Regards,
    Vinod.V

  • Feature request : import of multi-line notes in CSV or tab-delimited file

    This is a feature request to the AddressBook development team:
    Following a question posted here and asked to support (thanks to Barry and Julia from ADC), it turns out that AddressBook is not ready yet to import multi-line notes from a CSV or tab-delimited file.
    I would like to kindly suggest the AddressBook development team to update the import methods for such flat files to parse the text of what is assigned to the Notes field, and convert documented escape sequences to their pre-determined corresponding special character.
    I would think that the following would be pretty easy to implement, and easy for the targeted audience (enthusiast power users and up) to remember:
    "\n " : mark a new line (same as ^p in Word)
    "\t " : replace by a tab
    I would keep the trailing space to avoid conflicts with DOS-type file paths.
    and of course...
    "\\n " and "\\t " for whoever would want to import notes about \n and \t
    For example, in a Notes column, the text:
    "Escape sequence \\n :\n move the text to a new line\n \n while escape sequence \\t :\t move the text to the next horizontal tab"
    would give the following (just consider text between <Note> marks):
    <Note>
    Escape sequence \n:
    move the text to a new line
    while escape sequence \t: move the text to the next horizontal tab
    </Note>
    Thanks for reading...
    Frédéric Denis

    The developers do not necessarily read these USER discussion pages. If you want to make suggestions and know they will be read (which is at least a step towards implementation) use the OS X feedback page.
    AK

  • Attaching a tab delimited file to mail

    Hi,
    I have to attach a tab-delimited file to a mail and send it using function module 'SO_DOCUMENT_SEND_API1'.
    I have filled in the following details, but I am not sure what should be given for 'lt_packing_list-doc_type'. Should it be 'TXT' or some other file format? Someone suggested 'CSV', but I guess for that the data should be separated by commas, and I need a tab-delimited file.
    Please suggest what needs to be done.
    Thank you,
    Taher
      DATA:
            con_tab  TYPE c VALUE cl_abap_char_utilities=>horizontal_tab,
            con_cret TYPE c VALUE cl_abap_char_utilities=>cr_lf.
      LOOP AT gt_output INTO ls_output.
        CONCATENATE ls_output-email
                    ls_output-pos
                    ls_output-txt
                    ls_output-func
                    ls_output-mail
                 INTO gt_attach SEPARATED BY con_tab.
        CONCATENATE con_cret gt_attach  INTO gt_attach.
        CONDENSE gt_attach.
    APPEND  gt_attach.
    ENDLOOP.
    * Fill the document data.
      lv_xdocdata-doc_size = 1.
    * Populate the subject/generic message attributes
      lv_xdocdata-obj_langu = sy-langu.
      lv_xdocdata-obj_name  = 'SAPRPT'.
      lv_xdocdata-obj_descr = 'Report' .
    * Fill the document data and get size of attachment
      CLEAR lv_xdocdata.
      READ TABLE gt_attach INDEX lv_xcnt.
      lv_xdocdata-doc_size =
         ( lv_xcnt - 1 ) * 255 + STRLEN( gt_attach ).
      lv_xdocdata-obj_langu  = sy-langu.
      lv_xdocdata-obj_name   = 'SAPRPT'.
      lv_xdocdata-obj_descr  = ' Report'.
      CLEAR lt_attachment.  REFRESH lt_attachment.
      lt_attachment[] = gt_attach[].
    * Describe the body of the message
      CLEAR lt_packing_list.  REFRESH lt_packing_list.
      lt_packing_list-transf_bin = space.
      lt_packing_list-head_start = 1.
      lt_packing_list-head_num = 0.  lt_packing_list-body_start = 1.
      DESCRIBE TABLE lt_message LINES lt_packing_list-body_num.
      lt_packing_list-doc_type = 'RAW'.
      APPEND lt_packing_list.
    * Create attachment notification
      lt_packing_list-transf_bin = 'X'.
      lt_packing_list-head_start = 1.
      lt_packing_list-head_num   = 1.
      lt_packing_list-body_start = 1.
      DESCRIBE TABLE lt_attachment LINES lt_packing_list-body_num.
      lt_packing_list-doc_type   =  ?????. "TXT or any other format
      lt_packing_list-obj_descr  =  'Report'.
      lt_packing_list-obj_name   =  'Report'.
      lt_packing_list-doc_size   =  lt_packing_list-body_num * 255.
      APPEND lt_packing_list.
    * Add the recipients email address
      LOOP AT gt_receivers INTO ls_receivers.
        CLEAR lt_receivers.
        lt_receivers-receiver = ls_receivers.
        lt_receivers-rec_type = 'U'.
        lt_receivers-com_type = 'INT'.
        APPEND lt_receivers.
      ENDLOOP.

    Hi Taher,
    Which table do you populate your data into (CONTENTS_BIN, CONTENTS_TXT, CONTENTS_HEX)?
    Since both the message and the attachment reside in one table (contents_txt), in it_packing_list the body of the message should start at line 1, while the attachment starts where the body of the message ends. That is:
    * Describe the body of the message
      CLEAR lt_packing_list.  REFRESH lt_packing_list.
      lt_packing_list-transf_bin = space.
      lt_packing_list-head_start = 1.
      lt_packing_list-head_num = 0.  lt_packing_list-body_start = 1.
      DESCRIBE TABLE lt_message LINES lt_packing_list-body_num.
      lt_packing_list-doc_type = 'RAW'.
      APPEND lt_packing_list.
    * Create attachment notification
      lt_packing_list-transf_bin = 'X'.
      lt_packing_list-head_start = 1.
      lt_packing_list-head_num   = 1.
      lt_packing_list-body_start =  "start of attachment: the next line after the body of the message
    The next thing is what I pointed out before: both should be passed as ASCII, so in both cases lt_packing_list-transf_bin = space.
    Please see my code.
    CONSTANTS: con_cret(2) type c VALUE cl_abap_char_utilities=>cr_lf,
               con_tab(2)  TYPE c VALUE cl_abap_char_utilities=>horizontal_tab.
    DATA: doc_attr TYPE sodocchgi1,
          it_packing_list TYPE TABLE OF sopcklsti1 WITH HEADER LINE,
          it_message TYPE TABLE OF solisti1 WITH HEADER LINE,
          it_attachment TYPE TABLE OF solisti1 WITH HEADER LINE,
          it_txt TYPE TABLE OF solisti1 WITH HEADER LINE,
          it_receivers TYPE TABLE OF somlreci1 WITH HEADER LINE.
    "Document attributes
    doc_attr-obj_name = 'EXT_MAIL'.
    doc_attr-obj_descr = 'External test mail'.   "title
    doc_attr-obj_langu = sy-langu.
    doc_attr-sensitivty = 'F'.      "functional message
    "Message in ASCII (txt) format
    APPEND 'This is body message' TO it_message.   "message is from 1 to 2
    APPEND 'and second line for message.' TO it_message.
    APPEND LINES OF it_message TO it_txt.
    "Determine how data are distrtibuted to document and attachment
    "First line in it_packing describes message body
    it_packing_list-transf_bin = space.  "ASCII format
    it_packing_list-body_start = 1.      "message body starts from 1st line
    DESCRIBE TABLE it_message LINES it_packing_list-body_num.  "lines in it_txt table for message body
    it_packing_list-doc_type = 'RAW'.
    APPEND it_packing_list.
    "Attachment in ASCII (txt) format
    CONCATENATE '1first' 'second' 'third' 'fourth' into it_attachment SEPARATED BY con_tab.   "first line
    CONCATENATE con_cret it_attachment INTO it_attachment.
    APPEND it_attachment.
    CONCATENATE '2first' 'second' 'third' 'fourth' into it_attachment SEPARATED BY con_tab. "second line
    CONCATENATE it_attachment con_cret into it_attachment.
    APPEND it_attachment.
    APPEND LINES OF it_attachment TO it_txt.
    "Further lines in it_packing describe attachment
    CLEAR it_packing_list.
    it_packing_list-transf_bin = space.  "ASCII format
    it_packing_list-body_start = 3.
    DESCRIBE TABLE it_attachment LINES it_packing_list-body_num.
    it_packing_list-doc_type = 'txt'.
    it_packing_list-obj_name = 'Test attachment'.
    it_packing_list-obj_descr = 'Title of an attachment'.
    it_packing_list-obj_langu = sy-langu.
    "size od attachment = length of last line + all remaining lines * 255
    READ TABLE it_attachment INDEX it_packing_list-body_num.
    it_packing_list-doc_size = STRLEN( it_attachment ) + 255 * ( it_packing_list-body_num - 1 ).
    APPEND it_packing_list.
    "Receivers
    it_receivers-receiver = '...'.          "some mail address
    it_receivers-rec_type = 'U'.  "internet address
    it_receivers-com_type = 'INT'. "send via internet
    APPEND it_receivers.
    CALL FUNCTION 'SO_NEW_DOCUMENT_ATT_SEND_API1'
      EXPORTING
        document_data                    = doc_attr
        put_in_outbox                    = 'X'
        commit_work                      = 'X'
    * IMPORTING
    *   SENT_TO_ALL                      =
    *   NEW_OBJECT_ID                    =
      TABLES
        packing_list                     = it_packing_list
    *   OBJECT_HEADER                    =
    *   CONTENTS_BIN                     =
       contents_txt                     = it_txt
    *   CONTENTS_HEX                     =
    *   OBJECT_PARA                      =
    *   OBJECT_PARB                      =
        receivers                        = it_receivers.
    * EXCEPTIONS
    *   TOO_MANY_RECEIVERS               = 1
    *   DOCUMENT_NOT_SENT                = 2
    *   DOCUMENT_TYPE_NOT_EXIST          = 3
    *   OPERATION_NO_AUTHORIZATION       = 4
    *   PARAMETER_ERROR                  = 5
    *   X_ERROR                          = 6
    *   ENQUEUE_ERROR                    = 7
    *   OTHERS                           = 8
    IF sy-subrc = 0.
      WAIT UP TO 2 SECONDS.
      SUBMIT rsconn01 WITH mode = 'INT'
    *                    WITH ouput = 'X'
                      AND RETURN.
    ENDIF.
    I am using the SO01 t-code to see the document and attachment.
    I am sure that once you reconcile the differences between the two pieces of code you will get the desired result.
    Regards
    Marcin

  • To import addresses, need format of tab delimited file

    I am trying to import addresses from my G3 PB Mozilla app. I exported the addresses on the G3 to both LDIF and tab-delimited files and then attached them to an email message which I sent to myself. I received the email on the Mac mini, but the 2 files showed up inline, not as attachments. I copied the LDIF portion to a text file; ditto for the tab-delimited one. I tried importing into Address Book, but got a message that the format was not valid.
    Can't I just edit the tab-delimited file in TextEdit, save as RTF, and get Address Book to recognize the data?
    Thanks,
    Owen

    Hi, did you get a solution for this? Please let me know, as I'm looking for the same solution. The bank requirement is to generate a tab-delimited file, but RFFOGB_T with format GB_BACS issues the output as below...
    Required by Bank:
    692532 73855963 RRS P R BACKLEY         169.91 GSLV
    294518 99855581 CETS PRITECTIIN         799.72 GSLV
    The output I get from SAP/DMEE:
    ........1........2........3........4........5........6....
    VOL1000004                           ....953312
                  1 <CR/LF>
    HDR1A953312S  195331200000400010001       10040 100420000000
                    <CR/LF>
    HDR2F0200000100                                   00
                    <CR/LF>
    UHL1 10041999999    000000001 DAILY  000
                    <CR/LF>
    6010392865540009960062063474662    00000115000ABC UK Ltd.       0
    1465                                <CR/LF>
    6006206347466201760062063474662    00000115000SAPBACS0000003306 C
    ONTRA            ABC UK LTD.        <CR/LF>
    EOF1A953312S  195331200000400010001       10040 100420000000
                    <CR/LF>
    EOF2F0200000100                                   00
                    <CR/LF>
    UTL10000000115000000000011500000000010000001
                    <CR/LF>
    END
                    <CR/LF>
    <END>
    Rgds,
    Stan
