DMEE tab-delimited file required

Dear Experts,
With transaction DMEE you define the file formats to be used as a payment medium.
On the 'Format attributes' screen you indicate whether fields have a fixed length or are delimited with a character.
What do you do when you want a tab-delimited file?
Regards, Ben

Hi, did you get a solution for this? Please let me know, as I'm looking for the same thing. The bank requirement is to generate a tab-delimited file, but RFFOGB_T with format GB_BACS produces the output below...
Required by Bank:
692532 73855963 RRS P R BACKLEY         169.91 GSLV
294518 99855581 CETS PRITECTIIN         799.72 GSLV
The output I get from SAP/DMEE:
........1........2........3........4........5........6....
VOL1000004                           ....953312
              1 <CR/LF>
HDR1A953312S  195331200000400010001       10040 100420000000
                <CR/LF>
HDR2F0200000100                                   00
                <CR/LF>
UHL1 10041999999    000000001 DAILY  000
                <CR/LF>
6010392865540009960062063474662    00000115000ABC UK Ltd.       0
1465                                <CR/LF>
6006206347466201760062063474662    00000115000SAPBACS0000003306 C
ONTRA            ABC UK LTD.        <CR/LF>
EOF1A953312S  195331200000400010001       10040 100420000000
                <CR/LF>
EOF2F0200000100                                   00
                <CR/LF>
UTL10000000115000000000011500000000010000001
                <CR/LF>
END
                <CR/LF>
<END>
Rgds,
Stan

Similar Messages

  • To import addresses, need format of tab delimited file

    I am trying to import addresses from my G3 PB Mozilla app. I exported the addresses on the G3 to both LDIF and tab-delimited files and then attached them to an email message which I sent to myself. I received the email on my Mac mini, but the two files showed up inline, not as attachments. I copied the LDIF portion to a text file; ditto for the tab-delimited one. I tried importing into Address Book, but got a message that the format was not valid.
    Can't I just edit the tab-delimited file in TextEdit, save it as RTF, and get Address Book to recognize the data?
    Thanks,
    Owen

  • Handling Tab Delimited File generation in non-unicode for multi byte lang

    Hi,
    Requirement:
    We are generating a tab-delimited file in different languages (single-byte and multi-byte) and placing the files on the application server.
    Problem:
    Our system is a non-Unicode system, so we are facing problems generating the tab-delimited file for multi-byte languages like Russian, Japanese, Chinese, etc.
    I am currently using DATA: d_tab TYPE x VALUE '09', but it doesn't work for multi-byte languages. I can't see the tab-delimited file at the application server path.
    Any thoughts on how to proceed with this issue? Please let me know.
    Thanks & Regards,
    Pavan

    Pavan Ravikanti wrote:
    > Thanks for your answer, but do you reckon cl_abap_char_utilities will be a workaround for DATA: d_tab TYPE x VALUE '09'?
    > Pavan.
    On a non-Unicode system the X variant works, but not on a Unicode system; there you must use the class. You can also use the class on a non-Unicode system, and your character variable will always be correct (one byte or two bytes, depending on the system your report runs on).
    What you are planning to do is to load a file with a larger set of possible characters into a system that supports fewer characters. That cannot work.
    What you can do is build a multi-code-page system where the code page is bound to the user or to the logon language. There you can read and process text files in several code pages - but not a text file in Unicode. You have to convert the Unicode text file into a non-Unicode text file before processing it.
    Remember that SAP no longer supports multi-code-page systems, and a multi-code-page system will mean much more work when the system is eventually converted to Unicode.
    Even non-Unicode systems will not be maintained by SAP in the near future.
    What you are encountering here are exactly the problems Unicode was developed to solve. A Unicode system can handle non-Unicode text files, but the other way round will always lead to problems that cannot be solved.
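    For reference, writing the separator via the class looks roughly like this (a minimal sketch for a Unicode-enabled release; the file path and field values are only examples):
    DATA: lv_file TYPE string VALUE '/tmp/tab_test.txt',  " example path only
          lv_line TYPE string.
    " HORIZONTAL_TAB resolves to the correct tab character on both
    " Unicode and non-Unicode systems
    CONCATENATE 'FIELD1' 'FIELD2' 'FIELD3' INTO lv_line
      SEPARATED BY cl_abap_char_utilities=>horizontal_tab.
    OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    TRANSFER lv_line TO lv_file.
    CLOSE DATASET lv_file.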

  • UPLOADING tab delimited file onto FTP server

    Hello all
    Can I upload a tab-delimited file onto the FTP server? If yes, then how?
    Points guaranteed if answered!!

    Hi,
    Yes, you can do this. Have a look at the standard program 'RSEPSFTP'.
    REPORT ZFTPSAP LINE-SIZE 132.
    DATA: BEGIN OF MTAB_DATA OCCURS 0,
    LINE(132) TYPE C,
    END OF MTAB_DATA.
    DATA: MC_PASSWORD(20) TYPE C,
    MI_KEY TYPE I VALUE 26101957,
    MI_PWD_LEN TYPE I,
    MI_HANDLE TYPE I.
    START-OF-SELECTION.
    *-- Your SAP-UNIX FTP password (case sensitive)
    MC_PASSWORD = 'password'.
    DESCRIBE FIELD MC_PASSWORD LENGTH MI_PWD_LEN.
    *-- FTP_CONNECT requires an encrypted password to work
    CALL 'AB_RFC_X_SCRAMBLE_STRING'
         ID 'SOURCE' FIELD MC_PASSWORD ID 'KEY' FIELD MI_KEY
         ID 'SCR' FIELD 'X' ID 'DESTINATION' FIELD MC_PASSWORD
         ID 'DSTLEN' FIELD MI_PWD_LEN.
    CALL FUNCTION 'FTP_CONNECT'
         EXPORTING
    *-- Your SAP-UNIX FTP user name (case sensitive)
           USER            = 'userid'
           PASSWORD        = MC_PASSWORD
    *-- Your SAP-UNIX server host name (case sensitive)
           HOST            = 'unix-host'
           RFC_DESTINATION = 'SAPFTP'
         IMPORTING
           HANDLE          = MI_HANDLE
         EXCEPTIONS
           NOT_CONNECTED   = 1
           OTHERS          = 2.
    CHECK SY-SUBRC = 0.
    CALL FUNCTION 'FTP_COMMAND'
         EXPORTING
           HANDLE = MI_HANDLE
           COMMAND = 'dir'
         TABLES
           DATA = MTAB_DATA
         EXCEPTIONS
           TCPIP_ERROR = 1
           COMMAND_ERROR = 2
           DATA_ERROR = 3
           OTHERS = 4.
    IF SY-SUBRC = 0.
      LOOP AT MTAB_DATA.
        WRITE: / MTAB_DATA.
      ENDLOOP.
    ELSE.
    * do some error checking.
      WRITE: / 'Error in FTP Command'.
    ENDIF.
    CALL FUNCTION 'FTP_DISCONNECT'
         EXPORTING
           HANDLE = MI_HANDLE
         EXCEPTIONS
           OTHERS = 1.  
    Regards
    Sudheer
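    Note that the sample above only lists the directory with 'dir'. To actually upload an internal table as a file, a follow-up call along these lines is the usual approach (just a sketch - it assumes the standard function FTP_R3_TO_SERVER from the same SAPFTP function group, so please verify its interface in SE37 on your release; the target file name is only an example):
    CALL FUNCTION 'FTP_R3_TO_SERVER'
         EXPORTING
           HANDLE         = MI_HANDLE
           FNAME          = 'tab_delimited_file.txt'
           CHARACTER_MODE = 'X'
         TABLES
           TEXT           = MTAB_DATA
         EXCEPTIONS
           TCPIP_ERROR    = 1
           COMMAND_ERROR  = 2
           DATA_ERROR     = 3
           OTHERS         = 4.
    IF SY-SUBRC <> 0.
      WRITE: / 'Error uploading file to FTP server'.
    ENDIF.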

  • Parse tab delimited file after upload

    I could use some advice. I have a requirement to parse and insert an uploaded, tab delimited file using APEX. Directly off of the file system, I would do this with an external table, but here I am uploading the file into a CLOB field. Does PL/SQL have any native functions to parse delimited files?
    The user will not have access to the data load part of APEX, although I have seen numerous requests on this forum for that functionality to be added to the user GUI. Thoughts?

    j,
    I wrote this a while ago...
    http://www.danielmcghan.us/2009/02/easy-csv-uploads-yes-we-can.html
    I've since improved the code a little bit. If you like the solution, I could do a new post with the latest code.
    Regards,
    Dan
    Blog: http://DanielMcGhan.us/
    Work: http://SkillBuilders.com/

  • Tab delimited file as target

    I am using OWB to read data from a bunch of Oracle tables and put the results in a flat file. I've defined it as a tab-delimited file and everything works fine except for one thing. What I need is to include double quotes only for those fields that are of character type. Currently, all the fields are being enclosed in double quotes. Where can I specify double quotes as enclosures for character fields only? Any feedback would be greatly appreciated.
    Thanks.

    The best option I would suggest is:
    Create a staging table with columns of datatype VARCHAR2, or with one single large column of VARCHAR2(4000).
    Put all values from the source table into the staging table by concatenating all the columns.
    Add logic to wrap the required fields in double quotes; this can easily be done in the expression builder by concatenating '"' before and after the string.
    Export the table as tab-delimited, so at the end of it you will have the required fields enclosed in double quotes.

  • How can I save a Numbers spreadsheet as a tab delimited file?

    Okay, I would like to save a Numbers spreadsheet as a tab delimited file. But of course I can't do that, since Numbers only saves as a comma separated values file.
    Plan B: I can open up a CSV file created by Numbers in TextEdit, and I would like to Find and Replace all the commas with tabs. So I open up the Find window, enter a comma under Find, but of course when I put the cursor in the Replace blank and hit the Tab key, it just goes to the next blank instead of giving me a Tab.
    Is there a way I can get a Tab into that blank so I can replace all the commas with tabs?
    --Dave

    Hello
    When we use TextEdit as I described, rows are separated by returns.
    It appears that your program requires linefeeds.
    Here is a neat solution.
    Save this script as an Application Bundle.
    --[SCRIPT tsv2csv]
    (* copy to the clipboard a block of datas separated by tabs
    run the script
    You will get on the desktop a csv file named from the current date-time.
    The values separator is set to match the Bento's requirements:
    If the decimal separator is period, the script uses commas.
    If the decimal separator is comma, the script uses semi-colons.
    This may be changed thru the setting of the property maybeSemiColon.
    According to the property needLF,
    embedded Returns will be replaced by LineFeeds
    or
    embedded LineFeeds will be replaced by Returns
    Yvan KOENIG (Vallauris, FRANCE)
    le 10 octobre 2008 *)
    property needLF : true
    (* true = replace Returns by LineFeeds
    false = replace LineFeeds by Returns *)
    property maybeSemicolon : true
    (* true = use semiColons if decimal separator is comma
    false = always uses commas *)
    --=====
    try
    set |données| to the clipboard as text
    on error
    error "the clipboard doesn't contain text datas !"
    end try
    set line_feed to ASCII character 10
    if |données| contains tab then
    if maybeSemicolon then
    if character 2 of (0.5 as text) is "," then
    set |données| to my remplace(|données|, tab, quote & ";" & quote)
    else
    set |données| to my remplace(|données|, tab, quote & "," & quote)
    end if -- character 2 of (0.5…
    else
    set |données| to my remplace(|données|, tab, quote & "," & quote)
    end if -- maybeSemiColon
    end if -- |données| contains tab
    if needLF then
    if |données| contains return then set |données| to my remplace(|données|, return, line_feed)
    set |données| to my remplace(|données|, line_feed, quote & line_feed & quote)
    else
    if |données| contains line_feed then set |données| to my remplace(|données|, line_feed, return)
    set |données| to my remplace(|données|, return, quote & return & quote)
    end if -- needLF
    set |données| to quote & |données| & quote
    set p2d to path to desktop as text
    set nom to my remplace(((current date) as text) & ".csv", ":", "-") (* unique name *)
    tell application "System Events" to make new file at end of folder p2d with properties {name:nom}
    write |données| to file (p2d & nom)
    --=====
    on remplace(t, tid1, tid2)
    local l
    set AppleScript's text item delimiters to tid1
    set l to text items of t
    set AppleScript's text item delimiters to tid2
    set t to l as text
    set AppleScript's text item delimiters to ""
    return t
    end remplace
    --=====
    --[/SCRIPT]
    Yvan KOENIG (from FRANCE, Friday, October 10, 2008 13:47:36)

  • Trouble importing contacts from a tab-delimited file into Contacts on 10.9.5

    I am having trouble importing contacts from a tab-delimited file into Contacts on a MacBook Pro, OS 10.9.5, Intel 2.4 GHz Core 2 Duo, 400GB drive, 4GB DDR3 memory.
    So far I have:
    - Followed closely the help screen in Contacts app
    - Modified my source document meticulously - first in MS Word, then copied and tweaked in Apple's "Text Edit" app
    - The problem arises when I go to access the document for import. Specifically, the target document, and all others in that folder, are "greyed out" and thus can't be "opened" to facilitate the import process.
    - Tried changing the extension of the document name to ".txt", ".rtf", and ".rtd". No change or improvement.
    - Searched Apple.com/help and found nothing relevant
    Can anyone offer some advice or tell me what I may be overlooking in this process?
    Any help will be greatly appreciated!
    Thanks,
    <Email Edited By Host>

    Hi Rammohan,
    Thanks for the effort!
    But I don't need to use GUI upload, because my functionality does not require fetching data from the presentation server.
    Moreover, the SPLIT command you advised uses separate fields...f1, f2, f3... and I cannot use it because I have 164 fields. I would have to split into 164 fields and assign the values back to 164 fields in the work area/header line.
    Moreover, I have about 10 such work areas, so the effort would be ten times the above! I want to avoid this! Please help!
    I would be very grateful if you could provide an alternative solution.
    Thanks once again,
    Best Regards,
    Vinod.V

  • HT2486 The selected file does not appear to be a valid comma separated values (csv) file or a valid tab delimited file. Choose a different file.

    The selected file does not appear to be a valid comma separated values (csv) file or a valid tab delimited file. Choose a different file.

    I guess your question is, "What's wrong with the file?"
    You're going to have to figure that out yourself, as we cannot see the file.
    Importing into Address Book requires either a tab-delimited or a comma-delimited file. You can export out of most spreadsheets into a CSV file. However, you need to make sure you clean up the file first. If a field has commas in it, they will create new fields at each comma, so some lines will have more fields than others, causing issues like the error you saw.

  • Upload tab-delimited file from the application server to an internal table

    Hello SAPients.
    I'm using OPEN DATASET..., READ DATASET..., CLOSE DATASET to upload a file from the application server (SunOS). I'm working with SAP 4.6C. I'm trying to upload a tab-delimited file into an internal table, but when I try to load it the fields are not correctly separated; in fact, they are all misplaced and the table shows '#' where supposedly there was a tab.
    I tried to SPLIT the line using as separator a variable with reference to CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB but for some reason that class doesn't exist in my system.
    Do you know what I'm doing wrong? or Do you know a better method to upload a tab-delimited file into an internal table?
    Thank you in advance for your help.

    Try:
    REPORT ztest MESSAGE-ID 00.
    PARAMETER: p_file LIKE rlgrap-filename   OBLIGATORY.
    DATA: BEGIN OF data_tab OCCURS 0,
          data(4096),
          END   OF data_tab.
    DATA: BEGIN OF vendor_file_x OCCURS 0.
    * LFA1 Data
    DATA: mandt  LIKE bgr00-mandt,
          lifnr  LIKE blf00-lifnr,
          anred  LIKE blfa1-anred,
          bahns  LIKE blfa1-bahns,
          bbbnr  LIKE blfa1-bbbnr,
          bbsnr  LIKE blfa1-bbsnr,
          begru  LIKE blfa1-begru,
          brsch  LIKE blfa1-brsch,
          bubkz  LIKE blfa1-bubkz,
          datlt  LIKE blfa1-datlt,
          dtams  LIKE blfa1-dtams,
          dtaws  LIKE blfa1-dtaws,
          erdat  LIKE  lfa1-erdat,
          ernam  LIKE  lfa1-ernam,
          esrnr  LIKE blfa1-esrnr,
          konzs  LIKE blfa1-konzs,
          ktokk  LIKE  lfa1-ktokk,
          kunnr  LIKE blfa1-kunnr,
          land1  LIKE blfa1-land1,
          lnrza  LIKE blfa1-lnrza,
          loevm  LIKE blfa1-loevm,
          name1  LIKE blfa1-name1,
          name2  LIKE blfa1-name2,
          name3  LIKE blfa1-name3,
          name4  LIKE blfa1-name4,
          ort01  LIKE blfa1-ort01,
          ort02  LIKE blfa1-ort02,
          pfach  LIKE blfa1-pfach,
          pstl2  LIKE blfa1-pstl2,
          pstlz  LIKE blfa1-pstlz,
          regio  LIKE blfa1-regio,
          sortl  LIKE blfa1-sortl,
          sperr  LIKE blfa1-sperr,
          sperm  LIKE blfa1-sperm,
          spras  LIKE blfa1-spras,
          stcd1  LIKE blfa1-stcd1,
          stcd2  LIKE blfa1-stcd2,
          stkza  LIKE blfa1-stkza,
          stkzu  LIKE blfa1-stkzu,
          stras  LIKE blfa1-stras,
          telbx  LIKE blfa1-telbx,
          telf1  LIKE blfa1-telf1,
          telf2  LIKE blfa1-telf2,
          telfx  LIKE blfa1-telfx,
          teltx  LIKE blfa1-teltx,
          telx1  LIKE blfa1-telx1,
          xcpdk  LIKE  lfa1-xcpdk,
          xzemp  LIKE blfa1-xzemp,
          vbund  LIKE blfa1-vbund,
          fiskn  LIKE blfa1-fiskn,
          stceg  LIKE blfa1-stceg,
          stkzn  LIKE blfa1-stkzn,
          sperq  LIKE blfa1-sperq,
          adrnr  LIKE  lfa1-adrnr,
          mcod1  LIKE  lfa1-mcod1,
          mcod2  LIKE  lfa1-mcod2,
          mcod3  LIKE  lfa1-mcod3,
          gbort  LIKE blfa1-gbort,
          gbdat  LIKE blfa1-gbdat,
          sexkz  LIKE blfa1-sexkz,
          kraus  LIKE blfa1-kraus,
          revdb  LIKE blfa1-revdb,
          qssys  LIKE blfa1-qssys,
          ktock  LIKE blfa1-ktock,
          pfort  LIKE blfa1-pfort,
          werks  LIKE blfa1-werks,
          ltsna  LIKE blfa1-ltsna,
          werkr  LIKE blfa1-werkr,
          plkal  LIKE  lfa1-plkal,
          duefl  LIKE  lfa1-duefl,
          txjcd  LIKE blfa1-txjcd,
          sperz  LIKE  lfa1-sperz,
          scacd  LIKE blfa1-scacd,
          sfrgr  LIKE blfa1-sfrgr,
          lzone  LIKE blfa1-lzone,
          xlfza  LIKE  lfa1-xlfza,
          dlgrp  LIKE blfa1-dlgrp,
          fityp  LIKE blfa1-fityp,
          stcdt  LIKE blfa1-stcdt,
          regss  LIKE blfa1-regss,
          actss  LIKE blfa1-actss,
          stcd3  LIKE blfa1-stcd3,
          stcd4  LIKE blfa1-stcd4,
          ipisp  LIKE blfa1-ipisp,
          taxbs  LIKE blfa1-taxbs,
          profs  LIKE blfa1-profs,
          stgdl  LIKE blfa1-stgdl,
          emnfr  LIKE blfa1-emnfr,
          lfurl  LIKE blfa1-lfurl,
          j_1kfrepre  LIKE blfa1-j_1kfrepre,
          j_1kftbus   LIKE blfa1-j_1kftbus,
          j_1kftind   LIKE blfa1-j_1kftind,
          confs  LIKE  lfa1-confs,
          updat  LIKE  lfa1-updat,
          uptim  LIKE  lfa1-uptim,
          nodel  LIKE blfa1-nodel.
    DATA: END   OF vendor_file_x.
    FIELD-SYMBOLS:  <field>,
                    <field_1>.
    DATA: delim          TYPE x        VALUE '09'.
    DATA: fld_chk(4096),
          last_char,
          quote_1     TYPE i,
          quote_2     TYPE i,
          fld_lth     TYPE i,
          columns     TYPE i,
          field_end   TYPE i,
          outp_rec    TYPE i,
          extras(3)   TYPE c        VALUE '.,"',
          mixed_no(14) TYPE c        VALUE '1234567890-.,"'.
    OPEN DATASET p_file FOR INPUT.
    DO.
      READ DATASET p_file INTO data_tab-data.
      IF sy-subrc = 0.
        APPEND data_tab.
      ELSE.
        EXIT.
      ENDIF.
    ENDDO.
    * count columns in output structure
    DO.
      ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      columns = sy-index.
    ENDDO.
    * Assign elements of input file to internal table
    CLEAR vendor_file_x.
    IF columns > 0.
      LOOP AT data_tab.
        DO columns TIMES.
          ASSIGN space TO <field>.
          ASSIGN space TO <field_1>.
          ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
          SEARCH data_tab-data FOR delim.
          IF sy-fdpos > 0.
            field_end = sy-fdpos + 1.
            ASSIGN data_tab-data(sy-fdpos) TO <field_1>.
    * Check that numeric fields don't contain any embedded " or ,
            IF <field_1> CO mixed_no AND
               <field_1> CA extras.
              TRANSLATE <field_1> USING '" , '.
              CONDENSE <field_1> NO-GAPS.
            ENDIF.
    * If first and last characters are '"', remove both.
            fld_chk = <field_1>.
            IF NOT fld_chk IS INITIAL.
              fld_lth = strlen( fld_chk ) - 1.
              MOVE fld_chk+fld_lth(1) TO last_char.
              IF fld_chk(1) = '"' AND
                 last_char = '"'.
                MOVE space TO fld_chk+fld_lth(1).
                SHIFT fld_chk.
                MOVE fld_chk TO <field_1>.
              ENDIF.       " for if fld_chk(1)=" & last_char="
            ENDIF.         " for if not fld_chk is initial
    * Replace "" with "
            DO.
              IF fld_chk CS '""'.
                quote_1 = sy-fdpos.
                quote_2 = sy-fdpos + 1.
                MOVE fld_chk+quote_2 TO fld_chk+quote_1.
              ELSE.
                MOVE fld_chk TO <field_1>.
                EXIT.
              ENDIF.
            ENDDO.
            <field> = <field_1>.
          ELSE.
            field_end = 1.
          ENDIF.
          SHIFT data_tab-data LEFT BY field_end PLACES.
        ENDDO.
        APPEND vendor_file_x.
        CLEAR vendor_file_x.
      ENDLOOP.
    ENDIF.
    CLEAR   data_tab.
    REFRESH data_tab.
    FREE    data_tab.
    Rob
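    On releases where CL_ABAP_CHAR_UTILITIES is available, the same file can usually be read much more compactly (a rough sketch only, since the original poster's 4.6C system apparently lacks the class):
    DATA: lv_line   TYPE string,
          lt_fields TYPE TABLE OF string.
    OPEN DATASET p_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    DO.
      READ DATASET p_file INTO lv_line.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      " one tab-delimited line -> one table entry per field
      SPLIT lv_line AT cl_abap_char_utilities=>horizontal_tab
            INTO TABLE lt_fields.
      " ...move the entries of lt_fields into the target work area here...
    ENDDO.
    CLOSE DATASET p_file.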

  • Read Tab delimited File from Application server

    Hi Experts,
    I am facing a problem while reading a file from the application server.
    The file on the application server is stored as follows; it is a tab-delimited file.
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    I have downloaded this file from the application server using transaction CG3Y. The downloaded file is a tab-delimited file and I could not see '#' in it.
    The code is as below.
    c_split  TYPE abap_char1 VALUE cl_abap_char_utilities=>horizontal_tab.
    Here I am using IGNORING CONVERSION ERRORS in order to avoid a conversion-error short dump.
    OPEN DATASET wa_filename-file FOR INPUT IN TEXT MODE ENCODING DEFAULT IGNORING CONVERSION ERRORS.
          IF sy-subrc = 0.
            WRITE : /,'...Processing file - ', wa_filename-file.   
           DO.
          " Read the contents of the file
              READ DATASET wa_filename-file INTO wa_file-data.
              IF sy-subrc = 0.
                SPLIT wa_file-data AT c_split INTO wa_adrc_2-kunnr
                                                   wa_adrc_2-title
                                                   wa_adrc_2-name1
                                                   wa_adrc_2-name2
                                                   wa_adrc_2-name3
                                                   wa_adrc_2-name4
                                                   wa_adrc_2-name_co
                                                   wa_adrc_2-city1
                                                   wa_adrc_2-city2
                                                   wa_adrc_2-regiogroup
                                                   wa_adrc_2-post_code1
                                                   wa_adrc_2-post_code2
                                                   wa_adrc_2-po_box
                                                   wa_adrc_2-po_box_loc
                                                   wa_adrc_2-transpzone
                                                   wa_adrc_2-street
                                                   wa_adrc_2-house_num1
                                                   wa_adrc_2-house_num2
                                                   wa_adrc_2-str_suppl1
                                                   wa_adrc_2-str_suppl2
                                                   wa_adrc_2-country
                                                   wa_adrc_2-langu
                                                   wa_adrc_2-region
                                                   wa_adrc_2-sort1
                                                   wa_adrc_2-sort2
                                                   wa_adrc_2-deflt_comm
                                                   wa_adrc_2-tel_number
                                                   wa_adrc_2-tel_extens
                                                   wa_adrc_2-fax_number
                                                   wa_adrc_2-fax_extens
                                                   wa_adrc_2-taxjurcode.
    WA_FILE-DATA has the values below:
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    This is split at the tab delimiter and moved to the other variables as shown above.
    Please guide me on how to read the contents from the file without the '#'.
    I have tried all possible ways and have been unable to find a solution.
    Thanks,
    Shrikanth

    Hi,
    In ECC 6, if all the Unicode patches are applied, then UTF-16 will definitely work.
    Moreover, I would suggest you first replace # with some other character, such as * or a comma, and then check in the debugger whether any further # appears.
    If no # appears, then try to split.
    If the # still appears after the REPLACE statement, then try to find out what exactly it is - whether it is a horizontal tab, etc.
    Then try to replace it again and then split.
    Please follow this process until all the # characters are replaced.
    This should work for you.
    Let me know if you face any further issues.
    Regards
    Satish Boguda
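    If the '#' characters really are the null bytes of a UTF-16 encoded file, another option is to read the file in binary mode and convert it explicitly (a sketch under that assumption; code page 4102 is UTF-16BE and 4103 is UTF-16LE, and a byte order mark at the start may still have to be removed):
    DATA: lv_xdata TYPE xstring,
          lv_text  TYPE string,
          lt_lines TYPE TABLE OF string,
          lo_conv  TYPE REF TO cl_abap_conv_in_ce.
    OPEN DATASET wa_filename-file FOR INPUT IN BINARY MODE.
    READ DATASET wa_filename-file INTO lv_xdata.
    CLOSE DATASET wa_filename-file.
    " choose '4102' or '4103' depending on the file's byte order
    lo_conv = cl_abap_conv_in_ce=>create( encoding = '4103' ).
    lo_conv->convert( EXPORTING input = lv_xdata
                      IMPORTING data  = lv_text ).
    " the text no longer contains the '#' filler characters and can be split
    SPLIT lv_text AT cl_abap_char_utilities=>cr_lf INTO TABLE lt_lines.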

  • GUI_Download problem in tab delimited file

    Hi,
    I am trying to download a tab-delimited file using function module GUI_DOWNLOAD. We are on
    version 4.6C. I have declared an internal table with 5 columns. The file created by the function module is very strange: columns 1 and 2, and columns 3 and 4, are separated by a tab, but columns 2
    and 3, and columns 4 and 5, are not separated by a tab and instead have a fixed length.
    I am not seeing where I am making a mistake. Any help is appreciated.
    Below is the structure of internal table.
    DATA:BEGIN OF IT_FINAL OCCURS 0,
           Company(3) type c,
           County(30) type c,
           FId(35) type c,
           Wellno(15) type c,
           Welldesc(35) type c,
         END OF IT_FINAL.
    Data populated in table it_final.
    it_final-Company = '300'.
    it_final-County = 'Greenwood'.
    it_final-fid = 'testts'.
    it_final-wellno = '10000000'.
    it_final-welldesc = 'tebsbvd'.
    append it_final.
    F.M GUI_DOWNLOAD
    CALL FUNCTION 'GUI_DOWNLOAD'
      EXPORTING
    *   BIN_FILESIZE                  =
        filename                      = l_path
    *   FILETYPE                      = 'ASC'
    *   APPEND                        = ' '
        write_field_separator         = 'X'
    *   HEADER                        = '00'
    *   TRUNC_TRAILING_BLANKS         = ' '
    *   WRITE_LF                      = 'X'
    *   COL_SELECT                    = ' '
    *   COL_SELECT_MASK               = ' '
    *   DAT_MODE                      = ' '
    * IMPORTING
    *   FILELENGTH                    =
      TABLES
        data_tab                      = it_final
      EXCEPTIONS
        file_write_error              = 1
        no_batch                      = 2
        gui_refuse_filetransfer       = 3
        invalid_type                  = 4
        no_authority                  = 5
        unknown_error                 = 6
        header_not_allowed            = 7
        separator_not_allowed         = 8
        filesize_not_allowed          = 9
        header_too_long               = 10
        dp_error_create               = 11
        dp_error_send                 = 12
        dp_error_write                = 13
        unknown_dp_error              = 14
        access_denied                 = 15
        dp_out_of_memory              = 16
        disk_full                     = 17
        dp_timeout                    = 18
        file_not_found                = 19
        dataprovider_exception        = 20
        control_flush_error           = 21
        OTHERS                        = 22.
    Thanks,
    Rekha.

    Hi,
    I think it is because you need to uncomment:
         FILETYPE                      = 'ASC'
    Try it.
    Best regards.
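    For comparison, a minimal call that puts a tab between every column could look like this (a sketch only, reusing the names from the post and leaving all optional parameters out):
    DATA lv_file TYPE string.
    lv_file = l_path.    " GUI_DOWNLOAD expects a STRING file name
    CALL FUNCTION 'GUI_DOWNLOAD'
      EXPORTING
        filename              = lv_file
        filetype              = 'ASC'
        write_field_separator = 'X'
      TABLES
        data_tab              = it_final
      EXCEPTIONS
        OTHERS                = 1.
    IF sy-subrc <> 0.
      WRITE: / 'GUI_DOWNLOAD failed, sy-subrc =', sy-subrc.
    ENDIF.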

  • Tab Delimited File?

    Hi All,
             I am trying to create a tab-delimited file using the sender file adapter with File Content Conversion in the adapter module. I want a constant space of 8 characters between the fields. When I use '        ' as the field separator, the file that is generated is: 19'     'Stringtest14,Hiring'     '09/03/2007'     'Managing Dir, Operations. I want the file without the quotes, i.e.: 19     Stringtest14,Hiring'     09/03/2007     Managing Dir, Operations.
    Please advise...
    Regards,
    XIer

    Hi XIer,
    Content conversion parameters, just like:
    Recordset structure = DT_PRY,*
    DT_PRY.fieldnames = ID,empname,empno,country
    DT_PRY.fieldSeparator = '0x09' (tab delimited)
    DT_PRY.endSeparator = 'nl'
    The FCC...
    Document Name : MT_PRY
    Document Namespace : http://product.com/prvsoup
    Recordset Name : recordset
    Recordset structure = DT_PRY,*
    DT_PRY.fieldnames = ID,empname,empno,country
    DT_PRY.fieldSeparator = '0x09'
    DT_PRY.endSeparator = 'nl'
    ignoreRecordsetName = true
    http://help.sap.com/saphelp_nw04/helpdata/en/14/80243b4a66ae0ce10000000a11402f/frameset.htm
    You can take the help of this blog:
    /people/shabarish.vijayakumar/blog/2005/08/17/nab-the-tab-file-adapter
    Conversion agent:
    /people/william.li/blog/2006/03/17/how-to-get-started-using-conversion-agent-from-itemfield
    /people/paul.medaille/blog/2005/11/18/conversion-agent-a-free-lunch
    /people/alexander.bundschuh/blog/2006/03/14/integrate-sap-conversion-agent-by-itemfield-with-sap-xi
    /people/paul.medaille/blog/2005/11/17/more-on-the-sap-conversion-agent-by-itemfield
    Thanks!
    Questions are welcome here!!
    Also mark helpful answers by rewarding points.
    Regards
    Abhishek Agrahari

  • Import text into a form from a tab delimited file using an action

    Good evening.
    I am using Adobe Acrobat XI Pro
    I have been working all weekend so far on this, and have tried many options.
    What I want is to import a line of data from an Excel file into a form, save the file with a name derived from the form fields, close the new file, then go back to the original file, import a line of data, and so on and so on.
    I had all of this working in LiveCycle with a tool, but now that we are going to Windows 8 and Microsoft Office 2013, the driver that I need to load the Excel file into the form is not available--so now I have to redo the forms in Adobe and import using a tab-delimited file.
    I would like to do this with an action.
    Thus far, I have the following.
    **Credit goes to George Johnson who helped me with the script to save the file using field names a few months ago**
    To start, I have a folder level script
    mySaveAs = app.trustPropagatorFunction(function (doc, path) {
        app.beginPriv();
        var myDoc = event.target;
        myDoc.saveAs(path);
        app.endPriv();
    });
    myTrustedSpecialTaskFunc = app.trustedFunction(function (doc, path) {
        // privileged and/or non-privileged code above
        app.beginPriv();
        mySaveAs(doc, path);
        app.endPriv();
        // privileged and/or non-privileged code below
    });
    For the Action, I start with the Form file loaded
    Then I execute a javascript to import the text into the file.
    this.importTextData("/c/data/clerical.txt");
    The next step is to save the file using field names as part of the file name
    --credit goes to George Johnson who assisted me with this on a similar project a few months ago.
    //Get the Field Value
    var fn=getField("Last").valueAsString + "-" + getField("First").valueAsString;
    //Specify the folder
    var fldr = "/c/data/";
    //Determine the full path
    var fp=fldr + fn + ".pdf";
    //save the file
    myTrustedSpecialTaskFunc(this, fp);
    That is as far as I have gotten. 
    What I want to do now is load the original form file, import the next line into the form, save the file, repeat.
    If anyone could assist me, I would greatly appreciate it.
    **Note: I had to type this from my iPad because I kept getting kicked off the internet from my computer. I hope my code is typed correctly.

    George
    Thank you so much for your response.
    Yes--there are multiple lines in the data file--it will vary from month to month. From what you are saying, it looks like I need to use a global variable, since it will constantly change, and then increment it as it goes through the text file. When it reaches the end, it will stop.
    I looked at the following links--but this is from Adobe 9 API--I don't know if things have changed with XI--especially since 9 used batch processing and XI has actions.
    Count PDF Files
    http://help.adobe.com/livedocs/acrobat_sdk/9.1/Acrobat9_1_HTMLHelp/wwhelp/wwhimpl/js/html/wwhelp.htm?href=JS_API_AcroJS.88.1.html
    Global counter
    http://help.adobe.com/livedocs/acrobat_sdk/9.1/Acrobat9_1_HTMLHelp/wwhelp/wwhimpl/js/html/wwhelp.htm?href=JS_API_AcroJS.88.1.html
    I am not exactly sure how to include this in my code.
    If you could point me in the right direction, I would appreciate it.

  • How to convert Mail attachment file Tab Delimited file to XML.

    Hi PI Experts,
        I have an XI scenario: MAIL -> XI -> SAP (ABAP proxy). The process is:
    1.     XI will receive a tab-delimited file as an attachment in a mail.
    2.     XI will convert the tab-delimited file into XML, then map it to the target structure.
    3.     The target will be posted into ECC through an ABAP proxy.
    Can anyone help me with how to convert the attachment file (i.e. the tab-delimited file) to XML while configuring the sender side?
    Thanks in advance

    I just grabbed what I answered in another thread. It should work.
    Processing sequence as follows:
    1. localejbs/AF_Modules/PayloadSwapBean Local Enterprise Bean swap1
    2. localejbs/AF_Modules/MessageTransformBean Local Enterprise Bean tran1
    3. sap.com/com.sap.aii.adapter.mail.app/XIMailAdapterBean Local Enterprise Bean mail
    Module Configuration as follows: Fill the XXXX part with your info.
    swap1 swap.keyName payload-name
    swap1 swap.keyValue MailAttachment-1
    tran1 Transform.Class com.sap.aii.messaging.adapter.Conversion
    tran1 Transform.ContentType text/xml;charset=utf-8
    tran1 xml.conversionType SimplePlain2XML
    tran1 xml.documentName XXXXXXX_Mail
    tran1 xml.documentNamespace http://XXXXX.com.au/XXXX
    tran1 xml.fieldSeparator \t
    tran1 xml.processFieldNames fromConfiguration
    tran1 xml.structureTitle rows
    Once you set up the above configuration, you will get one record at a time.
    Create a source message interface like the following:
    XXXXXXXXX_Mail
    rows
    record
    Target message interface:
    XXXXXXXXX
    rows
    field 1
    field 2
    field 3
    Write a UDF function to remove the TAB space
    public String removeTABSpace(String record, Container container) {
        // concatenate all tab-separated tokens into one string
        StringTokenizer st = new StringTokenizer(record, "\t", false);
        String t = "";
        while (st.hasMoreElements()) t += st.nextElement();
        return t;
    }
    Write another UDF to get field 1, for example:
    public String getField1(String input, Container container) {
        int counter = 0;
        int beginIndex = 0;
        int endIndex = 0;
        int i;
        // find the first double quote (ASCII 34); the field starts right after it
        for (i = 0; i < input.length(); i++) {
            if (input.charAt(i) == 34) {
                counter = counter + 1;
                if (counter == 1) {
                    beginIndex = i + 1;
                    counter = 0;
                    break;
                }
            }
        }
        // find the second double quote; the field ends just before it
        for (i = 0; i < input.length(); i++) {
            if (input.charAt(i) == 34) {
                counter = counter + 1;
                if (counter == 2) {
                    endIndex = i;
                    counter = 0;
                    break;
                }
            }
        }
        input = input.substring(beginIndex, endIndex);
        return input;
    }
    Set up the mapping as follows:
    record - removeTABSpace - getField1 - field 1
    If you need to get field 2, you will need to write another UDF similar to the above one to handle it.
