Tab delimited file on app server. How to do that?

Hello,
When I transfer a tab-delimited file from the presentation server (Windows) to the application server (Windows) using CG3Z, the tab delimiting is not preserved.
For example, a file with the layout below
01.04.2007     31.03.2008     1     120     
01.05.2007     31.07.2008     2     140     
is changed on the application server to
01.04.2007#31.03.2008#1#120     
01.05.2007#31.07.2008#2#140     
All I need is a tab-delimited file on the app server as well. What do I need to do for this? Also, how can I write a tab-delimited file on the app server from my program using OPEN DATASET?
Thanks for your time.
Jeeva.

Hi,
Check this code; you will find the solution.
Use the static attribute CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB.
Example:
This is the simple way to write the internal table to the file with a tab delimiter:
DATA: V_REC(200).
OPEN DATASET P_FILE FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
LOOP AT ITAB INTO WA.
* Build one tab-delimited record per table line
  CONCATENATE WA-FIELD1 WA-FIELD2
         INTO V_REC
         SEPARATED BY CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB.
  TRANSFER V_REC TO P_FILE.
ENDLOOP.
CLOSE DATASET P_FILE.
Note: if there are any numeric fields (type I, P, F) in your internal table, move them to character fields before the CONCATENATE, as sketched below.
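A minimal sketch of that conversion (the field names WA-QTY and WA-FIELD1 are examples, not from the original post):
DATA: V_QTY_C(20) TYPE C.
* WRITE ... TO converts a numeric value (I, P, F) into its character representation
WRITE WA-QTY TO V_QTY_C LEFT-JUSTIFIED.
CONDENSE V_QTY_C NO-GAPS.
CONCATENATE WA-FIELD1 V_QTY_C
       INTO V_REC
       SEPARATED BY CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB.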
Example of reading the file back:
DATA: V_RECORD(200).
OPEN DATASET P_FILE FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF SY-SUBRC <> 0.
  EXIT.
ENDIF.
DO.
  READ DATASET P_FILE INTO V_RECORD.
  IF SY-SUBRC NE 0.
    EXIT.
  ENDIF.
* Split each record at the tab character back into the work area fields
  SPLIT V_RECORD AT CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB
                 INTO WA-FIELD1 WA-FIELD2.
  APPEND WA TO ITAB.
ENDDO.
CLOSE DATASET P_FILE.

Similar Messages

  • Generating tab delimited file on presentation server

    Hi All,
    I have to generate a tab-delimited file on the presentation server. The file shouldn't have '#' as the separator; instead, the fields should be separated by an actual TAB character. Please advise me how to do this.
    Thanks in Advance,
    Regards,
    Phani

    Hi,
    Check this example..
    DATA: BEGIN OF itab OCCURS 0,
            matnr TYPE matnr,
            werks TYPE werks_d,
          END OF itab.

    itab-matnr = 'ABC'.
    itab-werks = 'ASDF'.
    APPEND itab.
    itab-matnr = 'EFG'.
    itab-werks = 'DHIS'.
    APPEND itab.

    CALL FUNCTION 'GUI_DOWNLOAD'
      EXPORTING
        filename                = 'C:\test.txt'
        write_field_separator   = 'X'
      TABLES
        data_tab                = itab
      EXCEPTIONS
        file_write_error        = 1
        no_batch                = 2
        gui_refuse_filetransfer = 3
        invalid_type            = 4
        no_authority            = 5
        unknown_error           = 6
        header_not_allowed      = 7
        separator_not_allowed   = 8
        filesize_not_allowed    = 9
        header_too_long         = 10
        dp_error_create         = 11
        dp_error_send           = 12
        dp_error_write          = 13
        unknown_dp_error        = 14
        access_denied           = 15
        dp_out_of_memory        = 16
        disk_full               = 17
        dp_timeout              = 18
        file_not_found          = 19
        dataprovider_exception  = 20
        control_flush_error     = 21
        OTHERS                  = 22.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ELSE.
      MESSAGE s208(00) WITH 'File downloaded'.
    ENDIF.
    Thanks,
    Naren

  • How to extract a tab delimiter file on appli server in openhub destination

    Hi All,
    In my scenario I want to download a file to the application server in tab-delimited format from an open hub destination, but I am not able to do it. While creating the OHD I select ';' (semicolon) in the separator field; a tab-delimited format is not possible with a semicolon, and I tried a comma as well but still do not get the tab-delimited format. Please help me get that particular format.
    thanks
    vamshi..

    Hi Dashyam,
    These may help you:
    with comma delimiter file read
    I want to download comma delimiter file on to presentation server from inte
    http://wiki.sdn.sap.com/wiki/display/ABAP/UploadaCommaDelimitedCSVfilethatcontainscommasindata+fields
    Regards,
    Suman

  • Read Tab delimited File from Application server

    Hi Experts,
    I am facing a problem while reading a file from the application server.
    The file on the application server is stored as follows; it is a tab-delimited file.
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    I downloaded this file from the application server using transaction CG3Y; the downloaded file is tab-delimited and I could not see '#' in it.
    The code is as below.
    c_split  TYPE abap_char1 VALUE cl_abap_char_utilities=>horizontal_tab.
    Here I am using IGNORING CONVERSION ERRORS in order to avoid a conversion error short dump.
    OPEN DATASET wa_filename-file FOR INPUT IN TEXT MODE ENCODING DEFAULT IGNORING CONVERSION ERRORS.
          IF sy-subrc = 0.
        WRITE : /, '...Processing file - ', wa_filename-file.
        DO.
          " Read the contents of the file
              READ DATASET wa_filename-file INTO wa_file-data.
              IF sy-subrc = 0.
                SPLIT wa_file-data AT c_split INTO wa_adrc_2-kunnr
                                                   wa_adrc_2-title
                                                   wa_adrc_2-name1
                                                   wa_adrc_2-name2
                                                   wa_adrc_2-name3
                                                   wa_adrc_2-name4
                                                   wa_adrc_2-name_co
                                                   wa_adrc_2-city1
                                                   wa_adrc_2-city2
                                                   wa_adrc_2-regiogroup
                                                   wa_adrc_2-post_code1
                                                   wa_adrc_2-post_code2
                                                   wa_adrc_2-po_box
                                                   wa_adrc_2-po_box_loc
                                                   wa_adrc_2-transpzone
                                                   wa_adrc_2-street
                                                   wa_adrc_2-house_num1
                                                   wa_adrc_2-house_num2
                                                   wa_adrc_2-str_suppl1
                                                   wa_adrc_2-str_suppl2
                                                   wa_adrc_2-country
                                                   wa_adrc_2-langu
                                                   wa_adrc_2-region
                                                   wa_adrc_2-sort1
                                                   wa_adrc_2-sort2
                                                   wa_adrc_2-deflt_comm
                                                   wa_adrc_2-tel_number
                                                   wa_adrc_2-tel_extens
                                                   wa_adrc_2-fax_number
                                                   wa_adrc_2-fax_extens
                                                   wa_adrc_2-taxjurcode.
    WA_FILE-DATA contains the values below:
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    This is split at the tab delimiter and moved to the variables as shown above.
    Please guide me on how to read the contents without '#' from the file.
    I have tried all possible ways and am unable to find a solution.
    Thanks,
    Shrikanth

    Hi,
    In ECC 6, if all the Unicode patches are applied, then UTF-16 will definitely work.
    Moreover, I would suggest you first replace # with some other character such as * or , and then check in the debugger whether any further # appears.
    If no # appears, try the split.
    If # still appears after the REPLACE statement, try to find out what exactly it is, whether it is a horizontal tab etc., and then replace it again and split.
    Follow this process until all the # are replaced.
    This should work for you.
    Let me know if you face any further issues.
    Regards
    Satish Boguda
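    A possible concrete approach for this case (not from the reply above, only a hedged sketch): a # between every character usually means the file was written in UTF-16. One option is to read the file in binary mode and convert it explicitly; code page 4103 is UTF-16LE (use 4102 for UTF-16BE). Verify the class interface on your release.
    DATA: lv_xdata TYPE xstring,
          lv_text  TYPE string,
          lo_conv  TYPE REF TO cl_abap_conv_in_ce.
    " Read the whole file as raw bytes
    OPEN DATASET wa_filename-file FOR INPUT IN BINARY MODE.
    READ DATASET wa_filename-file INTO lv_xdata.
    CLOSE DATASET wa_filename-file.
    " Convert the UTF-16LE bytes into a string in the system code page
    lo_conv = cl_abap_conv_in_ce=>create( encoding = '4103'
                                          input    = lv_xdata ).
    lo_conv->read( IMPORTING data = lv_text ).
    " lv_text now holds the file content; split it at cl_abap_char_utilities=>cr_lf
    " into lines and then at horizontal_tab into fields, as in the original code.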

  • UPLOADING tab delimited file onto FTP server

    Hello all
    Can I upload a tab-delimited file to the FTP server? If yes, then how?

    Hi,
    Yes, you can do this. Have a look at the standard program 'RSEPSFTP'.
    REPORT ZFTPSAP LINE-SIZE 132.
    DATA: BEGIN OF MTAB_DATA OCCURS 0,
    LINE(132) TYPE C,
    END OF MTAB_DATA.
    DATA: MC_PASSWORD(20) TYPE C,
    MI_KEY TYPE I VALUE 26101957,
    MI_PWD_LEN TYPE I,
    MI_HANDLE TYPE I.
    START-OF-SELECTION.
    *-- Your SAP-UNIX FTP password (case sensitive)
    MC_PASSWORD = 'password'.
    DESCRIBE FIELD MC_PASSWORD LENGTH MI_PWD_LEN.
    *-- FTP_CONNECT requires an encrypted password to work
    CALL 'AB_RFC_X_SCRAMBLE_STRING'
         ID 'SOURCE' FIELD MC_PASSWORD ID 'KEY' FIELD MI_KEY
         ID 'SCR' FIELD 'X' ID 'DESTINATION' FIELD MC_PASSWORD
         ID 'DSTLEN' FIELD MI_PWD_LEN.
    CALL FUNCTION 'FTP_CONNECT'
         EXPORTING
    *-- Your SAP-UNIX FTP user name (case sensitive)
           USER            = 'userid'
           PASSWORD        = MC_PASSWORD
    *-- Your SAP-UNIX server host name (case sensitive)
           HOST            = 'unix-host'
           RFC_DESTINATION = 'SAPFTP'
         IMPORTING
           HANDLE          = MI_HANDLE
         EXCEPTIONS
           NOT_CONNECTED   = 1
           OTHERS          = 2.
    CHECK SY-SUBRC = 0.
    CALL FUNCTION 'FTP_COMMAND'
         EXPORTING
           HANDLE = MI_HANDLE
           COMMAND = 'dir'
         TABLES
           DATA = MTAB_DATA
         EXCEPTIONS
           TCPIP_ERROR = 1
           COMMAND_ERROR = 2
           DATA_ERROR = 3
           OTHERS = 4.
    IF SY-SUBRC = 0.
      LOOP AT MTAB_DATA.
        WRITE: / MTAB_DATA.
      ENDLOOP.
    ELSE.
    * do some error checking.
      WRITE: / 'Error in FTP Command'.
    ENDIF.
    CALL FUNCTION 'FTP_DISCONNECT'
         EXPORTING
           HANDLE = MI_HANDLE
         EXCEPTIONS
           OTHERS = 1.  
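    The sample above only issues a 'dir' command. To actually push a tab-delimited table to the server you can call the standard FM FTP_R3_TO_SERVER after FTP_CONNECT. The following is only a hedged sketch (target path and field values are examples; check the exact interface of the FM in SE37 on your release):
    DATA: BEGIN OF MTAB_TEXT OCCURS 0,
            LINE(132) TYPE C,
          END OF MTAB_TEXT.
    * Build one tab-delimited line (example values)
    CONCATENATE 'FIELD1' 'FIELD2' INTO MTAB_TEXT-LINE
      SEPARATED BY CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB.
    APPEND MTAB_TEXT.
    CALL FUNCTION 'FTP_R3_TO_SERVER'
         EXPORTING
           HANDLE         = MI_HANDLE
           FNAME          = '/tmp/test.txt'    "target path - example only
           CHARACTER_MODE = 'X'
         TABLES
           TEXT           = MTAB_TEXT
         EXCEPTIONS
           TCPIP_ERROR    = 1
           COMMAND_ERROR  = 2
           DATA_ERROR     = 3
           OTHERS         = 4.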
    Regards
    Sudheer

  • Reading tab delimited file from application server

    Hi All,
    I know that we need to use OPEN DATASET to read a file from the application server, but my question is: when you use READ DATASET v_file INTO wa_final, wa_final is a work area, and I have also declared an internal table. So do we need to split the record at the tab into the corresponding fields and then append the records to an internal table i_input?
    Please let me know on this....
    thanks in advance....
    Poonam....

    Hi,
    First check the file contents on the application server to see whether the fields are separated by any symbol. If they are, you have to split the data at that separator before appending it to the internal table.
    Check this code:
    DATA: l_data_string TYPE string.
    filename = p_file.
    OPEN DATASET filename FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc EQ 0.
      DO.
        READ DATASET filename INTO l_data_string.
        IF sy-subrc NE 0.
          EXIT.
        ENDIF.
        CLEAR k_input.
        " The '#' shown in AL11 is only how the tab is displayed - split at the real tab
        SPLIT l_data_string AT cl_abap_char_utilities=>horizontal_tab
              INTO k_input-agreement k_input-suffix k_input-status
                   k_input-first_name k_input-last_name k_input-job_title
                   k_input-tel k_input-fax k_input-email_address k_input-mob_number.
        APPEND k_input TO i_input.
      ENDDO.
      CLOSE DATASET filename.
    ENDIF.
    Regards,
    Venu

  • Handling Tab Delimited File generation in non-unicode for multi byte lang

    Hi,
    Requirement:
    We are generating a Tab Delimited File in different languages (Single Byte and Multi Byte) and placing the files at application server.
    Problem:
    Our system is a non-Unicode system, so we are facing problems generating a tab-delimited file for multibyte languages like Russian, Japanese, Chinese etc.
    I am currently using DATA: d_tab TYPE x VALUE '09', but it does not work for multibyte languages; I cannot see a tab-delimited file at the application server path.
    Any thoughts on how to proceed with this issue? Please let me know.
    Thanks & Regards,
    Pavan

    >
    Pavan Ravikanti wrote:
    > Thanks for your answer but do you reckon cl_abap_char_utilities will be a work around for data: d_tab type X VALUE '09' .
    > Pavan.
    On a non-Unicode system the TYPE X variant works, but not on a Unicode system; there you must use the class. On the other hand, you can also use the class on a non-Unicode system, and your character variable will always be correct (one byte or two bytes, depending on the system your report is running on).
    What you are planning to do is to put a file with a larger set of possible characters into a system that supports a smaller set of characters. That cannot work.
    What you can do is build a multi-code-page system where the code page is bound to the user or to the logon language. There you can read and process text files in several code pages, but not a text file in Unicode; you have to convert the Unicode text file into a non-Unicode text file before processing it.
    Remember that SAP no longer supports multi-code-page systems, and they result in much more work when converting the system to Unicode.
    Even non-Unicode systems will not be maintained by SAP for much longer.
    What you are encountering here are the problems that Unicode was developed to solve. A Unicode system can handle non-Unicode text files, but the other way round will always lead to problems that cannot be solved.
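    To make the distinction concrete, a small sketch (not from the reply above):
    " Compiles and yields a correct tab character on both Unicode and non-Unicode systems:
    CONSTANTS c_tab(1) TYPE c VALUE cl_abap_char_utilities=>horizontal_tab.
    " Works only on non-Unicode systems; on a Unicode system, TYPE X literals
    " cannot be used in character processing:
    DATA d_tab TYPE x VALUE '09'.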

  • Truncated last blank column in fixed width tab delimited file

    Hi All,
    I have to generate a fixed-width, tab-delimited file on the application server. As per my requirement, the last column in the file should be blank. When I generated the file, the last column was truncated because it is populated with blanks.
    I need a solution to keep the last column with blank data.
    Thanks
    Kiran

    Hi,
    I expect the last column is not actually truncated; because it contains only spaces, you simply cannot see it.
    Read the file into an internal table and check in debug mode whether it is present or not.
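    If the trailing blanks really are being dropped, one option to try (a hedged sketch, not from the reply above; field and file names are examples): TRANSFER in text mode trims trailing blanks of character fields, but transferring with an explicit LENGTH keeps the full fixed record width.
    DATA v_rec(200) TYPE c.
    " ... build the tab-delimited record, leaving the last column blank ...
    TRANSFER v_rec TO p_file LENGTH 200.   "write all 200 characters, including trailing blanks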

  • How to convert Mail attachment file Tab Delimited file to XML.

    Hi PI Experts
        I have XI scenario: MAIL  XI SAP (ABAP Proxy), the process is
    1.     XI will receive tab delimited file as attachment in mail.
    2.     XI will convert the tab delimited file into XML, then map to the target structure.
    3.     Target will be posted into ECC through ABAP Proxy.
    Can anyone help me convert the attachment (i.e. the tab-delimited file) to XML when configuring the sender side?
    Thanks in Advance

    I just grabbed what I answered in another thread. It should work.
    Processing sequence as follows:
    1. localejbs/AF_Modules/PayloadSwapBean Local Enterprise Bean swap1
    2. localejbs/AF_Modules/MessageTransformBean Local Enterprise Bean tran1
    3. sap.com/com.sap.aii.adapter.mail.app/XIMailAdapterBean Local Enterprise Bean mail
    Module Configuration as follows: Fill the XXXX part with your info.
    swap1 swap.keyName payload-name
    swap1 swap.keyValue MailAttachment-1
    tran1 Transform.Class com.sap.aii.messaging.adapter.Conversion
    tran1 Transform.ContentType text/xml;charset=utf-8
    tran1 xml.conversionType SimplePlain2XML
    tran1 xml.documentName XXXXXXX_Mail
    tran1 xml.documentNamespace http://XXXXX.com.au/XXXX
    tran1 xml.fieldSeparator \t
    tran1 xml.processFieldNames fromConfiguration
    tran1 xml.structureTitle rows
    Once you set up the above configuration, you will get one record at a time.
    Create a source message interface like the following:
    XXXXXXXXX_Mail
    rows
    record
    Target message interface:
    XXXXXXXXX
    rows
    field 1
    field 2
    field 3
    Write a UDF to remove the tab characters:
    public String removeTABSpace(String record, Container container){
        // concatenate the tokens, dropping the tab separators
        StringTokenizer st = new StringTokenizer(record, "\t", false);
        String t = "";
        while (st.hasMoreElements()) t += st.nextElement();
        return t;
    }
    Write another UDF to get the field 1 for example:
    public String getField1(String input, Container container){
        int counter = 0;
        int beginIndex = 0;
        int endIndex = 0;
        int i;
        // first pass: field 1 starts right after the first double quote (ASCII 34)
        for (i = 0; i < input.length(); i++){
            if (input.charAt(i) == 34){
                counter = counter + 1;
                if (counter == 1){
                    beginIndex = i + 1;
                    counter = 0;
                    break;
                }
            }
        }
        // second pass: field 1 ends just before the second double quote
        for (i = 0; i < input.length(); i++){
            if (input.charAt(i) == 34){
                counter = counter + 1;
                if (counter == 2){
                    endIndex = i;
                    counter = 0;
                    break;
                }
            }
        }
        input = input.substring(beginIndex, endIndex);
        return input;
    }
    Set up the mapping like this:
    record - removeTABSpace - getField1 - field 1
    If you need to get field 2, you will need to write another UDF similar to the above one to handle it.

  • Split tab delimited file coming from application server.

    Hi,
    I have received a tab-delimited file from the application server; it contains # instead of tabs on the application server.
    I want to know how to split it.
    I tried declaring a constant as:
    c_tab TYPE x VALUE '09'.
    but it gives an error saying only types c, n, d and t are allowed.
    Please help.
    Thanks,
    Anand.

    Hi,
    Do like this
    *--Local Variables
      DATA : l_file TYPE string,
             l_line TYPE string.
    *--Clear
      CLEAR : l_file.
      l_file = p_ipfile.
    *--Read the data from application server file.
      OPEN DATASET l_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
      IF sy-subrc NE 0.
    *--Error in opening file
        MESSAGE i368(00) WITH text-005.
      ENDIF.
    *--Get all the records from the specified location.
      DO.
        READ DATASET l_file INTO l_line.
        IF sy-subrc NE 0.
          EXIT.
        ELSE.
          SPLIT l_line AT cl_abap_char_utilities=>horizontal_tab
                          INTO st_ipfile-vbeln
                               st_ipfile-posnr
                               st_ipfile-edatu
                               st_ipfile-wmeng.
          APPEND st_ipfile TO it_ipfile.
        ENDIF.
      ENDDO.
    *--Close dataset
      CLOSE DATASET l_file.
    Regards,
    Prashant

  • Upload tab-delimited file from the application server to an internal table

    Hello SAPients.
    I'm using OPEN DATASET..., READ DATASET..., CLOSE DATASET to upload a file from the application server (SunOS). I'm working with SAP 4.6C. I'm trying to upload a tab-delimited file into an internal table, but when I load it the fields are not correctly separated; in fact, they are all misplaced and the table shows '#' where supposedly there was a tab.
    I tried to SPLIT the line using as separator a variable with reference to CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB but for some reason that class doesn't exist in my system.
    Do you know what I'm doing wrong? Or do you know a better method to upload a tab-delimited file into an internal table?
    Thank you in advance for your help.

    Try:
    REPORT ztest MESSAGE-ID 00.
    PARAMETER: p_file LIKE rlgrap-filename   OBLIGATORY.
    DATA: BEGIN OF data_tab OCCURS 0,
          data(4096),
          END   OF data_tab.
    DATA: BEGIN OF vendor_file_x OCCURS 0.
    * LFA1 Data
    DATA: mandt  LIKE bgr00-mandt,
          lifnr  LIKE blf00-lifnr,
          anred  LIKE blfa1-anred,
          bahns  LIKE blfa1-bahns,
          bbbnr  LIKE blfa1-bbbnr,
          bbsnr  LIKE blfa1-bbsnr,
          begru  LIKE blfa1-begru,
          brsch  LIKE blfa1-brsch,
          bubkz  LIKE blfa1-bubkz,
          datlt  LIKE blfa1-datlt,
          dtams  LIKE blfa1-dtams,
          dtaws  LIKE blfa1-dtaws,
          erdat  LIKE  lfa1-erdat,
          ernam  LIKE  lfa1-ernam,
          esrnr  LIKE blfa1-esrnr,
          konzs  LIKE blfa1-konzs,
          ktokk  LIKE  lfa1-ktokk,
          kunnr  LIKE blfa1-kunnr,
          land1  LIKE blfa1-land1,
          lnrza  LIKE blfa1-lnrza,
          loevm  LIKE blfa1-loevm,
          name1  LIKE blfa1-name1,
          name2  LIKE blfa1-name2,
          name3  LIKE blfa1-name3,
          name4  LIKE blfa1-name4,
          ort01  LIKE blfa1-ort01,
          ort02  LIKE blfa1-ort02,
          pfach  LIKE blfa1-pfach,
          pstl2  LIKE blfa1-pstl2,
          pstlz  LIKE blfa1-pstlz,
          regio  LIKE blfa1-regio,
          sortl  LIKE blfa1-sortl,
          sperr  LIKE blfa1-sperr,
          sperm  LIKE blfa1-sperm,
          spras  LIKE blfa1-spras,
          stcd1  LIKE blfa1-stcd1,
          stcd2  LIKE blfa1-stcd2,
          stkza  LIKE blfa1-stkza,
          stkzu  LIKE blfa1-stkzu,
          stras  LIKE blfa1-stras,
          telbx  LIKE blfa1-telbx,
          telf1  LIKE blfa1-telf1,
          telf2  LIKE blfa1-telf2,
          telfx  LIKE blfa1-telfx,
          teltx  LIKE blfa1-teltx,
          telx1  LIKE blfa1-telx1,
          xcpdk  LIKE  lfa1-xcpdk,
          xzemp  LIKE blfa1-xzemp,
          vbund  LIKE blfa1-vbund,
          fiskn  LIKE blfa1-fiskn,
          stceg  LIKE blfa1-stceg,
          stkzn  LIKE blfa1-stkzn,
          sperq  LIKE blfa1-sperq,
          adrnr  LIKE  lfa1-adrnr,
          mcod1  LIKE  lfa1-mcod1,
          mcod2  LIKE  lfa1-mcod2,
          mcod3  LIKE  lfa1-mcod3,
          gbort  LIKE blfa1-gbort,
          gbdat  LIKE blfa1-gbdat,
          sexkz  LIKE blfa1-sexkz,
          kraus  LIKE blfa1-kraus,
          revdb  LIKE blfa1-revdb,
          qssys  LIKE blfa1-qssys,
          ktock  LIKE blfa1-ktock,
          pfort  LIKE blfa1-pfort,
          werks  LIKE blfa1-werks,
          ltsna  LIKE blfa1-ltsna,
          werkr  LIKE blfa1-werkr,
          plkal  LIKE  lfa1-plkal,
          duefl  LIKE  lfa1-duefl,
          txjcd  LIKE blfa1-txjcd,
          sperz  LIKE  lfa1-sperz,
          scacd  LIKE blfa1-scacd,
          sfrgr  LIKE blfa1-sfrgr,
          lzone  LIKE blfa1-lzone,
          xlfza  LIKE  lfa1-xlfza,
          dlgrp  LIKE blfa1-dlgrp,
          fityp  LIKE blfa1-fityp,
          stcdt  LIKE blfa1-stcdt,
          regss  LIKE blfa1-regss,
          actss  LIKE blfa1-actss,
          stcd3  LIKE blfa1-stcd3,
          stcd4  LIKE blfa1-stcd4,
          ipisp  LIKE blfa1-ipisp,
          taxbs  LIKE blfa1-taxbs,
          profs  LIKE blfa1-profs,
          stgdl  LIKE blfa1-stgdl,
          emnfr  LIKE blfa1-emnfr,
          lfurl  LIKE blfa1-lfurl,
          j_1kfrepre  LIKE blfa1-j_1kfrepre,
          j_1kftbus   LIKE blfa1-j_1kftbus,
          j_1kftind   LIKE blfa1-j_1kftind,
          confs  LIKE  lfa1-confs,
          updat  LIKE  lfa1-updat,
          uptim  LIKE  lfa1-uptim,
          nodel  LIKE blfa1-nodel.
    DATA: END   OF vendor_file_x.
    FIELD-SYMBOLS:  <field>,
                    <field_1>.
    DATA: delim          TYPE x        VALUE '09'.
    DATA: fld_chk(4096),
          last_char,
          quote_1     TYPE i,
          quote_2     TYPE i,
          fld_lth     TYPE i,
          columns     TYPE i,
          field_end   TYPE i,
          outp_rec    TYPE i,
          extras(3)   TYPE c        VALUE '.,"',
          mixed_no(14) TYPE c        VALUE '1234567890-.,"'.
    OPEN DATASET p_file FOR INPUT.
    DO.
      READ DATASET p_file INTO data_tab-data.
      IF sy-subrc = 0.
        APPEND data_tab.
      ELSE.
        EXIT.
      ENDIF.
    ENDDO.
    * count columns in output structure
    DO.
      ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      columns = sy-index.
    ENDDO.
    * Assign elements of input file to internal table
    CLEAR vendor_file_x.
    IF columns > 0.
      LOOP AT data_tab.
        DO columns TIMES.
          ASSIGN space TO <field>.
          ASSIGN space TO <field_1>.
          ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
          SEARCH data_tab-data FOR delim.
          IF sy-fdpos > 0.
            field_end = sy-fdpos + 1.
            ASSIGN data_tab-data(sy-fdpos) TO <field_1>.
    * Check that numeric fields don't contain any embedded " or ,
            IF <field_1> CO mixed_no AND
               <field_1> CA extras.
              TRANSLATE <field_1> USING '" , '.
              CONDENSE <field_1> NO-GAPS.
            ENDIF.
    * If first and last characters are '"', remove both.
            fld_chk = <field_1>.
            IF NOT fld_chk IS INITIAL.
              fld_lth = strlen( fld_chk ) - 1.
              MOVE fld_chk+fld_lth(1) TO last_char.
              IF fld_chk(1) = '"' AND
                 last_char = '"'.
                MOVE space TO fld_chk+fld_lth(1).
                SHIFT fld_chk.
                MOVE fld_chk TO <field_1>.
              ENDIF.       " for if fld_chk(1)=" & last_char="
            ENDIF.         " for if not fld_chk is initial
    * Replace "" with "
            DO.
              IF fld_chk CS '""'.
                quote_1 = sy-fdpos.
                quote_2 = sy-fdpos + 1.
                MOVE fld_chk+quote_2 TO fld_chk+quote_1.
              ELSE.
                MOVE fld_chk TO <field_1>.
                EXIT.
              ENDIF.
            ENDDO.
            <field> = <field_1>.
          ELSE.
            field_end = 1.
          ENDIF.
          SHIFT data_tab-data LEFT BY field_end PLACES.
        ENDDO.
        APPEND vendor_file_x.
        CLEAR vendor_file_x.
      ENDLOOP.
    ENDIF.
    CLEAR   data_tab.
    REFRESH data_tab.
    FREE    data_tab.
    Rob

  • How do I get the last changed date & time of a file in app server?

    Hi Experts,
    How do I get the last changed date & time of a file in app server?
    Thanks!
    - Anthony -

    Hi,
    that's what I use...
      CALL 'C_DIR_READ_FINISH'.             " just to be sure
      CALL 'C_DIR_READ_START' ID 'DIR' FIELD p_path.
      IF sy-subrc <> 0.
        EXIT. "Error
      ENDIF.
      DO.
        CLEAR l_srvfil.
        CALL 'C_DIR_READ_NEXT'
          ID 'TYPE'   FIELD l_srvfil-type
          ID 'NAME'   FIELD l_srvfil-file
          ID 'LEN'    FIELD l_srvfil-size
          ID 'OWNER'  FIELD l_srvfil-owner
          ID 'MTIME'  FIELD l_mtime
          ID 'MODE'   FIELD l_srvfil-attri.
    *    l_srvfil-dir = p_path .
        IF sy-subrc = 1.
          EXIT.
        ENDIF." sy-subrc <> 0.
        PERFORM p_to_date_time_tz
          USING    l_mtime
          CHANGING l_srvfil-mod_time
                   l_srvfil-mod_date.
        TRANSLATE l_srvfil-type TO UPPER CASE.               "#EC TRANSLANG
        PERFORM translate_attribute CHANGING l_srvfil-attri.
        CHECK:
          NOT l_srvfil-file = '.',
          l_srvfil-type = 'D' OR
          l_srvfil-type = 'F' .
        APPEND l_srvfil TO lt_srvfil.
      ENDDO.
      CHECK NOT lt_srvfil IS INITIAL.
      pt_srvfil = lt_srvfil.
    FORM p_to_date_time_tz  USING    p_ptime  TYPE p
                            CHANGING p_time   TYPE syuzeit
                                     p_date   TYPE sydatum.
      DATA:
        l_abaptstamp TYPE timestamp,
        l_time       TYPE int4,
        l_opcode     TYPE x VALUE 3,
        l_abstamp    TYPE abstamp.
      l_time = p_ptime.
      CALL 'RstrDateConv'
        ID 'OPCODE' FIELD l_opcode
        ID 'TIMESTAMP' FIELD l_time
        ID 'ABAPSTAMP' FIELD l_abstamp.
      l_abaptstamp = l_abstamp.
      CONVERT TIME STAMP l_abaptstamp TIME ZONE sy-zonlo INTO DATE p_date
          TIME p_time.
    ENDFORM.                    " p_to_date_time_tz
    Regards,
    Clemens

  • How to generate a tab delimited file in 4.6C

    Hi ,
    I have to generate a tab-delimited file in version 4.6C. I am concatenating all fields of the internal table into one text line and passing that to the file. How do I specify a tab in the CONCATENATE statement under SEPARATED BY? Please suggest.
    Thanks,
    sam.

    Hello Sam,
    CONSTANTS: CON_TAB TYPE X VALUE '09'.
    CONCATENATE G_T_OUTTAB-VBELN
                G_T_OUTTAB-POSNR
                G_T_OUTTAB-VKBUR
                L_F_POSID
                G_T_OUTTAB-BSTKD
                L_F_NETWR
                L_F_ERDAT_OR
                L_F_ERDAT_IN
                G_T_OUTTAB-VBELN_IN
                L_F_ERDAT_DE
                G_T_OUTTAB-VBELN_DE
           INTO OUTPUT
           SEPARATED BY CON_TAB.
    Vasanth

  • How to export spool data to memory / save spool data to file in app server

    I need to save the output of a Report Painter generated report to a file on the application server.
    I tried SUBMIT ... EXPORTING LIST TO MEMORY AND RETURN, but got a NOT_FOUND exception when calling FM LIST_FROM_MEMORY. I also tried calling FM CONVERT_ABAPSPOOLJOB_2_PDF with no device parameter, but the PDF output from that FM is not readable.
    Any suggestions will be greatly appreciated!

    Moderator message - Welcome to SCN.
    But please do not cross post. Have a look at Please read "The Forum Rules of Engagement" before posting!  HOT NEWS!! and How to post code in SCN, and some things NOT to do... before posting.
    Rob
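    For reference, the usual pattern for capturing a submitted report's list and writing it to the application server looks roughly like the sketch below (report name, file path and line length are examples; verify the FM interfaces in SE37, and note that Report Painter reports do not always export their list to memory, which can cause the NOT_FOUND above):
    DATA: lt_list TYPE TABLE OF abaplist.
    DATA: BEGIN OF ls_ascii,
            line(1024) TYPE c,
          END OF ls_ascii,
          lt_ascii LIKE TABLE OF ls_ascii.
    SUBMIT zmy_report EXPORTING LIST TO MEMORY AND RETURN.   "report name is an example
    CALL FUNCTION 'LIST_FROM_MEMORY'
      TABLES
        listobject = lt_list
      EXCEPTIONS
        not_found  = 1
        OTHERS     = 2.
    CHECK sy-subrc = 0.
    CALL FUNCTION 'LIST_TO_ASCI'
      TABLES
        listasci   = lt_ascii
        listobject = lt_list
      EXCEPTIONS
        OTHERS     = 1.
    " Write the plain-text lines to the application server
    OPEN DATASET '/tmp/report.txt' FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    LOOP AT lt_ascii INTO ls_ascii.
      TRANSFER ls_ascii-line TO '/tmp/report.txt'.
    ENDLOOP.
    CLOSE DATASET '/tmp/report.txt'.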

  • How can I save a Numbers spreadsheet as a tab delimited file?

    Okay, I would like to save a Numbers spreadsheet as a tab delimited file. But of course I can't do that, since Numbers only saves as a comma separated values file.
    Plan B: I can open up a CSV file created by Numbers in TextEdit, and I would like to Find and Replace all the commas with tabs. So I open up the Find window, enter a comma under Find, but of course when I put the cursor in the Replace blank and hit the Tab key, it just goes to the next blank instead of giving me a Tab.
    Is there a way I can get a Tab into that blank so I can replace all the commas with tabs?
    --Dave

    Hello
    When we use TextEdit as I described, rows are separated by Returns.
    It appears that your program requires LineFeeds.
    Here is a neat solution.
    Save this script as an Application Bundle.
    --[SCRIPT tsv2csv]
    (* copy to the clipboard a block of datas separated by tabs
    run the script
    You will get on the desktop a csv file named from the current date-time.
    The values separator is set to match the Bento's requirements:
    If the decimal separator is period, the script uses commas.
    If the decimal separator is comma, the script uses semi-colons.
    This may be changed thru the setting of the property maybeSemiColon.
    According to the property needLF,
    embedded Returns will be replaced by LineFeeds
    or
    embedded LineFeeds will be replaced by Returns
    Yvan KOENIG (Vallauris, FRANCE)
    le 10 octobre 2008 *)
    property needLF : true
    (* true = replace Returns by LineFeeds
    false = replace LineFeeds by Returns *)
    property maybeSemicolon : true
    (* true = use semiColons if decimal separator is comma
    false = always uses commas *)
    --=====
    try
    set |données| to the clipboard as text
    on error
    error "the clipboard doesn't contain text datas !"
    end try
    set line_feed to ASCII character 10
    if |données| contains tab then
    if maybeSemicolon then
    if character 2 of (0.5 as text) is "," then
    set |données| to my remplace(|données|, tab, quote & ";" & quote)
    else
    set |données| to my remplace(|données|, tab, quote & "," & quote)
    end if -- character 2 of (0.5…
    else
    set |données| to my remplace(|données|, tab, quote & "," & quote)
    end if -- maybeSemiColon
    end if -- |données| contains tab
    if needLF then
    if |données| contains return then set |données| to my remplace(|données|, return, line_feed)
    set |données| to my remplace(|données|, line_feed, quote & line_feed & quote)
    else
    if |données| contains line_feed then set |données| to my remplace(|données|, line_feed, return)
    set |données| to my remplace(|données|, return, quote & return & quote)
    end if -- needLF
    set |données| to quote & |données| & quote
    set p2d to path to desktop as text
    set nom to my remplace(((current date) as text) & ".csv", ":", "-") (* unique name *)
    tell application "System Events" to make new file at end of folder p2d with properties {name:nom}
    write |données| to file (p2d & nom)
    --=====
    on remplace(t, tid1, tid2)
    local l
    set AppleScript's text item delimiters to tid1
    set l to text items of t
    set AppleScript's text item delimiters to tid2
    set t to l as text
    set AppleScript's text item delimiters to ""
    return t
    end remplace
    --=====
    --[/SCRIPT]
    Yvan KOENIG (from FRANCE vendredi 10 octobre 2008 13:47:36)
