EBCDIC files in BODS

Hi Experts,
Need help.
A quick question: is it possible to use EBCDIC files as a target in a BODS data flow? The scenario here is that I have an EBCDIC file which I am using as a source (through the COBOL copybook format option) in a BODS job. Now I need to encrypt one particular column (say, for example, account number) and pass all the data to another EBCDIC file for a downstream application.
Is it possible? Can I do that in BODS?
Best Regards,
Purnima

Hi Viacheslav,
We solved our issue by editing the Excel workbook in the DS Designer 4.1. The only modification that was needed was in fact also related to the RANGE (if it is the same as the RangeAddress). We changed it from "All fields", which was working on an unpatched 3.2 DS Linux version, to a "Custom range" like "A5:Z6000" in DS 4.1. Afterwards the job runs fine and we no longer get an error.
So my advice would be to check the settings in the original Excel file and set a custom range.
Thank you for the KB info.
Kind regards,
Sylvain

Similar Messages

  • New Line in EBCDIC file created by receiver file adapter

    Hi mates,
    I've configured the receiver file adapter to create a file in EBCDIC format by specifying the file encoding 'Cp037'. When the file is viewed on the AS400 system using the command DSPPFM, it appears as continuous text instead of line-by-line. I've tried specifying 0x0D (CR), 0x0A (LF), and 'nl' as the endSeparator for the record type in content conversion, but no luck; the file still looks like continuous text.
    I learnt from the following links that the New Line character (NL or NEL) in AS400 is Unicode '0x85'.
    <a href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4867251">OutputStreamWriter/InputSreamReader convert NEL to linefeed with Cp037 encoding</a>
    <a href="http://search.cpan.org/~guido/libintl-perl-1.16/lib/Locale/RecodeData/IBM037.pm#CHARACTER_TABLE">CHARACTER TABLE</a>
    But when I specify this '0x85' as the endSeparator, I get the error below, as 0x85 (decimal equivalent 133) is not part of the standard ASCII code, i.e. the basic 127 codes:
    <i>Conversion initialization failed: java.lang.Exception: java.lang.NumberFormatException: Value out of range. Value:"85" Radix:16</i>
    Is there any way I can produce the EBCDIC file with new line as end separator for records?
    I appreciate your inputs.
    thx in adv
    praveen

    Instead of using binary mode for FTP and specifying the encoding and endSeparator, I used text mode and left the translation to the FTP server on the AS400 system.
    As desired, it generated the file with the right encoding and new line characters.
    praveen
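    For reference, the AS400 NL character described in the links above is Unicode U+0085 (NEL), which the Cp037 codec maps to the EBCDIC byte 0x15. A quick standalone check (this is only an illustration of the byte mapping, not PI content-conversion configuration):

```java
import java.nio.charset.Charset;

public class NelCheck {
    public static void main(String[] args) {
        Charset cp037 = Charset.forName("Cp037");
        // Unicode NEL (U+0085) encodes to the EBCDIC NL byte 0x15 in Cp037
        byte[] nel = "\u0085".getBytes(cp037);
        System.out.println(String.format("0x%02X", nel[0])); // prints 0x15
        // "HELLO" plus NL: five EBCDIC characters followed by the 0x15 separator
        byte[] record = "HELLO\u0085".getBytes(cp037);
        System.out.println(record.length); // prints 6
    }
}
```

    So if a custom (e.g. EJB or Java mapping) step ever writes the file itself, appending "\u0085" per record through a Cp037 writer produces the separator DSPPFM expects.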

  • External Table for Variable Length EBCDIC file with RDWs

    I am loading an EBCDIC file where the record length is stored in the first 4 bytes. I am able to read the 4 bytes using the db's native character set, i.e.:
    records variable 4
    characterset WE8MSWIN1252
    data is little endian
    But I then have to convert each string column individually on the select, i.e.:
    convert(my_col, 'WE8MSWIN1252', 'WE8EBCDIC37')
    If I change the character set to EBCDIC:
    records variable 4
    characterset WE8EBCDIC37
    data is little endian
    I get the following error reading the first 4 bytes:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-29400: data cartridge error
    KUP-04019: illegal length found for VAR record in file ...
    We cannot use the FTP conversion as the file contains packed decimals.
    There are other options for converting the file, but I am wondering if anyone was able to get an external table to read a native EBCDIC file without a pre-processing step.

  • Can File Adapter read EBCDIC file on the sender side

    Hi ,
    I have a File-to-Proxy scenario, and on the sender side I have an EBCDIC file. Can the file adapter read it into normal text format? Can anyone suggest how to decode the file?
    Thanks Inadvance
    Byee

    Hi ,
    Since my scenario is File to Proxy, I need to send data to the Proxy, and I can't send it in EBCDIC format, so I want the data to be in normal text format.
    I am still looking for a solution.
    For EJB, do you have any piece of code? That would be helpful.
    Thanks in advance.
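    If the EJB/Java-mapping route is taken, the decode step itself is small. A sketch, assuming the file is pure Cp037 character data (this does not work for COMP/COMP-3 binary fields, which must not be character-translated):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class EbcdicToText {
    // Read an EBCDIC (Cp037) byte stream fully and decode it to a Java String.
    // Valid only for plain character data, not packed/binary fields.
    static String decode(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return new String(buf.toByteArray(), "Cp037");
    }
}
```

    For example, the EBCDIC bytes 0xC1 0xC2 0xC3 decode to "ABC".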

  • Updating the WSDL file in BODS 4.0

    Hi All,
    Salesforce has been upgraded to the latest version, and the API version that we are currently using is 31.
    In BODS we are using an old version of the web service endpoint, and when we updated to the new endpoint https://eu1.salesforce.com/services/Soap/u/30.0 we were unable to connect to the datastore.
    The Salesforce team has suggested that we update the WSDL file, as the metadata has changed.
    Could someone tell me where I can find the .WSDL file in BODS 4.0, and how to upgrade the WSDL file to the latest version given by Salesforce?
    Thanks in Advance!!
    Thanks & Regards,
    Indu.
    https://login.salesforce.com/services/Soap/u/21.0

    Hi Again
    I have more info on the issue...
    I tried to reach the webservice from OAS/HTTP with mod_plsql directly. At first I was not able to.
    A firewall request was completed, and now I am sure the HTTP server can reach the webservice in question.
    Unfortunately, creating a webservice either manually or through a WSDL is still not working. Please help; what else can I check?
    Also
    I'm confused... taken from the APEX 4.0 online documentation:
    SOAP offers two primary advantages:
    SOAP is based on XML, and therefore easy to use.
    SOAP messages are not blocked by firewalls because this protocol uses simple transport protocols, such as HTTP.
    I can't see why it would be a firewall issue based on the statement above.
    Thanks Again
    Moe
    -- just added to my post. Issue still unresolved.
    I've been trying to troubleshoot it myself.
    From my database server -
    I'm not able to use utl_http - example
    set serveroutput on
    DECLARE
      req   utl_http.req;
      resp  utl_http.resp;
      value VARCHAR2(32000);
    BEGIN
      req := utl_http.begin_request('http://www.psoug.org');
      resp := utl_http.get_response(req);
      value := utl_http.request('http://www.psoug.org/');
      dbms_output.put_line(value);
      utl_http.end_response(resp);
    EXCEPTION
      WHEN utl_http.end_of_body THEN
        utl_http.end_response(resp);
    END;
    /
    error received is
    ERROR at line 1:
    ORA-29273: HTTP request failed
    ORA-06512: at "SYS.UTL_HTTP", line 1130
    ORA-12535: TNS:operation timed out
    ORA-06512: at line 6
    Using ping:
    ping www.psoug.org
    PING 66.221.222.85: 64 byte packets
    ----66.221.222.85 PING Statistics----
    16 packets transmitted, 0 packets received, 100% packet loss
    However, I'm confused: given that I'm using Oracle Application Server 10g Release 3 (10.1.3.1)/Apache, should I need to modify the firewall on the database side to access a webservice? Does it need to be port 80?
    Getting frustrated by my ignorance
    Thanks
    Moe
    Edited by: user570478 on Jul 28, 2011 2:23 PM

  • Load from EBCDIC file to Oracle 9i tables using UTL_FILE

    Hello, I have a requirement to load an EBCDIC file from the mainframe into Oracle 9i tables and then do some transformation, and then again create an EBCDIC file from the database table. I'm not sure if this is possible using UTL_FILE, though I have seen people loading with SQL*Loader. If possible, can you please give some sample code for this? I would appreciate your help.
    Thanks
    Karuna

    Hi,
    I'm reading data from an EBCDIC file in Oracle PL/SQL using UTL_FILE. I wasn't able to read the BINARY data types from EBCDIC into Oracle. Initially I thought the problem was due to the reasons discussed in this article:
    http://support.sas.com/techsup/technote/ts642.html
    <quote>
    Solutions
    The only way to overcome the problem of non-standard numeric data being corrupted by the FTP is to move the data without translating it. This will necessitate making some significant changes in your program. It may also require preprocessing the data file on the mainframe. The sections below list the different types of files and situations, a recommended approach to read in the file, and a sample program to accomplish the task.
    </quote>
    But we have confirmed that the contents of the EBCDIC file are fine by looking into it with a tool that converts EBCDIC to ASCII. The contents are absolutely OK.
    Now how do I read the binary data from the EBCDIC file?
    My code is like this:
    Open the file using UTL_FILE.FOPEN
    UTL_FILE.GET_LINE(file_handler, string, lengthofthestring)
    DBMS_OUTPUT.PUT_LINE(SUBSTR(CONVERT(string, 'US7ASCII', 'WE8EBCDIC37'), 1, 4));
    -- This generates "&" as output. The actual data is 005.
    -- Since this field is declared as binary in the EBCDIC file,
    -- I'm unable to read and print it.
    -- The same is the case with the other binary data types;
    -- I'm able to read the other datatypes clearly.
    UTL_FILE.FCLOSE
    How do I resolve this? I would appreciate your help; this is critical and an immediate requirement for us.
    Thanks
    Karuna
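    The underlying issue is that CONVERT can only translate character data; COMP-3 (packed decimal) bytes have to be decoded arithmetically, nibble by nibble, never character-converted. As an illustration of the decoding rule (shown in Java for clarity; a PL/SQL version would apply the same rule to UTL_RAW bytes):

```java
public class PackedDecimal {
    // Decode a COBOL COMP-3 (packed decimal) byte array to a long.
    // Each byte holds two decimal digits (one per nibble); the final
    // nibble is the sign: 0x0D (or 0x0B) negative, anything else positive.
    static long decode(byte[] b) {
        long value = 0;
        for (int i = 0; i < b.length; i++) {
            int hi = (b[i] >> 4) & 0x0F;
            int lo = b[i] & 0x0F;
            value = value * 10 + hi;
            if (i < b.length - 1) {
                value = value * 10 + lo;
            } else {
                return (lo == 0x0D || lo == 0x0B) ? -value : value;
            }
        }
        return value;
    }

    public static void main(String[] args) {
        // 0x00 0x12 0x3C encodes +00123
        System.out.println(decode(new byte[]{0x00, 0x12, 0x3C})); // prints 123
    }
}
```

    This is why the file must be transferred in binary mode: any ASCII/EBCDIC character translation corrupts the packed nibbles before they can be decoded.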

  • Problem in loading EBCDIC file

    I have a flat file in EBCDIC format and am trying to load it using an external table. The .sql is as below:
    DROP TABLE EVP0050;
    CREATE TABLE EVP0050
    (
      ORG_KEY2            NUMBER(3),
      ACCT_KEY2           VARCHAR2(19),
      SCHEME_KEY2         VARCHAR2(5),
      REV_BATCH_DATE_KEY2 NUMBER(7)
    )
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_LOADER DEFAULT DIRECTORY EC3D_DATA_DIR
      ACCESS PARAMETERS
      (
        RECORDS VARIABLE 5 CHARACTERSET WE8EBCDIC37
        NOBADFILE
        LOGFILE 'evp0050'
        NODISCARDFILE
        READSIZE 10485760
        LOAD WHEN ((1:2)='DD')
        FIELDS
        MISSING FIELD VALUES ARE NULL
        (
          ORG_KEY2            POSITION(3)  CHAR(3),
          ACCT_KEY2           POSITION(6)  CHAR(19),
          SCHEME_KEY2         POSITION(25) CHAR(5),
          REV_BATCH_DATE_KEY2 POSITION(30) DECIMAL(7)
        )
      )
      LOCATION ('file1.dat')
    )
    REJECT LIMIT unlimited;
    SELECT REV_BATCH_DATE_KEY2 from evp0050 where rownum<5;
    The field REV_BATCH_DATE_KEY2 is REV-BATCH-DATE-KEY2 S9(7) and its length is 4. However, I am not able to load it into the table.
    Oracle version: 10.2.0.2.0
    Can anyone help me out?

    Yes, you are right, it's COBOL syntax. The copybook is pasted below. Can you tell me what data type I should use?
    16 OCT 2008 FILE-AID 9.1.0 PRINT FACILITY 15:20:39 PAGE 1
    RECORD LAYOUT REPORT
    RECORD LAYOUT DATASET : DAIW.IMG.WR5820.COPY.LMS
    MEMBER : TEST012
    ------- FIELD LEVEL/NAME ---------- PICTURE FLD START END LENGTH
    :$$$$$$:-RECORD 1 500 500
    2 :$$$$$$:-RECORD GROUP 1 1 500 500
    3 :$$$$$$:-KEY-AREA GROUP 2 1 45 45
    5 :$$$$$$:-KEY GROUP 3 1 26 26
    7 :$$$$$$:-ORG-ACCT GROUP 4 1 22 22
    9 :$$$$$$:-ORG-X GROUP 5 1 3 3
    11 :$$$$$$:-ORG 999 6 1 3 3
    9 :$$$$$$:-ACCT X(19) 7 4 22 19
    7 :$$$$$$:-REC-NBR 9(8) 8 23 26 4
    5 :$$$$$$:-FILLER X(19) 9 27 45 19
    3 :$$$$$$:-KEY-DEF2 REDEFINES :$$$$$$:-KEY-AREA
    3 :$$$$$$:-KEY-DEF2 GROUP 10 1 45 45
    5 :$$$$$$:-KEY2 GROUP 11 1 35 35
    7 :$$$$$$:-ORG-ACCT-KEY2
    GROUP 12 1 22 22
    9 :$$$$$$:-ORG-KEY2 999 13 1 3 3
    9 :$$$$$$:-ACCT-KEY2 X(19) 14 4 22 19
    7 :$$$$$$:-SCHEME-KEY2 X(5) 15 23 27 5
    7 :$$$$$$:-REV-BATCH-DATE-KEY2
    S9(7) 16 28 31 4
    7 :$$$$$$:-REC-NBR-KEY2 9(8) 17 32 35 4
    5 :$$$$$$:-FILLER X(10) 18 36 45 10

  • Process unicode and xlsx files through BODS

    Dear Experts,
    Could you please help me with the following scenario:
    System: BODS 3.2 on Linux Server
    Our clients want to send their source data in "xlsx" and "unicode" files created on Windows. Can BODS 3.2, or any higher version on Linux, process these file types?
    Thanks,
    Santosh

    Dear Experts,
    Can anyone help me out with the Unicode part as well? I found that Linux only processes files with the UTF-8 character set, and since the Unicode file created on Windows is UTF-16, BODS 3.2 on Linux cannot process it. I assume that this is a Linux issue and not BODS.
    Could someone help with any solution or workaround?
    Thanks,
    Santosh

  • Unable to read multiple files in BODS

    hi all,
    I am unable to read multiple files [with the same format of fields] using wildcard characters in the file name.
    Scenario:
    I have 2 files: test1.xlsx & test2.xlsx.
    In the Excel file format, for the file name column, I have given test*.xlsx,
    and done the direct mapping to the target column.
    But when I run the job I get the below error:
    at com.acta.adapter.msexceladapter.MSExcelAdapterReadTable.ReadAllRows(MSExcelAdapterReadTable.java:1242)
    at com.acta.adapter.msexceladapter.MSExcelAdapterReadTable.readNext(MSExcelAdapterReadTable.java:1285)
    at com.acta.adapter.sdk.StreamListener.handleBrokerMessage(StreamListener.java:151)
    at com.acta.brokerclient.BrokerClient.handleMessage(BrokerClient.java:448)
    at com.acta.brokerclient.BrokerClient.access$100(BrokerClient.java:53)
    at com.acta.brokerclient.BrokerClient$MessageHandler.run(BrokerClient.java:1600)
    at com.acta.brokerclient.ThreadPool$PoolThread.run(ThreadPool.java:100)
    please let me know if there is any solution to this.
    regards,
    Swetha

    Hi,
    I just copied an xlsx file under 3 different names (Test_Data.xlsx, Test_1.xlsx, Test_2.xlsx) and tried the below options, and it worked for me.
    Note: I tried on the same OS and DS 4.1 SP2 (14.1.2.378) versions. On Linux, file names are case sensitive.

  • Comparison of two files in EBCDIC format

    Hi boarders..
    I have a file which is in ASCII format. I convert it to EBCDIC format using the code below:
    import java.io.*;

    public class DcollType {

        static String readInput() {
            StringBuffer buffer = new StringBuffer();
            try {
                FileInputStream fis = new FileInputStream("CLR812AX_TST5000.txt");
                InputStreamReader isr = new InputStreamReader(fis, "ASCII");
                Reader in = new BufferedReader(isr);
                int ch;
                while ((ch = in.read()) > -1) {
                    buffer.append((char) ch);
                }
                in.close();
                return buffer.toString();
            } catch (IOException e) {
                e.printStackTrace();
                return null;
            }
        }

        static void writeOutput(String str) {
            try {
                FileOutputStream fos = new FileOutputStream("ascii.bin");
                Writer out = new OutputStreamWriter(fos, "Cp1047");
                out.write(str);
                out.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        public static void main(String[] args) {
            String inputstr = readInput();
            writeOutput(inputstr);
        }
    }
    Now once I produce this file, I need to compare it with another EBCDIC file.
    This second EBCDIC file is obtained by converting the original ASCII file using a mainframe emulator tool.
    I would like the board's help in writing code that compares the two EBCDIC files' contents byte by byte.

    Eh? Just compare them. Since you're just comparing the bytes, you don't need a java.io.Reader; you just need a java.io.InputStream. (Or two, really.)
    You've already shown you can handle this...just open two input streams, one for each file, and compare each byte, byte-by-byte. As soon as two bytes are not equal, you've shown that they're not the same. If you reach the end of both the files and all the bytes are the same, then they have identical contents (regardless of encoding).
    Maybe you're being thrown by the encoding? It's really not relevant if you're just checking to see if the bytes are the same.
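    A minimal sketch of that byte-by-byte comparison over two input streams:

```java
import java.io.IOException;
import java.io.InputStream;

public class CompareStreams {
    // True if both streams deliver exactly the same byte sequence.
    // The character encoding of the underlying files is irrelevant here:
    // two EBCDIC files are identical iff their bytes are identical.
    static boolean sameContents(InputStream a, InputStream b) throws IOException {
        int x, y;
        do {
            x = a.read();
            y = b.read();
            if (x != y) {
                return false; // differing byte, or one stream ended early
            }
        } while (x != -1); // both hit end-of-stream together: identical
        return true;
    }
}
```

    In practice, wrap each FileInputStream in a BufferedInputStream before passing it in, otherwise the per-byte reads will be slow.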

  • Ebcdic to ascii file translation woes

    I am trying to convert an EBCDIC file into an ASCII file, but I am running into some difficulties.
    The translation works fine, but it outputs everything onto one line when opened in Notepad on NT, with an odd symbol where a new line should be. When the same file is pasted into WordPad, these symbols are interpreted as new lines.
    Is there a way I can get the symbols replaced by new lines when opened in Notepad?
    import java.io.*;

    class FileReaderDemo2 {
        public static void main(String args[]) {
            try {
                // create an input stream from an EBCDIC file
                FileInputStream inputStream = new FileInputStream("input.ebc");
                // convert from EBCDIC to Unicode
                InputStreamReader inputReader = new InputStreamReader(inputStream, "Cp500");
                // attach a buffer to it
                BufferedReader buffReader = new BufferedReader(inputReader);
                System.out.println("Character Encoding is: " + inputReader.getEncoding());
                // output file in ASCII Cp1252 from Unicode
                OutputStreamWriter outWriter = new OutputStreamWriter(new FileOutputStream("Output2.txt"), "Cp1252");
                String str = null;
                while ((str = buffReader.readLine()) != null) {
                    outWriter.write(str + "\n");
                }
                inputReader.close();
                inputStream.close();
                outWriter.close();
            } catch (Exception e) {
                System.out.println("Exception: " + e);
            }
        }
    }

    Thanks for that.
    I need to do the reverse now, so I have an ASCII file I need to convert to EBCDIC. Once again the conversion works OK, but the output is all on one line.
    I've tried
    outWriter.write(str + "\n");
    and
    outWriter.write(str + "\r\n");
    and
    outWriter.write(str + System.getProperty("line.separator"));
    But none of them seem to work.
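    A likely cause: through a Cp500/Cp1047 writer, '\n' becomes the EBCDIC LF byte 0x25, while many EBCDIC viewers only treat the NL byte 0x15 (Unicode U+0085, NEL) as a record end. A sketch of the ASCII-to-EBCDIC direction using NL instead (the encodings and stream wiring here are assumptions; adjust for your files):

```java
import java.io.*;

public class AsciiToEbcdic {
    // Convert a Windows-ANSI (Cp1252) text stream to EBCDIC (Cp500),
    // terminating each record with NL: '\u0085' encodes to byte 0x15,
    // which EBCDIC tools recognize as a line break.
    static void convert(InputStream src, OutputStream dst) throws IOException {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(src, "Cp1252"));
             Writer out = new OutputStreamWriter(dst, "Cp500")) {
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line);
                out.write('\u0085'); // U+0085 (NEL) -> EBCDIC NL byte 0x15
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        convert(new ByteArrayInputStream("AB\r\nCD\r\n".getBytes("Cp1252")), bos);
        // Output bytes: C1 C2 15 C3 C4 15 (A, B, NL, C, D, NL)
        System.out.println(bos.toByteArray().length); // prints 6
    }
}
```

    If the consuming mainframe tool instead expects fixed-length records with no separator at all, drop the NL write and pad each line to the record length.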

  • Does an XML adapter need to be installed on the BODS server in order to access XML files?

    Hi All,
    I have doubts while working with XML/XSD files.
    1. Does an XML adapter need to be installed on the BODS server in order to access XML files?
    2. What is the startup_script.xml that we need to set up on the Admin side?
    Thanks,
    Deepthi

    Hi Shiva,
    Thanks for the reply. Without an XML adapter I am able to load the .xml file through BODS using a template XML, but I am not able to view the file properly; it is not showing the schema.
    I need to check reading the XML source using BODS, but I have a few doubts:
    1. XSD on the local machine and XML on the remote server (FTP): in this case I am able to read the structure of the XSD.
    2. XSD and XML both on the FTP server: in this case I am unable to read the XSD structure from the FTP server.
    Please suggest how to access the XSD and XML in the job without error.
    Thanks,
    Deepa

  • Zip files from FTP server using BODS 4.1

    HI Friends,
    My requirement: move zip files from an FTP server to the target server using Data Services.
    The zip files (variable) are loaded daily into a directory on an FTP server. I need to establish a connection to the FTP server and get the zip files into the BODS environment. Can anyone please list out the steps to get the connection from FTP to BODS?
    My Environment : BODS 4.1 SP1
    Thanks and Regards
    Anil

    Hi Anil,
    We have done a similar kind of requirement:
    1. Connect to the FTP server.
    2. Call the exec command.
    3. Write a simple script to move the file to your target location.
    Please let me know.
    FYI, let me check if I can get the same ATL for you.
    Regards,
    Manoj.

  • XML Parse error while loading an XML file

    HI Folks,
    I was trying to load an XML file into BODS. The XML file is well-formed, and the same file, when tested in other tools, loads without any issues.
    I have created an XML file format with the corresponding XSD.
    But here in BODS it is giving a parse error:
    1) When I try to view the data of the source XML in my dataflow, it gives "XML Parser Failed" and is not able to show data.
    2) When I run my job I get the same parse error, with details as under:
    ---> The error here is "Unable to recognize element 'TAB'", or sometimes it says "Element TAB should be qualified".
    Please guide with this if you have any info..thanks
    I'm pasting the XML source file format here for your reference:--
    <?xml version="1.0" encoding="iso-8859-1" ?>
    <asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
      <asx:values>
        <TAB>
          <items>
            <CUSTOMER_RECORD>
              <CUSTOMER_NUMBER>1111111111</CUSTOMER_NUMBER>
              <NAME_1>ABC</NAME_1>
              <NAME_2>OFM/COMMERCIAL ACCOUNTS</NAME_2>
              <STREET_1>31 CENTER DRIVE MCS2045</STREET_1>
              <STREET_2 />
              <CITY>BETHESDA</CITY>
              <STATE_CODE>MD</STATE_CODE>
              <POSTAL_CODE>20892-2045</POSTAL_CODE>
              <COUNTRY_CODE>US</COUNTRY_CODE>
              <ORDER_BLOCK />
              <ERP_CREATE_DATE>20040610</ERP_CREATE_DATE>
              <ERP_CREATED_BY>DGUPTA</ERP_CREATED_BY>
              <ERP_MODIFY_DATE>20120201</ERP_MODIFY_DATE>
              <ERP_MODIFIED_BY>LWOHLFEI</ERP_MODIFIED_BY>
              <INDUSTRY_CODE>0103</INDUSTRY_CODE>
              <ACCOUNT_GROUP_ID>0001</ACCOUNT_GROUP_ID>
              <SALES_NOTE />
              <ADDRESS_NOTE />
              <CUSTOMER_CLASSIFICATION_CODE>02</CUSTOMER_CLASSIFICATION_CODE>
              <GLN_NUMBER />
              <PREVIOUS_ACCT_NO />
              <ACCOUNT_TYPE />
              <GAG />
              <SDI_ID />
              <HOSP_ID />
              <HIN />
              <DUNS />
              <PO_BOX />
              <POB_CITY />
              <POB_ZIP />
              <PHONE_NUMBER>77777</PHONE_NUMBER>
              <EMAIL_DOMAIN />
              <REQUESTER />
              <ERP_SOURCE_SYSTEM>ECC</ERP_SOURCE_SYSTEM>
            </CUSTOMER_RECORD>
            <SALES_ORG_DATA>
              <item>
                <CUSTOMER_NUMBER>1111111111</CUSTOMER_NUMBER>
                <SALES_ORG>0130</SALES_ORG>
                <CUSTOMER_GROUP>03</CUSTOMER_GROUP>
                <ORDER_BLOCK_CODE />
                <ERP_SOURCE_SYSTEM>ECC</ERP_SOURCE_SYSTEM>
              </item>
              <item>
                <CUSTOMER_NUMBER>1111111111</CUSTOMER_NUMBER>
                <SALES_ORG>0120</SALES_ORG>
                <CUSTOMER_GROUP>11</CUSTOMER_GROUP>
                <ORDER_BLOCK_CODE />
                <ERP_SOURCE_SYSTEM>ECC</ERP_SOURCE_SYSTEM>
              </item>
            </SALES_ORG_DATA>
          </items>
        </TAB>
      </asx:values>
    </asx:abap>

    Pierre,
    Depending on the object "myLastFile", the method openDlg might not even exist (if the myLastFile object is not a File object, for instance). And I do not see any need for the myLastFile anyhow, as you are presenting a dialog to select a file to open. I recommend using the global ChooseFile( ) method instead. This will give you a filename as string in full path notation, or null when no file was selected in the dialog. I am not sure what your ExtendScript documentation states about the return value for ChooseFile, but if that differs from what I am telling you here, the documentation is wrong. So, if you replace the first lines of your code with the following it should work:
    function openXMLFile ( ) {
        var filename = ChooseFile ( "Choose XML file ...", "", "*.xml", Constants.FV_ChooseSelect );
    While writing this, I see that Russ has already given you the same advice. Use the symbolic constant value I indicated to use the ChooseFile dialog to select a single file (it can also be used to select a directory or open a file - but you want to control the opening process yourself). Note that this method allows you to set a start directory for the dialog (second parameter). The ESTK autocompletion also gives you a fifth parameter "helplink" which is undocumented and can safely be ignored.
    Good luck
    Jang

  • DATA TRANSFER - How to get a SINGLE SPACE in downloaded txt file from UNIX?

    Hi Experts,
    I am sending data from SAP to a UNIX/application server, and to a text file on the desktop as well.
    So, I am keeping a single character, just a SPACE, at the END of each record.
    When I look at the downloaded desktop text file, I find a SINGLE SPACE at the end of each record. Fine.
    Then, using the CG3Y t-code, I downloaded the UNIX file to my desktop.
    But when I look at this file downloaded from UNIX, I do NOT find any SPACE at the end of each record!
    I am doing everything the same in both cases.
    So:
    1 - Why is this happening in the case of the UNIX file?
    2 - How do I get a SINGLE SPACE at the END in the file downloaded from UNIX?
    Thanks

    It's there; I am talking about this -
    OPEN DATASET - linefeed
    Syntax
    ... WITH { NATIVE
             | SMART
             | UNIX
             | WINDOWS } LINEFEED ... .
    Alternatives:
    1. ... WITH NATIVE LINEFEED
    2. ... WITH SMART LINEFEED
    3. ... WITH UNIX LINEFEED
    4. ... WITH WINDOWS LINEFEED
    Effect
    : These additions determine which line end marker is used for text files or legacy text files. If these additions are used, the profile parameter abap/NTfmode is ignored. Simultaneous specification of the values "UNIX" or "NT" in the addition TYPE is not permitted.
    If these additions are not used, the line end marker is determined as follows, depending on the operating system of the current application server:
    The line end marker for Unix is "LF". Under Unix, OS390 and OS400, only "LF" is used for reading and writing.
    The line end marker for MS Windows is "CRLF". Under MS Windows, however, the values of the profile parameter abap/NTfmode can also be used to set whether new files are handled according to Unix conventions or Windows conventions. If the profile parameter has the value "b", the Unix line end marker "LF" is used. If the profile parameter has the value "t" or is initial, the Windows line end marker "CRLF" is used. The setting using the profile parameter can be overridden with the addition TYPE and the value "UNIX" or "NT". If an existing file is opened without the addition TYPE, this is searched for the first line end marker ("LF" or "CRLF"), and this is used for the whole file. If no line end marker is found, the profile parameter is used. This applies particularly if an existing file is completely overwritten with FOR OUTPUT.
    If an addition WITH NATIVE|SMART|UNIX|WINDOWS LINEFEED is used, this setting can be changed for the open file using the statement SET DATASET. If neither of the additions is used, the line end marker also cannot be changed using SET DATASET.
    Notes
    : Without the use of an addition WITH LINEFEED, the line end marker is dependent on diverse implicit factors such as the operating system of the application server, a profile parameter, and line end markings that are already used. For this reason, the explicit use of WITH LINEFEED is recommended, which renders the use of the addition TYPE for setting the line end marker obsolete.
    The line end marker that is currently used can be determined for every open file using GET DATASET.
    Alternative 1
    ... WITH NATIVE LINEFEED
    Effect
    : This addition defines the line end marker independently of the access type according to the operating system of the application server, i.e. "LF" for Unix OS390 or OS400, and "CRLF" for MS Windows.
    : The line end marker is interpreted according to the current codepage. If a code page is explicitly specified using the addition CODE PAGE, the characters of the line end marker must be available or be written in accordance with this code page.
    Note
    : The addition WITH NATIVE LINEFEED is intended for editing files on an application server that can also be accessed by other means. The addition receives the appropriate line end marker without the program needing to know the operating system.
    Alternative 2
    ... WITH SMART LINEFEED
    Effect
    : This addition depends on the access type:
    In files that are opened for reading using FOR INPUT, both "LF" and "CRLF" are interpreted as a line end marker. When opening an EBCDIC file with the addition CODEPAGE, in addition to "LF", "CRLF", and the EBCDIC character strings, the corresponding ASCII character strings are also recognized. In addition, the EBCDIC character "NL" (line separator) is also interpreted as a line end marker.
    In files opened for appending or changing with FOR APPENDING or FOR UPDATE, the program searches for a line end marker that is already used in the file. In this process, first the end of the file is identified. If no line end marker is found there, a certain number of characters at the beginning is analyzed. If a line end marker is found, this is used when writing to the file. This is also affected by the addition CODE PAGE. For example, ASCII line end markers are recognized and used in a file opened with EBCDIC, but not the other way round. If no line end marker is found or no search is possible (for example, if the file is opened with the addition FILTER), the line end marker is determined according to the operating system of the application server, as with the addition WITH NATIVE LINEFEED.
    In files opened for writing using FOR OUTPUT, the line end marker is determined according to the operating system of the application server, as with the addition WITH NATIVE LINEFEED.
    Note
    : The addition WITH SMART LINEFEED is intended for the generic editing of files in heterogeneous environments. The line end marker is recognized and set for different formats. The use of this addition is the best solution for most application cases.
    Alternative 3
    ... WITH UNIX LINEFEED
    Effect
    : The line end marker is set to "LF" regardless of the access type and operating system of the application server.
    The line end marker is interpreted according to the current code page. If a code page is specified explicitly using the addition CODE PAGE, the characters of the line end marker must be available or be written according to this code page.
    Note
    : The addition WITH UNIX LINEFEED is intended for editing Unix files in which the specific line end markers are to be retained, even if the operating system of the current application server is MS Windows.
    Alternative 4
    ... WITH WINDOWS LINEFEED
    Effect
    : The line end marker is set to "CRLF" regardless of the access type and operating system of the application server.
    The line end marker is interpreted according to the current code page. If a code page is specified explicitly using the addition CODE PAGE, the characters of the line end marker must be available and be written according to this code page.
    Note
    : The addition WITH WINDOWS LINEFEED is intended for use with MS Windows files in which the specific line end marker is to be retained, even if the operating system of the current application server is Unix, OS390 or OS400.
