Issue with XI File Receiver "Content Conversion" fixed length and kanji

I need to create a fixed-length file for a customer that contains kanji (Shift-JIS) characters. The issue arises when the length of a value is less than the fixed length: the adapter pads with spaces, but it writes 2 bytes per space instead of one (yet it seems to have counted the padding in characters...).
.fieldFixedLengths: 8,6,40,40
.fixedLengthTooShortHandling: Cut
.fieldNames: date,time,name1,name2
This concerns the receiver channel, so please don't answer for the sender side.
File Adapter

Can you please tell me what will appear in the output file if the values are:
date:  062309
time:   2240
name1: hello
name2: hello2
If a space does not display correctly here, just use "S" to stand for a space character.
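For reference, the padding arithmetic can be illustrated outside XI. The sketch below (plain Python, purely an illustration; `pad_sjis` is a hypothetical helper, not an adapter API) pads to a fixed byte length in Shift_JIS, which is the behaviour one would usually want for a fixed-length file containing kanji. XI's FCC appears to count characters instead, which is why double-byte padding shows up.

```python
def pad_sjis(value: str, width: int, encoding: str = "shift_jis") -> bytes:
    """Encode `value` and pad with single-byte spaces up to `width` bytes."""
    raw = value.encode(encoding)
    if len(raw) > width:
        raise ValueError(f"{value!r} is {len(raw)} bytes, exceeds {width}")
    return raw + b" " * (width - len(raw))

# ASCII: character count equals byte count, so padding behaves as expected.
assert pad_sjis("062309", 8) == b"062309  "

# Kanji: each character is 2 bytes in Shift_JIS, so a 3-character value
# already occupies 6 bytes of a 40-byte field; 34 single-byte spaces follow.
padded = pad_sjis("\u6771\u4eac\u90fd", 40)  # 東京都
assert len(padded) == 40
```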

Similar Messages

  • Complex file receiver content conversion

    My challenge is to change this xml structure to a flat file structure using content conversion in a file receiver adapter. My problem is, that I have a record within a record and both records can occur multiple times:
          <ResultList>  (0..unbounded)
                <ResultDetail> (0..unbounded)
    Any suggestions?
    Maybe some sort of xml flattener before doing the content conversion would do the trick, but then again how is that to be done?
    BR Mikael

    a small trick might help - /people/shabarish.vijayakumar/blog/2010/01/14/file-conversion-using-nodeception
    Also do read - /people/shabarish.vijayakumar/blog/2007/08/03/file-adapter-receiver--are-we-really-sure-about-the-concepts

  • File receiver content conversion fields attributes

    I am trying to use the file adapter to write this XML to a flat file, but the attributes of the fields are not written to the file (only elements are written).
    the xml file is:
    <?xml version="1.0" encoding="utf-8"?>
    <n0:MT_Mits_Claims xmlns:prx="" xmlns:n0="">
          <File_Descr>CLAIM RESULT DATA</File_Descr>
          <Header_EA File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="EA" Claim_Type="P" Division="" RFC_Seq_No="" Filler=""/>
          <Header_HA File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Page_ID="01" Line_ID="HA" VIN="VIN260" Faliure_Date="200810" Odometer_Reading=" 1204" Sold_Date="080820"/>
          <Header_HB File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="HB" Position_Code="111221" A_Code="12" B_Code="03" C_Code="1" Ref_Code="" Manual_Control="" Comment_Code="" Ratio_Labor="" Ratio_Parts="" Manuf_Code="" Filler=""/>
          <Details_LA_LE File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="LA" Labor_Pos_Code="" Work_Code="99" Qty="10" Amount="" Ratio="" Comment_Code="" Factory="" Filler=""/>
          <Details_LA_LE File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="LA" Labor_Pos_Code="231110" Work_Code="10" Qty="01" Amount="" Ratio="" Comment_Code="" Factory="" Filler=""/>
          <Details_PA_PK File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="0211" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="PA" Parts_No="" Qty="01" Faliure_Origin="X" Price="0050000" Ratio="" Comment_Code="" Factory="" Filler=""/>
    the content conversion parameters are:
    Header.addHeaderLine 0
    Header.fieldFixedLengths 6,4,2,30,7,31,0
    Header.fixedLengthTooShortHandling Cut
    Details.addHeaderLine 0
    Details.fieldSeparator 'nl'
    Trailer.addHeaderLine 0
    Trailer.fieldFixedLengths 3,4,2,3,65
    the output file that I get is:
    GDM001KS  H1CLAIM RESULT DATA             0000008                              
    GEEKS  E1END         
    What do I need to do to get the attributes into the file?

    Assuming the scenario you are trying to implement is XML file to flat file: you need to map the input structure, including attributes, to the flat file structure. Then have your content conversion output the flat file structure.
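    The reply above can be illustrated with a sketch: before content conversion sees the message, every attribute has to become an element. The Python below is an illustration only (in XI this transformation would be done in the message mapping or an XSLT, not in Python); it hoists attributes into child elements.

```python
import xml.etree.ElementTree as ET

def attributes_to_elements(elem: ET.Element) -> None:
    """Recursively turn every attribute into a child element, since
    content conversion only writes element values, never attributes."""
    for child in list(elem):
        attributes_to_elements(child)
    for name, value in list(elem.attrib.items()):
        ET.SubElement(elem, name).text = value
    elem.attrib.clear()

doc = ET.fromstring('<Header_EA File_ID="GDM001" Line_ID="EA"/>')
attributes_to_elements(doc)
assert sorted(c.tag for c in doc) == ["File_ID", "Line_ID"]
assert doc.find("File_ID").text == "GDM001"
```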

  • File Receiver Content Conversion

    I have the following XML:
                   <General_Name>Edu and Train Access</General_Name>
                   <Description>Education and Training Access</Description>
                   <Person_Responsible>Shala Karan</Person_Responsible>
                   <Department>ErlyChld Prg</Department>
    I want to write it to a CSV file. However, I only want to write the values in the <Record> element and ignore everything else. Is this possible? At the moment, in my content conversion, I have Record entered in the Recordset Structure field. In the details below that I am setting the Record.addHeaderLine, Record.headerLine, Record.fieldSeparator, and Record.endSeparator attributes.
    However, what this generates is the following:
    It includes the <n0:ESB_Header> values (which I don't want) and only the first value of <Record>.
    Ideally the output would look like this:
    0010100010|Edu and Train Access|1900-07-01|9999-12-31|Education and Training Access|Shala Karan|ErlyChld Prg|1|1010|AUD|Z00001
    Notice also that the header is pipe-delimited. Is there any way to set this? So far I can only get the header to be comma-delimited.

    I want to write it to a CSV file. However, I only want to write the values in the <Record> element and ignore everything else. Is this possible?
    Yes, it's possible. The rest of the configuration seems to be fine. Refer to the following link.
    Try refreshing the cache as well; the channel might be using an old configuration.

  • HT6015 Issues with .vpptoken files how do I fix the following error?

    I'm probably being silly here, but I can't seem to find out how to fix this issue. I have downloaded a .vpptoken to distribute apps to our network's iPads via OS X Server for Mavericks, but I receive the following error when trying to pull the .vpptoken into the OS X Server application...
    Error Configuring VPP Managed Distribution.
    Unable to configure VPP managed distribution with the given token.
    This token was downloaded via the VPP program set up by Apple themselves, and I'm now unable to use the apps we purchased. I need to fix this ASAP or receive a full refund for the purchases so that I can re-buy them using redeemable codes and just use Apple Configurator. I was hoping to save myself some time by using the VPP token system; obviously I was wrong.
    Best Regards

    For those who have also encountered this: it appears that Apple's servers were not supplying the correct .vpptoken file. Since downloading a new .vpptoken file today, I have been able to use VPP and the error has been resolved.

  • File Sender, Content Conversion - how to define variable length last field?

    XI 3.0 SP17
    With a File Sender communication channel, that uses Content Conversion - how do I define a 'variable length' last field?
    The scenario - the input file has four fields, of which the first three are a known fixed length, and the last (fourth, trailing) field is variable in length.
    Using a Message Protocol of 'File Content Conversion', how do I define that last variable length field (field name 'WOData' below) in the Content Conversion Parameters section?
    My current parameters are:
    Recordset Structure  -  Row,*
    ignoreRecordsetName  -  true
    Row.fieldFixedLengths  -  1,12,5,99999
    Row.fieldNames  -  WOType,WONum,WOLine,WOData
    I've tried the following for 'Row.fieldFixedLengths' to no avail -
    The last two were grasping at straws :)
    The only thing I've got to work is specifying a 'large' value for the final field (99999 above).
    In addition, does anyone know if specifying a large value (e.g. 99999) for the final trailing field will give rise to performance issues when the file is being processed?
    In the help for "Converting File Content in a Sender Adapter", it states -
    <Begin Quote>
    If you make a specification here, the system expects a character string that contains the lengths of the structure columns as arguments separated by commas.
    If you also specify a separator for the columns, you must not add its length to the length of the columns.
    This entry is mandatory if you have not made an entry for NameA.fieldSeparator.
    <End Quote>

    << note that fieldFixedLengths will not take any wildcard entries like *. So in these case it is ideal to provide a maximum char length.  But note that while the file is being created that many spaces will be created in your file !!! >>
    Hi Shabarish,
    Yes, no wildcard is the conclusion I came to, hence my maximum :)
    The message size did not increase by any 'blank padding'.  When I look in [Message Display Tool (Detail Display)] 'Audit Log for Message: X'  -
    2006-10-17 18:22:42 Success Channel X: Entire file content converted to XML format
    2006-10-17 18:22:42 Success Send binary file  "X" from FTP server "X", size 103290 bytes with QoS EO
    2006-10-17 18:22:42 Success Application attempting to send an XI message asynchronously using connection AFW.
    2006-10-17 18:22:42 Success Trying to put the message into the send queue.
    2006-10-17 18:22:42 Success Message successfully put into the queue.
    2006-10-17 18:22:42 Success The application sent the message asynchronously using connection AFW. Returning to application.
    The input flat file in non-XML format was 92,132 bytes and the message payload into XI was 103,290 bytes.
    My understanding is that trailing spaces are stripped from XML nodes.
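    As a sanity check on the "large final length" approach, here is a sketch of how such a row splits (plain Python; the field names come from the parameters above, the sample line is invented): the fixed prefix is sliced by length, and the remainder, minus trailing padding, becomes the last field.

```python
FIELD_LENGTHS = [1, 12, 5]                       # WOType, WONum, WOLine
FIELD_NAMES = ["WOType", "WONum", "WOLine", "WOData"]

def split_row(line: str) -> dict:
    """Slice the fixed-length prefix, then take the rest of the line as
    the variable trailing field, stripping trailing padding/spaces."""
    fields, pos = [], 0
    for length in FIELD_LENGTHS:
        fields.append(line[pos:pos + length])
        pos += length
    fields.append(line[pos:].rstrip())
    return dict(zip(FIELD_NAMES, fields))

row = split_row("A" + "WO0000000123" + "00010" + "free-form work order text   ")
assert row["WOType"] == "A"
assert row["WONum"] == "WO0000000123"
assert row["WOLine"] == "00010"
assert row["WOData"] == "free-form work order text"
```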

  • File Sender Content Conversion: Help needed

    Hello Experts,
    I need help with File Sender content conversion:
    i have a file which looks like this:
    12329460  24.01.09/07:01  167     Y010122851  136086  43300007            E70115  1L2_96_1
    12329660  25.01.09/07:02  157     Y010122851  136086  43390007            E711J5  1L2_96_1
    as you can see, 8 fields separated by whitespace
    and I want an XML file which looks like this:
    Would you please let me know how the datatype has to look like?
    And especially how the FCC has to be configured for this scenario?
    Thanks in advance,

    > With fixed lengths i get it working,
    > but with de fieldseparator 0X09 it does not work,
    > would you please tell me the complete FCC config not only the fieldSeparation line?
    Well Christian,
    One thing: you can use either fieldSeparator or fieldFixedLengths, but you cannot use both together. So in your case, do not use fieldSeparator.
    The complete FCC is almost the same as given in the blog in my previous reply. See the final output in that blog and create your data type accordingly, e.g.
    ------Item 0...unbound
    --------Field1 0..1
    --------Field2 0..1
    --------Field3 0..1
    --------Field8 0..1

  • Unique issue with PDF to WORD .doc conversion with Acrobat Pro - any ideas?

    I have been unable to solve the following issue when converting (save as...) PDF documents to Microsoft Word .doc using numerous methods. This could either be an issue that would be fixed in Acrobat Pro itself, or in MS Word - posting to the Adobe forums first.
    PREFACE: I am attempting to use the converted .doc file with translation applications/software. Google Translator Toolkit is what I use the most, but ALL other translators are having this very same issue with the .doc file. --The source PDFs are product information from drug manufacturers in various countries that I need to have translated to English. I do not have access to their source documents, as they do not provide their own source docs for obvious reasons.
    ALSO: I cannot use Google Translator toolkit to translate from PDFs directly - if you do that, it will attempt to translate a PDF and then export in an .html file, but it does not get the exact spacing of the sentences correctly, which leads to errors in translating - key things such as "can take with alcohol" and "do not take with alcohol". So that's out!
    I am not having any problems with the resultant .doc file in MS Word itself. It looks right, the spacing matches the original PDF source perfectly, prints correctly, etc... Reference here on a product info sheet from Austria in German:
    The problem: This is a screenshot from Google Translator Toolkit. On the right side of the image, the spacing in the lettering from the .doc file I am uploading is not being read correctly, resulting in untranslated gibberish. (Note: this isn't a problem with the translation applications or software -- all are having this issue with .doc files converted from .pdf; this issue isn't present with any old .doc file that wasn't converted from a .pdf.) It's definitely got something to do with some kind of embedded data in the .doc file that I cannot isolate!
    My settings in Adobe Pro (convert from PDF to .doc):
    Page layout: Flowing Text (this prevents the resultant .doc from having all of those text boxes, which also don't then work in translators)
    Include comments: True
    Include images: True
    Run OCR if needed: True
    -I have run OCR text recognition on the source PDF files in their specific languages.
    -I have edited the accessibility of the PDF and have run the tag recognition and quick checks (to see if they solved the issue, which they did not - tagged or untagged, same problems!)
    -I have exported the .doc BACK to PDF using MS Word's function, which results in a great-looking tagged PDF. THEN I re-saved this new PDF back as a .doc - same issue.
    -I have tried saving the PDF in all of the other formats that the translators accept. All have different issues. The only one that works consistently is saving to a .txt (plain)... The best is a .doc to .doc conversion, with all the original spacing. (I am not spending hours reformatting a .txt translation in word)...
    I can't seem to find where this spacing data is in the .doc file! (Changing the fonts, sizes, margins doesn't fix this either.) I have tried so many methods...
    Any thoughts on other things to try in Adobe Pro (or Word)?
    EDIT: Here's an additional tidbit of info that may be the key to this... There's some kind of coding in the .doc that Adobe Pro converted from the source PDF that doesn't display in Word, but that is being seen by the translation programs... I have no idea what these codes are, but I want to remove them!
    Message was edited by: KaotikADC

    I would suggest you look at the fonts that are being used. It may be a font issue that is not properly being read by the translation program.

File Adapter content conversion delimited/positional file format.

    I have the following file-to-JDBC scenario, but am having some issues with the file content conversion due to the file structure.
    000038A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049572=BN01 =BOMETLSS_ML_STD_30A7
    Example 2:
    000040A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049570=BN01 =BOMETLSS_ML_STD_30A7
    000041A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049571=BN01 =BOMETLSS_ML_STD_30A7
    000042A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049572=BN01 =BOMETLSS_ML_STD_30A7
    Example Explained:
    Position 1-9 is a "Transactional number".
    Position 10-11 is "Record type".
    Position 12-13 is "Line Item count".
    Four record types exist:
    03 = Location header
    01 = Transactional Header
    02 = Line Item
    OLRENDZZZZ = EoF marker.
    The equal sign "=" is a field separator/delimiter.
    In each delimited field, after the first equal sign in the record, the first two characters represent a field qualifier/field name tag/identifier, and only then does the data begin, continuing until the following delimiter.
    Each record is ended by a CRLF/'nl'.
    The file is built up incrementally, without being locked, and is only complete once the EoF marker "OLRENDZZZZ" is inserted by the application as the last record of the file.
    My solution so far:
    Record Structure: row,*
    Record Sequence: Ascending
    row.fieldNames: field1,field2,field3,etc.
    row.fieldSeparator: =
    row.endSeparator: 'nl'
    row.keyFieldInStructure: ignore
    ignoreRecordsetName: true
    This brings the file into the Integration Server as XML, as follows:
    <?xml version="1.0" encoding="utf-8"?>
    <ns:SAPtoFuelFACS xmlns:ns="urn:engenoil-com:i_fuel_facs_sap">
    So far, so good.
    The problem I am having is that I have to check for the EoF marker "OLRENDZZZZ" to be present before picking up the file, else the file is not completed.
    I have tried a script to rename files in message pre-processing in the channel, but the file channel has to be triggered and the original file mask is necessary for this - and that mask is then a valid pickup mask. So it seems to me the only way to do this is during the content conversion process, so that files not matching the criteria - i.e. where the EoF marker "OLRENDZZZZ" is not present - are ignored until it appears; or else handle it totally independently with a batch job.
    If someone has a more elegant way to solve this problem using just the file channel configuration, where everything is pretty much apparent, I would greatly appreciate your assistance.
    Willie Hugo

    The problem I am having is that I have to check for the EoF marker "OLRENDZZZZ" to be present before picking up the file, else the file is not completed.
    I suggest a script.
    Say the files are dropped in FolderA. Have a script transfer a file to FolderB only if it finds the EoF marker in the file. FolderB is then what XI polls, and it will always contain complete files.
    Hope this sounds good!!!
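    A minimal version of that script might look like this (Python sketch; the folder names and the *.txt mask are assumptions, and on a real system it would run as a scheduled OS job outside XI):

```python
import shutil
from pathlib import Path

EOF_MARKER = "OLRENDZZZZ"

def move_complete_files(src: Path, dst: Path) -> list:
    """Move files from src to dst only once the EoF marker is present,
    so the polling folder only ever contains complete files."""
    moved = []
    dst.mkdir(parents=True, exist_ok=True)
    for path in src.glob("*.txt"):          # file mask is an assumption
        if EOF_MARKER in path.read_text(errors="replace"):
            shutil.move(str(path), str(dst / path.name))
            moved.append(path.name)
    return moved
```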

XML-IDOC to Plain File: File Receiver Content Conversion Problem with Nested Structures

    Hi all,
    I have an IDOC-XI-File scenario and I have a problem with the file receiver adapter and the content conversion parameters when the final data type has nested structures. Imagine that I have something similar to the following:
    My desire is to get something similar to this in the output file:
    But what we are getting is this:
    The content conversion parameters of the file receiver are as follow:
    Recordset Structure: IDOC,EDI_DC40,E1STATS,Z1HDSTAT,Z1ITSTAT
    IDOC.fieldSeparator: ;
    IDOC.endSeparator: 'nl'
    IDOC.addHeaderLine: 0
    EDI_DC40.fieldSeparator: ;
    EDI_DC40.endSeparator: 'nl'
    EDI_DC40.addHeaderLine: 0
    E1STATS.fieldSeparator: ;
    E1STATS.endSeparator: 'nl'
    E1STATS.addHeaderLine: 0
    Z1HDSTAT.fieldSeparator: ;
    Z1HDSTAT.endSeparator: 'nl'
    Z1HDSTAT.addHeaderLine: 0
    Z1ITSTAT.fieldSeparator: ;
    Z1ITSTAT.endSeparator: 'nl'
    Z1ITSTAT.addHeaderLine: 0
    I've tried to use the parameter beginSeparator='nl' for the Z1ITSTAT segments, but it's not working. I haven't been able to find a solution in the other forums. Can anybody help me?
    Thanks in advance
    Roger Allué Vall

    Can you explain it with my example? I can't see what you mean.
    .<IDOC BEGIN="1">
    ...........<Z1ITSTAT SEGMENT="SEGMENT3">
    ...........<Z1ITSTAT SEGMENT="SEGMENT3">
    ...........<Z1ITSTAT SEGMENT="SEGMENT3">
    ...........<Z1ITSTAT SEGMENT="SEGMENT3">

  • ...file Sender content conversion 'lastFieldsOptional'  error

    Hi All,
    I am working on a File Sender content conversion--> flat file to XI.
    It's a fixed-length file; all rows have the same columns.
    090627 344535AFDFG+GBP65433 ASDSSD GFD dFSSGFD 6757532
    090627 344535AFDFG-GBP65433 ASDSSD GFD dFSSGFD 6757532
    090627 344535AFDFG-GBP65433 ASDSSD GFD dFSSGFD 6757532
    090627 344535AFDFG-GBP65433 ASDSSD GFD dFSSGFD 6757532
    090628 344536AFDFG+GBP45434 ASDSSD GFD dFSSGFD 6757532
    090628 344536AFDFG-GBP45434 ASDSSD GFD dFSSGFD 6757532
    I am successfully able to handle the file if I remove "***EOF***" from the incoming file, but I get a "lastFieldsOptional" error when "***EOF***" is present.
         Recordset Structure - ROW,*
         Recordsets per message - *
        ROW.fieldFixedLengths - 6,8,20,10,30,1,3,1,11,1,11,6,10,10,20,10,10,2,6,6,2,6,6,3
        ROW.fieldNames - INV_DATE,INV_NO,PAYMENT_REF,CUST_NO,CUST_NAME...etc etc
       ignoreRecordsetName - true
    There are parameters available to ignore the last field (last column) of a row/recordset,
    but how do I ignore the last row of the file?
    Please suggest a parameter to ignore the last row/field of the file.

    .lastFieldsOptional is obsolete and cannot be used. Please see SAP help:
    The best thing that comes to my mind is to use:
    .keepIncompleteFields as YES
    Also give a try with:
    .missingLastfields as add
    With this, I think the last line of the file will be read into XI. But the value of that last line is ***EOF***, so its length is 9. Since your first two fixed lengths are 6 and 8, you will get the values ***EOF and ***. So in the mapping you can use "does not start with *" and map accordingly, so that the last line is ignored.
    Note: with this approach you have to make sure your first two columns never start with *. If in doubt, you can use the conditions "does not equal ***EOF" for the first field and "does not equal ***" for the second field when you map. I hope this makes sense to you.
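    The filtering idea in the reply can be sketched like this (plain Python, illustrative only; the field names come from the channel parameters above):

```python
def is_data_row(inv_date: str) -> bool:
    """Once the trailer line is read in as ordinary fields, a row whose
    first field starts with '*' is the split EOF marker, not data."""
    return not inv_date.startswith("*")

rows = [
    {"INV_DATE": "090627", "INV_NO": "344535AF"},
    {"INV_DATE": "***EOF", "INV_NO": "***"},   # the split trailer line
]
data = [r for r in rows if is_data_row(r["INV_DATE"])]
assert len(data) == 1 and data[0]["INV_DATE"] == "090627"
```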

  • Issue with Sender File FCC

    Hi Experts,
    I have an issue with a Sender File FCC adapter. The file being picked up is of type TXT and is tab-separated. The first line contains the field names, and from the next line onwards we have values for those fields.
    The field names and field values are tab-separated. Even inserting a single letter into some field value manually disrupts the whole setup and alignment of the TXT file, and the sender file CC is unable to pick up the file from the shared folder. If the first file is erroneous and a correct TXT file is posted after it, the channel fails to pick up the correct file because it keeps trying to pick up the erroneous file first.
    The Error thrown is :
    "Conversion of file content to XML failed at position 0: java.lang.Exception: ERROR converting document line no. 2 according to structure 'ABCD':java.lang.Exception: ERROR in configuration / structure 'ABCD.': More elements in file csv structure than field names specified!"
    I have two questions:
    1. Is there a way to handle such a scenario? E.g., the erroneous TXT file gets picked up but throws an error in PI.
    2. Is there an alternative so that the sender FCC channel picks up the correct files and filters out the erroneous ones?

    Hi Arkesh,
    I think you are passing more fields than expected. Please check the parameters defined and send the data accordingly.
    In the processing parameters tab of the sender file adapter, you have an option called "Archive Faulty Source Files"; below that you have the option to enter the "Directory for Archiving Files with Errors".
    I hope this helps you.

  • File sender content conversion 0..unbounded does not pull file

    I am trying to map a flat file using content conversion to this XML structure:
    job_row, part_row and remark are records that occur 0..unbounded.
    When I write in the recordset structure: Header,1,Claim_Header,1,Job_Row,1,Part_Row,1,Remark,1,Footer,1
    and put in a file with one record each, the file is pulled; but when I change the recordset structure to:
    Header,1,Claim_Header,1,Job_Row,*,Part_Row,*,Remark,*,Footer,1
    the file is not pulled.
    Can anyone tell me what I'm doing wrong?

    Refer to these links for FCC:
    Introduction to simple (File-XI-File) scenario and complete walk through for starters (Part 1)
    Introduction to simple (File-XI-File) scenario and complete walk through for starters (Part 2)
    File Receiver with Content Conversion
    Content Conversion (Pattern/Random content in input file)
    NAB the TAB (File Adapter)
    How to send a flat file with various field lengths and variable substructures to XI 3.0
    File Content Conversion for Unequal Number of Columns
    Content Conversion (The Key Field Problem)

  • File sender content conversion

    I have a txt file sender content conversion in SAP PI.
    I defined 6 field names in the content conversion (field1, field2, field3, field4, field5, field6), but the file that I am loading has only three fields (field1, field2, field3). The file still loads even though there is a difference between the structure I defined in PI and what is being loaded into PI.
    Is there a way to raise an error without loading the file?
    I read about the count function in the mapping. Isn't there a built-in function that is supposed to load only the structure that is defined?

    If you define 6 fields and only 3 are present in the test file, then you won't get an error (AFAIK).
    However, if you define 3 fields and there are 6 present in the source file, then you will get an error saying more parameters were found than expected.
    Isn't there a built-in function that is supposed to load only the structure that is defined?
    I don't think so... this may, however, be achieved using a custom adapter module.
    Just remembered that SAP PI (7.1 and above) comes with a built-in XML validation function... check if that helps in your design.
    Edited by: abhishek salvi on Dec 15, 2010 1:12 PM

  • Challenge in File Sender Content Conversion

    I have a real challenge concerning File Sender content conversion in SAP XI.
    My flat file looks like this:
    ##H   300
    MAR   206
    KAS 1
    DAT 01.03.08
    ART 1.129
    KUN 118
    EAN 4.499
    REL 5.0j 16.05.06
    SER             1
    ##E   300
    ##H   301
    DAT 01.03.08
    ZEI 07:54
    KAS 1
    ##E   301
    Each row represents a data field and has two values: The first one defines the field name, the second represents the field value. E.g. 'DAT' stands for Date and has the value 01.03.08 in the example.
    The fields belonging together in one data set are enclosed by a start qualifier (##H) and an end qualifier (##E).
    The value after these qualifiers (i.e. '300' and '301' in the example above) represent a certain record type, e.g. '300' represents Customer data and '301' represents Account Data.
    Is it possible with file content conversion to create the following XML structure:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:MT_DATA xmlns:ns0="">
              <REL>5.0j 16.05.06</REL>
    If it is not possible with content conversion, what could be an alternative? An adapter module? Multi-mapping?
    I'd really appreciate your input, as I have been working on this for several days without a solution.
    Thanks in advance.

    It is not possible through the adapter alone.
    There is a blog about converting such files to XML using a Java mapping. Search the forum for "flat file to IDoc or XML".
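    For a feel of what a mapping-based alternative might look like, here is a sketch of the parsing step (written in Python purely for illustration; in XI it would be implemented as a Java mapping): it groups the lines between each ##H/##E pair into one record keyed by the record type.

```python
def parse_blocks(text: str) -> list:
    """Parse ##H ... ##E blocks into one record dict per block."""
    records, current = [], None
    for line in text.splitlines():
        if not line.strip():
            continue
        tag, _, value = line.partition(" ")
        value = value.strip()
        if tag == "##H":                      # start qualifier: open record
            current = {"_type": value, "fields": {}}
        elif tag == "##E":                    # end qualifier: close record
            records.append(current)
            current = None
        elif current is not None:             # field name + field value
            current["fields"][tag] = value
    return records

sample = "##H   300\nDAT 01.03.08\nKAS 1\n##E   300\n##H   301\nZEI 07:54\n##E   301\n"
recs = parse_blocks(sample)
assert [r["_type"] for r in recs] == ["300", "301"]
assert recs[0]["fields"]["DAT"] == "01.03.08"
assert recs[1]["fields"]["ZEI"] == "07:54"
```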
