Issue with XI File Receiver "Content Conversion" fixed length and kanji

I need to create a fixed-length file for a customer that contains kanji (Shift-JIS) characters. The issue arises when a value is shorter than the fixed field length: the adapter pads with spaces, but it writes 2 bytes per space instead of one (it appears to have counted the padding in characters rather than bytes...).
.fieldFixedLengths: 8,6,40,40
.fixedLengthTooShortHandling: Cut
.fieldNames: date,time,name1,name2
This is a receiver channel issue, so please don't answer for the sender side.
File Adapter

Paul,
Can you please tell me what comes out in the output file if the values are:
date:  062309
time:   2240
name1: hello
name2: hello2
When you post the output here, if the spaces are not showing correctly, you can use S to stand for each space so we can interpret it.
---Satish
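
For reference, here is a minimal Java sketch of the mismatch being described (this is not the adapter's code, and the kanji value is hypothetical): padding the field to 40 characters overshoots a 40-byte Shift_JIS field whenever the value contains double-byte characters, while padding to 40 encoded bytes fits exactly.

    import java.nio.charset.Charset;

    public class SjisPadDemo {
        static final Charset SJIS = Charset.forName("Shift_JIS");

        // Pad by character count (what the adapter appears to be doing)
        static String padChars(String value, int length) {
            StringBuilder sb = new StringBuilder(value);
            while (sb.length() < length) sb.append(' ');
            return sb.toString();
        }

        // Pad by encoded byte count (what a fixed-length SJIS file usually needs)
        static String padBytes(String value, int length) {
            StringBuilder sb = new StringBuilder(value);
            while (sb.toString().getBytes(SJIS).length < length) sb.append(' ');
            return sb.toString();
        }

        public static void main(String[] args) {
            String name1 = "\u6771\u4eac";  // two kanji characters = 4 bytes in Shift_JIS
            System.out.println(padChars(name1, 40).getBytes(SJIS).length); // 42 bytes - overflows the field
            System.out.println(padBytes(name1, 40).getBytes(SJIS).length); // 40 bytes - fits the field
        }
    }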

Similar Messages

  • Complex file receiver content conversion

    Hi
    My challenge is to change this XML structure to a flat file structure using content conversion in a file receiver adapter. My problem is that I have a record within a record, and both records can occur multiple times:
    <statusUpdate_response>
       <ProcessID/>
       <SenderSystem/>
       <Results>
          <ResultList>  (0..unbound)
                <OKKode/>
                <Reference/>
                <Result/>
                <ResultDetail> (0..unbound)
                     <TYPE/>
                     <ID/>
                     <NUMBER/>
               </ResultDetail>
          </ResultList>
       </Results>
    </statusUpdate_response>
    Any suggestions?
    Maybe some sort of XML flattener applied before the content conversion would do the trick, but then again, how would that be done?
    BR Mikael

    a small trick might help - /people/shabarish.vijayakumar/blog/2010/01/14/file-conversion-using-nodeception
    Also do read - /people/shabarish.vijayakumar/blog/2007/08/03/file-adapter-receiver--are-we-really-sure-about-the-concepts
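    For what it's worth, here is a rough sketch of one possible "flattener" (only an illustration of the idea, not the technique from the blogs above, and the file names are placeholders): a small Java pre-processing step that moves every nested <ResultDetail> up to become a sibling of its <ResultList> before the content conversion runs. Whether the flattened structure then meets the receiver FCC restrictions depends on the rest of the message.
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class FlattenResultList {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new File("statusUpdate_response.xml"));
            NodeList lists = doc.getElementsByTagName("ResultList");
            for (int i = 0; i < lists.getLength(); i++) {
                Element list = (Element) lists.item(i);
                Node anchor = list.getNextSibling();            // keeps document order intact
                NodeList details = list.getElementsByTagName("ResultDetail");
                // The NodeList is live: moving a detail out of the list shrinks it
                while (details.getLength() > 0) {
                    list.getParentNode().insertBefore(details.item(0), anchor);
                }
            }
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(new File("flattened.xml")));
        }
    }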

  • File receiver content conversion fields attributes

    Hi,
    I am trying to use the file adapter to write this XML to a flat file, but the attributes of the fields are not written to the file (only the elements are written).
    The XML file is:
    <?xml version="1.0" encoding="utf-8"?>
    <n0:MT_Mits_Claims xmlns:prx="urn:sap.com:proxy:DE1:/1SAI/TAS6EAA2B8AB4A5D2DA4145:700:2008/06/25" xmlns:n0="http://colmobil.com/wty/1162/claims_to_manufacturer/mits/">
       <Header>
          <File_ID>GDM001</File_ID>
          <Dist_Code>KS</Dist_Code>
          <Header_Code>H1</Header_Code>
          <File_Descr>CLAIM RESULT DATA</File_Descr>
          <Total_Records>0000008</Total_Records>
          <Filler/>
          <Filename>GDM0010812100016.DAT</Filename>
       </Header>
       <Details>
          <Header_EA File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="EA" Claim_Type="P" Division="" RFC_Seq_No="" Filler=""/>
          <Header_HA File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Page_ID="01" Line_ID="HA" VIN="VIN260" Faliure_Date="200810" Odometer_Reading=" 1204" Sold_Date="080820"/>
          <Header_HB File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="HB" Position_Code="111221" A_Code="12" B_Code="03" C_Code="1" Ref_Code="" Manual_Control="" Comment_Code="" Ratio_Labor="" Ratio_Parts="" Manuf_Code="" Filler=""/>
          <Details_LA_LE File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="LA" Labor_Pos_Code="" Work_Code="99" Qty="10" Amount="" Ratio="" Comment_Code="" Factory="" Filler=""/>
          <Details_LA_LE File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="2110" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="LA" Labor_Pos_Code="231110" Work_Code="10" Qty="01" Amount="" Ratio="" Comment_Code="" Factory="" Filler=""/>
          <Details_PA_PK File_ID="GDM001" Dist_Code1="KS" Detail_Code="D1" Domestic_Export="E" Dist_Code2="KS" Dealer_Code="0211" Seq_No="0902" Sub_Seq_No="" Page_ID="01" Line_ID="PA" Parts_No="" Qty="01" Faliure_Origin="X" Price="0050000" Ratio="" Comment_Code="" Factory="" Filler=""/>
          <Total_Claim_Lines>6</Total_Claim_Lines>
       </Details>
       <Trailer>
          <File_ID>GEE</File_ID>
          <Dist_Code>KS</Dist_Code>
          <trailer_Code>E1</trailer_Code>
          <End_Code>END</End_Code>
          <Filler/>
       </Trailer>
    </n0:MT_Mits_Claims>
    The content conversion parameters are:
    Header.addHeaderLine 0
    Header.fieldFixedLengths 6,4,2,30,7,31,0
    Header.fixedLengthTooShortHandling Cut
    Details.addHeaderLine 0
    Details.fieldSeparator 'nl'
    Trailer.addHeaderLine 0
    Trailer.fieldFixedLengths 3,4,2,3,65
    The output file that I get is:
    GDM001KS  H1CLAIM RESULT DATA             0000008                              
    6
    GEEKS  E1END         
    What do I need to do to get the attributes into the file?
    Thanks
    Tomer

    Hi
    Assuming the scenario you are trying to implement is XML file to flat file: you need to map the input structure, including the attributes, to elements of the flat file structure. Then run the content conversion on that flat file structure.
    Thanks
    Damien
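    As an illustration of the point above (the receiver content conversion writes element values only, so the attributes have to become elements first), here is a rough standalone sketch of the kind of pre-processing one might do in a Java step; the file names are placeholders, and it is only needed if the attributes cannot simply be mapped to elements in the graphical mapping as Damien describes.
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Attr;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NamedNodeMap;
    import org.w3c.dom.NodeList;

    public class AttrToElements {
        // Turn each attribute of a record element into a child element,
        // since the receiver content conversion writes element values only.
        static void convert(Element record) {
            Document doc = record.getOwnerDocument();
            NamedNodeMap attrs = record.getAttributes();
            List<Attr> copy = new ArrayList<>();               // the map is live, so copy first
            for (int i = 0; i < attrs.getLength(); i++) copy.add((Attr) attrs.item(i));
            for (Attr a : copy) {
                Element child = doc.createElement(a.getName());
                child.setTextContent(a.getValue());
                record.appendChild(child);
                record.removeAttributeNode(a);
            }
        }
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new File("claims.xml"));
            for (String tag : new String[] {"Header_EA", "Header_HA", "Header_HB",
                                            "Details_LA_LE", "Details_PA_PK"}) {
                NodeList records = doc.getElementsByTagName(tag);
                for (int i = 0; i < records.getLength(); i++) convert((Element) records.item(i));
            }
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(new File("claims_flat.xml")));
        }
    }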

  • File Receiver Content Conversion

    I have the following XML:
    <n0:CostCentreServiceCreateRequest>
         <n0:ESB_Header>
              <MessageId>4E77E43F-1D1D-0096-E100-8000AC182411</MessageId>
              <Timestamp>2011-09-19T14:00:00Z</Timestamp>
              <SourceSystem>SAP</SourceSystem>
              <TargetSystem>MDS</TargetSystem>
              <ActionType>CostCentreServiceCreateRequest</ActionType>
         </n0:ESB_Header>
         <n0:CostCentre>
              <Record>
                   <Cost_Centre>0010100010</Cost_Centre>
                   <Valid_From_Date>1900-07-01</Valid_From_Date>
                   <Valid_To_Date>9999-12-31</Valid_To_Date>
                   <General_Name>Edu and Train Access</General_Name>
                   <Description>Education and Training Access</Description>
                   <Person_Responsible>Shala Karan</Person_Responsible>
                   <Department>ErlyChld Prg</Department>
                   <Cost_Center_Category>1</Cost_Center_Category>
                   <Company_Code>1010</Company_Code>
                   <Currency_Key>AUD</Currency_Key>
                   <Costing_Sheet>Z00001</Costing_Sheet>
              </Record>
         </n0:CostCentre>
    </n0:CostCentreServiceCreateRequest>
    I want to write it to a CSV file. However, I only want to write the values in the <Record> element and ignore everything else. Is this possible? At the moment, in my content conversion, I have Record entered in the Recordset Structure field. In the details below that I am setting the Record.addHeaderLine, Record.headerLine, Record.fieldSeparator, and Record.endSeparator attributes.
    However, what this generates is the following:
    CODE,NAME,vaild_to_date,vaild_from_date,description,person_responsible,department,category,company_code,currency,actual_primary_posting_locked_flag,actual_revenu_posting_locked_flag,costing_sheet
    4E77DA9E-8943-0099-E100-8000AC182411|2011-09-19T14:00:00Z|SAP|MDS|CostCentreServiceCreateRequest
    0010100010
    It includes the <n0:ESB_Header> values (which I don't want) and only the first value of <Record>.
    Ideally the output would look like this:
    CODE|NAME|vaild_to_date|vaild_from_date|description|person_responsible|department|category|company_code|currency|actual_primary_posting_locked_flag|actual_revenu_posting_locked_flag|costing_sheet
    0010100010|Edu and Train Access|1900-07-01|9999-12-31|Education and Training Access|Shala Karan|ErlyChld Prg|1|1010|AUD|Z00001
    Notice also that the header is pipe-delimited. Is there any way to set this? So far I can only get the header to be comma-delimited.
    Thanks,
    Krishneel

    I want to write it to a CSV file. However, I only want to write the values in the <Record> element and ignore everything else. Is this possible?
    Yes, it's possible. The rest of your configuration seems to be fine. Refer to the following link.
    http://help.sap.com/saphelp_nwpi71/helpdata/en/44/686e687f2a6d12e10000000a1553f6/frameset.htm
    Try refreshing the cache as well; it might be picking up the old configuration.
    Regards
    Raj

  • HT6015: Issues with .vpptoken files - how do I fix the following error?

    I'm probably being silly here, but I can't seem to find out how to fix this issue. I have downloaded a .vpptoken to distribute apps to our network's iPads via OS X Server for Mavericks, but I receive the following error when trying to pull the .vpptoken into the OS X Server application...
    Error Configuring VPP Managed Distribution.
    Unable to configure VPP managed distribution with the given token.
    This token was downloaded via the VPP program set up by Apple themselves, and I'm now unable to use the apps we purchased. I need to fix this ASAP or receive a full refund for the purchases so that I can re-buy them using redeemable codes and just use Apple Configurator. I was hoping to save myself some time by doing this via the VPP token system; obviously I was wrong.
    Best Regards
    ITtechnician

    For those who have also encountered this: it appears that Apple's servers were not supplying the correct .vpptoken file. Since downloading a new .vpptoken file today, I have been able to use VPP and the error has been resolved.

  • File Sender, Content Conversion - how to define variable length last field?

    XI 3.0 SP17
    With a File Sender communication channel that uses Content Conversion, how do I define a 'variable length' last field?
    The scenario - the input file has four fields, of which the first three are a known fixed length, and the last (fourth, trailing) field is variable in length.
    Using a Message Protocol of 'File Content Conversion', how do I define that last variable length field (field name 'WOData' below) in the Content Conversion Parameters section?
    My current parameters are:
    Recordset Structure  -  Row,*
    ignoreRecordsetName  -  true
    Row.fieldFixedLengths  -  1,12,5,99999
    Row.fieldNames  -  WOType,WONum,WOLine,WOData
    I've tried the following for 'Row.fieldFixedLengths' to no avail -
    '1,12,5,*'
    '1,12,5,0'
    '1,12,5,'
    '1,12,5'
    The last two were grasping at straws. :)
    The only thing I've got to work is specifying a 'large' value for the final field (99999 above).
    In addition, does anyone know if specifying a large value (e.g. 99999) for the final trailing field will give rise to performance issues when the file is being processed?
    In the help for "Converting File Content in a Sender Adapter", it states -
    <Begin Quote>
    NameA.fieldFixedLengths
    If you make a specification here, the system expects a character string that contains the lengths of the structure columns as arguments separated by commas.
    If you also specify a separator for the columns, you must not add its length to the length of the columns.
    This entry is mandatory if you have not made an entry for NameA.fieldSeparator.
    <End Quote>
    http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm

    << Note that fieldFixedLengths will not take any wildcard entries like *. So in this case it is ideal to provide a maximum char length. But note that while the file is being created, that many spaces will be created in your file !!! >>
    Hi Shabarish,
    Yes, no wildcard is the conclusion I came to, hence my maximum. :)
    The message size did not increase by any 'blank padding'.  When I look in [Message Display Tool (Detail Display)] 'Audit Log for Message: X'  -
    2006-10-17 18:22:42 Success Channel X: Entire file content converted to XML format
    2006-10-17 18:22:42 Success Send binary file  "X" from FTP server "X", size 103290 bytes with QoS EO
    2006-10-17 18:22:42 Success Application attempting to send an XI message asynchronously using connection AFW.
    2006-10-17 18:22:42 Success Trying to put the message into the send queue.
    2006-10-17 18:22:42 Success Message successfully put into the queue.
    2006-10-17 18:22:42 Success The application sent the message asynchronously using connection AFW. Returning to application.
    The input flat file in non-XML format was 92,132 bytes and the message payload into XI was 103,290 bytes.
    My understanding is that trailing spaces are stripped from XML nodes.

  • File Sender Content Conversion: Help needed

    Hello Experts,
    I need help with File Sender content conversion:
    I have a file which looks like this:
    12329460  24.01.09/07:01  167     Y010122851  136086  43300007            E70115  1L2_96_1
    12329660  25.01.09/07:02  157     Y010122851  136086  43390007            E711J5  1L2_96_1
    As you can see, there are 8 fields separated by whitespace,
    and I want an XML file which looks like this:
    <DT_DATA_FILESENDER>
      <Recordset>
          <Data> 
            <field1>12329460</field1>
            <field2>24.01.09/07:01</field2>
            <field3>167</field3>    
            <field4>Y010122851</field4> 
            <field5>136086</field5> 
            <field6>43300007</field6>
            <field7>E70115</field7> 
            <field8>1L2_96_1</field8>
         </Data>
          <Data> 
            <field1>12329660</field1>
            <field2>25.01.09/07:02</field2>
            <field3>157</field3>    
            <field4>Y010122851</field4> 
            <field5>136086</field5> 
            <field6>43390007</field6>
            <field7>E711J5</field7> 
            <field8>1L2_96_1</field8>
         </Data>
      </Recordset>
    </DT_DATA_FILESENDER>
    Would you please let me know how the datatype has to look like?
    And especially how the FCC has to be configured for this scenario?
    Thanks in advance,
    Chris

    > With fixed lengths I get it working,
    >
    > but with the fieldSeparator 0X09 it does not work,
    > would you please tell me the complete FCC config, not only the fieldSeparator line?
    Well Christian,
    There is one thing: you can use either fieldSeparator or fieldFixedLengths; you cannot use both together. So in your case, do not use fieldSeparator.
    The complete FCC is almost the same as given in the blog in my previous reply. See the final output in that blog and create your data type accordingly, e.g.
    Recordset
    ------Item 0...unbound
    --------Field1 0..1
    --------Field2 0..1
    --------Field3 0..1
    --------Field8 0..1
    Regards,
    Sarvesh

  • Unique issue with PDF to WORD .doc conversion with Acrobat Pro - any ideas?

    I have been unable to solve the following issue when converting (save as...) PDF documents to Microsoft Word .doc using numerous methods. This could either be an issue that would be fixed in Acrobat Pro itself, or in MS Word - posting to the Adobe forums first.
    PREFACE: I am attempting to use the converted .doc file with translation applications/software. Google Translator Toolkit is what I use the most, but ALL other translators are having this very same issue with the .doc file. --The source PDFs are product information from drug manufacturers in various countries that I need to have translated to English. I do not have access to their source documents, as they do not provide their own source docs for obvious reasons.
    ALSO: I cannot use Google Translator toolkit to translate from PDFs directly - if you do that, it will attempt to translate a PDF and then export in an .html file, but it does not get the exact spacing of the sentences correctly, which leads to errors in translating - key things such as "can take with alcohol" and "do not take with alcohol". So that's out!
    I am not having any problems with the resultant .doc file in MS Word itself. It looks right, the spacing matches the original PDF source perfectly, it prints correctly, etc. (The reference here is a product info sheet from Austria in German.)
    The problem: This is a screenshot from Google Translator Toolkit - on the right side of the image, the spacing in the lettering from the .doc file I am uploading is not being read correctly, resulting in untranslated gibberish. (Note: this isn't a problem with the translation applications or software -- all of them have this issue with .doc files converted from .pdf, and the issue isn't present with any old .doc file that wasn't converted from a .pdf.) It's definitely got something to do with some kind of embedded data in the .doc file that I cannot isolate!
    My settings in Adobe Pro (convert from PDF to .doc):
    Page layout: Flowing Text (this prevents the resultant .doc from having all of those text boxes, which also don't then work in translators)
    Include comments: True
    Include images: True
    Run OCR if needed: True
    Notes:
    -I have run OCR text recognition on the source PDF files in its specific language.
    -I have edited the accessibility of the PDF and have run the tag recognition and quick checks (to see if they solved the issue, which they did not - tagged or untagged, same problems!)
    -I have exported the .doc BACK to PDF using MS Word's function, which results in a great-looking tagged PDF. THEN I re-saved this new PDF back as a .doc - same issue.
    -I have tried saving the PDF in all of the other formats that the translators accept. All have different issues. The only one that works consistently is saving to a .txt (plain)... The best is a .doc to .doc conversion, with all the original spacing. (I am not spending hours reformatting a .txt translation in Word)...
    I can't seem to find where this spacing data is in the .doc file! (Changing the fonts, sizes, and margins doesn't fix this either.) I have tried so many methods...
    Any thoughts on other things to try in Acrobat Pro (or Word)?
    EDIT: Here's an additional tidbit of info that may be the key to this... There's some kind of coding in the .doc that Acrobat Pro produced from the source PDF that doesn't display in Word, but that is being seen by the translation programs... I have no idea what it is, but I want to remove it!
    Message was edited by: KaotikADC

    I would suggest you look at the fonts that are being used. It may be a font issue that is not properly being read by the translation program.

  • File Adapter content conversion delimited/positional file format.

    Hi,
    I have the following File-to-JDBC scenario, but I am having some issues with the file content conversion due to the file structure.
    Example:
    =======
    000038A020301
    000038A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049572=BN01 =BOMETLSS_ML_STD_30A7
    000038A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992427=AH10
    OLRENDZZZZ
    Example 2:
    ========
    000040A020301
    000040A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049570=BN01 =BOMETLSS_ML_STD_30A7
    000040A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992425=AH10
    000041A020301
    000041A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049571=BN01 =BOMETLSS_ML_STD_30A7
    000041A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992426=AH10
    000042A020301
    000042A020101=AA1=AC1=AD=AG1=AH1=AI1=AK3049572=BN01 =BOMETLSS_ML_STD_30A7
    000042A020200=AA96=AB001=AC17000.000=AD1200=AF13021537=AE=AG8005992427=AH10
    000043A020301
    000043A020104=AA1=AC1=AD200619=AG1=AH1=AI1=AK3049568=BN01
    000043A020200=AA73=AB001=AC3700.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000043A020200=AA73=AB002=AC5500.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000043A020200=AA73=AB003=AC1800.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000043A020200=AA73=AB004=AC5000.000=AD1300=AF13047285=AE200619=AG8005992423=AH10
    000044A020301
    000044A020104=AA1=AC1=AD200619=AG1=AH1=AI1=AK3049569=BN01
    000044A020200=AA73=AB001=AC3700.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    000044A020200=AA73=AB002=AC5500.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    000044A020200=AA73=AB003=AC2500.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    000044A020200=AA73=AB004=AC5000.000=AD1300=AF10008536=AE200619=AG8005992424=AH10
    OLRENDZZZZ
    Example Explained:
    ==============
    Position 1-9 is a "Transactional number".
    Position 10-11 is "Record type".
    Position 12-13 is "Line Item count".
    Four record types exist:
    03 = Location header
    01 = Transactional Header
    02 = Line Item
    OLRENDZZZZ = EoF marker.
    The equal sign "=" is a field separator/delimiter.
    In each delimited field, after the first equal sign in the record, the first two characters represent a field qualifier/field name tag/identifier; only after that does the data begin, continuing until the following delimiter.
    Each record ends with a "CRLF"/'nl'.
    The file is built up incrementally and is not locked; it is only complete once the EoF marker "OLRENDZZZZ" is written by the application as the last record of the file.
    My solution so far:
    =============
    Record Structure: row,*
    Record Sequence: Ascending
    row.fieldNames: field1,field2,field3,etc.......
    row.fieldSeparator: =
    row.endSeparator: 'nl'
    row.keyFieldInStructure: ignore
    ignoreRecordsetName: true
    This brings the file into the Integration Server as XML as follows:
    ============================================
    <?xml version="1.0" encoding="utf-8"?>
    <ns:SAPtoFuelFACS xmlns:ns="urn:engenoil-com:i_fuel_facs_sap">
         <row>
              <field1>000038A020301</field1>
         </row>
         <row>
              <field1>000038A020101</field1>
              <field2>AA1</field2>
              <field3>AC1</field3>
              <field4>AD</field4>
              <field5>AG1</field5>
              <field6>AH1</field6>
              <field7>AI1</field7>
              <field8>AK3049572</field8>
              <field9>BN01</field9>
              <field10>BOMETLSS_ML_STD_30A7</field10>
              <field11>BP0003049572</field11>
         </row>
         <row>
              <field1>000038A020200</field1>
              <field2>AA96</field2>
              <field3>AB001</field3>
              <field4>AC17000.000</field4>
              <field5>AD1200</field5>
              <field6>AF13021537</field6>
              <field7>AE</field7>
              <field8>AG8005992427</field8>
              <field9>AH10</field9>
         </row>
         <row>
              <field1>OLRENDZZZZ</field1>
         </row>
    </ns:SAPtoFuelFACS>
    So far, so good.
    The problem I am having is that I have to check that the EoF marker "OLRENDZZZZ" is present before picking up the file; otherwise the file is not yet complete.
    I have tried a script to rename files in message pre-processing in the channel, but the problem is that the file channel has to be triggered, and the original file mask is necessary for that - yet that mask is also a valid pickup file mask. So it seems to me the only way to do this is either during the content conversion process, so that files not matching the criteria (i.e. where the EoF marker "OLRENDZZZZ" is not present) are not picked up and are ignored until the marker appears, or completely independently with a batch job.
    If someone has a more elegant way to solve this using just the file channel configuration, where everything is pretty much transparent, I would greatly appreciate your assistance.
    Regards
    Willie Hugo

    The problem I am having is that I have to check that the EoF marker "OLRENDZZZZ" is present before picking up the file; otherwise the file is not yet complete.
    I suggest a script.
    Say the files are dropped in FolderA. Have a script transfer a file to FolderB only if it finds the EoF marker in the file. FolderB is then what XI polls, and it will always contain only complete files.
    Hope this sounds good!
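    A minimal sketch of that script in Java (the folder paths and the encoding are assumptions; a shell script would do just as well):
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.List;

    public class MoveCompleteFiles {
        public static void main(String[] args) throws IOException {
            Path in = Paths.get("/data/FolderA");   // where the application writes the files
            Path out = Paths.get("/data/FolderB");  // what the XI file channel actually polls
            try (DirectoryStream<Path> files = Files.newDirectoryStream(in)) {
                for (Path f : files) {
                    if (!Files.isRegularFile(f)) continue;
                    List<String> lines = Files.readAllLines(f, StandardCharsets.ISO_8859_1);
                    // Complete only once the application has written the EoF marker record
                    boolean complete = !lines.isEmpty()
                            && lines.get(lines.size() - 1).startsWith("OLRENDZZZZ");
                    if (complete) {
                        Files.move(f, out.resolve(f.getFileName()),
                                StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        }
    }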

  • XML-IDOC to Plain File: File Receiver Content Conversion Problem with Nested Structures

    Hi all,
    I have an IDOC-XI-File scenario and I have a problem with the file receiver adapter and the content conversion parameters when the final data type has nested structures. Imagine that I have something similar to the following:
    My desire is to get something similar to this in the output file:
    SEGMENT0;HEADER
    SEGMENT1;100
    SEGMENT2;0200000716
    SEGMENT3;1000
    SEGMENT2;0200000717
    SEGMENT3;1000
    SEGMENT3;1001
    SEGMENT3;1002
    But what we are getting is this:
    SEGMENT0;HEADER
    SEGMENT1;100
    SEGMENT2;0200000716;SEGMENT3;1000
    SEGMENT2;0200000717;SEGMENT3;1000;SEGMENT3;1001;SEGMENT3;1002
    The content conversion parameters of the file receiver are as follow:
    Recordset Structure: IDOC,EDI_DC40,E1STATS,Z1HDSTAT,Z1ITSTAT
    IDOC.fieldSeparator: ;
    IDOC.endSeparator: 'nl'
    IDOC.addHeaderLine: 0
    EDI_DC40.fieldSeparator: ;
    EDI_DC40.endSeparator: 'nl'
    EDI_DC40.addHeaderLine: 0
    E1STATS.fieldSeparator: ;
    E1STATS.endSeparator: 'nl'
    E1STATS.addHeaderLine: 0
    Z1HDSTAT.fieldSeparator: ;
    Z1HDSTAT.endSeparator: 'nl'
    Z1HDSTAT.addHeaderLine: 0
    Z1ITSTAT.fieldSeparator: ;
    Z1ITSTAT.endSeparator: 'nl'
    Z1ITSTAT.addHeaderLine: 0
    I've tried to use the parameter beginSeparator='nl' for the segments Z1ITSTAT but it's not working. I haven't been able to find a solution in the other forums. Can anybody help me?
    Thanks in advance
    Roger Allué Vall

    Can you explain it with my example? I can't see what you mean.
    <ZSYSEX01>
      <IDOC BEGIN="1">
        <EDI_DC40 SEGMENT="SEGMENT0">
          <FIELD1>HEADER</FIELD1>
        </EDI_DC40>
        <E1STATS SEGMENT="SEGMENT1">
          <FIELD2>100</FIELD2>
          <Z1HDSTAT SEGMENT="SEGMENT2">
            <FIELD3>0200000716</FIELD3>
            <Z1ITSTAT SEGMENT="SEGMENT3">
              <FIELD4>1000</FIELD4>
            </Z1ITSTAT>
          </Z1HDSTAT>
          <Z1HDSTAT SEGMENT="SEGMENT2">
            <FIELD3>0200000717</FIELD3>
            <Z1ITSTAT SEGMENT="SEGMENT3">
              <FIELD4>1000</FIELD4>
            </Z1ITSTAT>
            <Z1ITSTAT SEGMENT="SEGMENT3">
              <FIELD4>1001</FIELD4>
            </Z1ITSTAT>
            <Z1ITSTAT SEGMENT="SEGMENT3">
              <FIELD4>1002</FIELD4>
            </Z1ITSTAT>
          </Z1HDSTAT>
        </E1STATS>
      </IDOC>
    </ZSYSEX01>
    Regards,

  • ...file Sender content conversion 'lastFieldsOptional'  error

    Hi All,
    I am working on a File Sender content conversion --> flat file to XI.
    It's a fixed-length-field file, and all rows have the same columns.
    090627 344535AFDFG+GBP65433 ASDSSD GFD dFSSGFD 6757532
    090627 344535AFDFG-GBP65433 ASDSSD GFD dFSSGFD 6757532
    090627 344535AFDFG-GBP65433 ASDSSD GFD dFSSGFD 6757532
    090627 344535AFDFG-GBP65433 ASDSSD GFD dFSSGFD 6757532
    090628 344536AFDFG+GBP45434 ASDSSD GFD dFSSGFD 6757532
    090628 344536AFDFG-GBP45434 ASDSSD GFD dFSSGFD 6757532
    **EOF**
    I am successfully able to handle the file if I remove "**EOF**" from the incoming file, but I get a "lastFieldsOptional" error when "**EOF**" is present.
    Parameters-
         Recordset Structure - ROW,*
         Recordsets per message - *
        ROW.fieldFixedLengths - 6,8,20,10,30,1,3,1,11,1,11,6,10,10,20,10,10,2,6,6,2,6,6,3
        ROW.fieldNames - INV_DATE,INV_NO,PAYMENT_REF,CUST_NO,CUST_NAME...etc etc
       ignoreRecordsetName - true
    There are parameters available to ignore the last field (last column) of a row/recordset,
    but how do I ignore the last row of the file?
    Please suggest a parameter to ignore the last row of the file.
    Regs,
    Ansh

    Ansh,
    .lastFieldsOptional is obsolete and you cannot use it. Please see SAP help:
    http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
    The best thing that comes to my mind is to use:
    .keepIncompleteFields as YES
    Also give a try with:
    .missingLastfields as add
    So with this I think your last line in the file will be read into XI. But the value of the last line is **EOF**, which is only 7 characters. Since your first two fixed lengths are 6 and 8, those fields will contain **EOF* and just *. So in the mapping you can use a 'does not start with *' condition, so that your last line will be ignored.
    Note: with this approach you have to make sure your first two columns never start with *. If you have a doubt, then you can use the conditions 'does not equal **EOF*' for the first field and 'does not equal *' for the second field when you map. I hope it makes sense to you.
    Regards,
    ---Satish

  • Issue with Sender File FCC

    Hi Experts,
    I have an issue with a Sender File FCC Adapter. The file being picked up is of type TXT and is tab-separated. The first line contains the field names, and from the next line onwards we have the values for those fields.
    The field names and field values are tab-separated. Even manually inserting a single letter in some field value disrupts the whole setup and alignment of the TXT file, and the Sender File CC is unable to pick up the file from the shared folder. If the first file is erroneous and a correct TXT file is posted after it, the channel fails to pick up the correct file because it keeps trying to pick up the erroneous file first.
    The Error thrown is :
    "Conversion of file content to XML failed at position 0: java.lang.Exception: ERROR converting document line no. 2 according to structure 'ABCD':java.lang.Exception: ERROR in configuration / structure 'ABCD.': More elements in file csv structure than field names specified!"
    I have two questions:
    1. Is there a way to handle such a scenario? For example, the erroneous TXT file gets picked up but throws an error in PI.
    2. Alternatively, can the sender FCC channel pick up the correct files and filter out the erroneous ones?
    Thanks,
    Arkesh

    Hi Arkesh,
    I think you are passing more fields than expected. Please check the parameters defined and send the data accordingly.
    In the Processing Parameters tab of the sender file adapter, you have an option called 'Archive Faulty Source Files'; below that you have the option to enter the 'Directory for Archiving Files with Errors'.
    I hope this helps you.
    Thanks,

  • File sender content conversion 0..unbounded does not pull file

    I am trying to map a flat file using content conversion to this XML structure:
    <Header>
    <Field_ID/>
    <Filler/>
    <Record_type/>
    <File_name/>
    <File_date/>
    <File_time/>
    <Heb_code/>
    <Claim_no/>
    <Cont_no/>
    <Remark/>
    </Header>
    <Claim_Header>
    <Field_ID/>
    <Filler/>
    <Record_type/>
    <Vendor_code/>
    <Garage_no/>
    <Data_code/>
    <Year/>
    <Job_no/>
    <Cont_no/>
    <License_no/>
    <ODO_meter/>
    <Claim_type/>
    <VIN_code/>
    <Claim_open_date/>
    <Claim_fix_date/>
    <N_code/>
    <C_code/>
    <Ejob1/>
    <Ejob2/>
    <Page_no/>
    <Pre_conf_date/>
    <Pre_conf_no/>
    <Part_buy_date/>
    <KM_prev_fix/>
    <Bill_prev_fix/>
    <Material_fault/>
    <Damage_whole/>
    <Damage_code/>
    <Job_amount/>
    <Towing_amount/>
    <Parts_amount/>
    <Hour_rate/>
    <Cont_mark/>
    </Claim_Header>
    <job_row>
    <Field_ID/>
    <Filler/>
    <Record_type/>
    <Vendor_code/>
    <Garage_no/>
    <Data_code/>
    <Year/>
    <Job_no/>
    <Cont_no/>
    <page_no/>
    <line_no/>
    <shaaton_code/>
    <job_finish_code/>
    <amount/>
    <job_time/>
    <row_value/>
    <color_code/>
    </job_row>
    <part_row>
    <Field_ID/>
    <Filler/>
    <Record_type/>
    <Vendor_code/>
    <Garage_no/>
    <Data_code/>
    <Year/>
    <Job_no/>
    <Cont_no/>
    <page_no/>
    <line_no/>
    <material_code/>
    <amount/>
    <unit_price/>
    <row_value/>
    </part_row>
    <remark>
    <Field_ID/>
    <Filler/>
    <Record_type/>
    <Vendor_code/>
    <Garage_no/>
    <Data_code/>
    <Year/>
    <Job_no/>
    <Cont_no/>
    <page_no/>
    <line_no/>
    <remark/>
    </remark>
    <footer>
    <Field_ID/>
    <Filler/>
    <Record_type/>
    <File_name/>
    <File_Record_no/>
    <Claim_no/>
    <Cont_no/>
    <Remark/>
    </footer>
    job_row, part_row and remark are records that occur 0..unbounded.
    When I enter the following in the Recordset Structure: Header,1,Claim_Header,1,Job_Row,1,Part_Row,1,Remark,1,Footer,1
    and supply a file with one record of each, the file is pulled. But when I change the Recordset Structure to:
    Header,1,Claim_Header,1,Job_Row,* ,Part_Row,* ,Remark,*,Footer,1
    the file is not pulled.
    Can anyone tell me what I'm doing wrong?
    Thanks
    Tomer

    Hi,
    Refer this links for FCC.
    Introduction to simple(File-XI-File)scenario and complete walk through for starters(Part1)
    Introduction to simple (File-XI-File)scenario and complete walk through for starters(Part2)
    File Receiver with Content Conversion
    Content Conversion (Pattern/Random content in input file)
    NAB the TAB (File Adapter)
    How to send a flat file with various field lengths and variable substructures to XI 3.0
    File Content Conversion for Unequal Number of Columns
    Content Conversion ( The Key Field Problem )
    http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/content.htm
    Regards,
    Phani

  • File sender content conversion

    Hello,
    I have a TXT file sender with content conversion in SAP PI.
    I defined 6 field names in the content conversion (field1, field2, field3, field4, field5, field6), but the file that I am loading has only three fields (field1, field2, field3). The file is still loaded even though there is a difference between the structure I defined in PI and what is being loaded into PI.
    Is there a way to raise an error without loading the file?
    I read about the count function in the mapping. Isn't there a built-in function that is supposed to load only the structure that is defined?
    Thanks
    Kfir

    If you define 6 fields and only 3 are present in the test file, then you won't get an error (AFAIK).
    However, if you define 3 fields and there are 6 present in the source file, then you will get an error saying that more elements were found than field names specified.
    > Isn't there a built-in function that is supposed to load only the structure that is defined?
    I don't think so... this may, however, be achieved using a custom adapter module.
    Update:
    I just remembered that SAP PI (7.1 and above) comes with a built-in XML validation function... check if that helps in your design.
    Regards,
    Abhishek.
    Edited by: abhishek salvi on Dec 15, 2010 1:12 PM

  • Challenge in File Sender Content Conversion

    Hi,
    I have a real challenge concerning File Sender content conversion in SAP XI.
    My flat file looks like this:
    ##H   300
    MAR   206
    KAS 1
    DAT 01.03.08
    ART 1.129
    KUN 118
    EAN 4.499
    REL 5.0j 16.05.06
    SER             1
    ##E   300
    ##H   301
    DAT 01.03.08
    ZEI 07:54
    KAS 1
    ##E   301
    Each row represents a data field and has two values: the first defines the field name, the second the field value. E.g. 'DAT' stands for Date and has the value 01.03.08 in the example.
    The fields belonging together in one data set are enclosed by a start qualifier (##H) and an end qualifier (##E).
    The value after these qualifiers (i.e. '300' and '301' in the example above) represents a certain record type, e.g. '300' represents Customer data and '301' represents Account data.
    Is it possible with file content conversion to create the following XML structure:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:MT_DATA xmlns:ns0="http://sap.com/xi/account">
         <300>
              <MAR>206</MAR>
              <KAS>1</KAS>
              <DAT>01.03.08</DAT>
              <ART>1.129</ART>
              <KUN>118</KUN>
              <EAN>4.499</EAN>
              <REL>5.0j 16.05.06</REL>
              <SER>1</SER>
         </300>
         <301>
              <DAT>01.03.08</DAT>
              <ZEI>07:54</ZEI>     
              <KAS>1</KAS>
         </301>
    </ns0:MT_DATA>
    If it is not possible with content conversion, what could be an alternative? Adapter Module? MultiMapping?
    I'd really appreciate your input, as I have been working on this for several days without a solution.
    Thanks in advance.
    Alex

    It is not possible through the adapter.
    There is a blog about converting such files to XML using a Java mapping. Search the forum for flat file to IDoc or XML.
    VJ
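    Along the lines VJ suggests, here is a rough standalone Java sketch of such a conversion (file names are placeholders; note that an XML element name may not begin with a digit, so the record type is prefixed with "REC" here instead of producing <300> literally, and real code should additionally XML-escape the values):
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class FlatToXml {
        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get("input.txt"), StandardCharsets.UTF_8);
            StringBuilder xml = new StringBuilder("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
                    + "<ns0:MT_DATA xmlns:ns0=\"http://sap.com/xi/account\">\n");
            String recordType = null;
            for (String line : lines) {
                if (line.startsWith("##H")) {              // start qualifier: open a record
                    recordType = line.substring(3).trim();
                    xml.append("  <REC").append(recordType).append(">\n");
                } else if (line.startsWith("##E")) {       // end qualifier: close the record
                    xml.append("  </REC").append(recordType).append(">\n");
                    recordType = null;
                } else if (recordType != null && line.length() >= 3) {
                    String name = line.substring(0, 3).trim();   // field name, e.g. DAT
                    String value = line.length() > 3 ? line.substring(3).trim() : "";
                    xml.append("    <").append(name).append(">").append(value)
                       .append("</").append(name).append(">\n");
                }
            }
            xml.append("</ns0:MT_DATA>\n");
            Files.write(Paths.get("output.xml"), xml.toString().getBytes(StandardCharsets.UTF_8));
        }
    }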
