Sender File (FCC) - Content of File into single XML Tag

Hi,
Input file:
This is Line1
This is Line2
Expected output from the Sender File adapter FCC as XML:
<Document>
 <File>
  <Content>This is Line1 This is Line2</Content>
 </File>
</Document>
With the configuration below, FCC gives the following output:
File.fieldNames = Content
File.fieldSeparator = '0x1A' (hexadecimal representation of the end-of-file character)
<Document>
 <File>
  <Content>This is Line1</Content>
  <Content>This is Line2</Content>
 </File>
</Document>
How can I read the complete file into a single XML tag?
I am aware of other options (adapter modules or Java mapping), but I want to keep it simple: the File adapter using FCC, or MessageTransformBean if possible.
-SM

FCC doesn't work here, so I wrote a simple Java mapping inside the execute method to convert the content to the required format:
  public void execute(InputStream in, OutputStream out) throws StreamTransformationException {
      StringBuilder sb = new StringBuilder();
      try {
          BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
          String line;
          // collect every line of the source file into one string
          while ((line = reader.readLine()) != null) {
              sb.append(line).append("\r\n");
          }
      } catch (IOException e) {
          throw new StreamTransformationException("Error reading source file", e);
      } finally {
          try { in.close(); } catch (IOException e) { /* ignore */ }
      }
After I had the input string, I formatted it to get the required output as XML (a sketch of that step follows the sample below) ...
<Document>
 <File>
  <Content>This is Line1 This is Line2</Content>
 </File>
</Document>
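A minimal sketch of that formatting step, continuing inside the same execute method; it assumes the fixed Document/File/Content structure shown above and adds escaping for characters that are not allowed in XML text:
      // wrap the collected text in the fixed target structure
      String payload = sb.toString().trim()
              .replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
      String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
              + "<Document><File><Content>" + payload + "</Content></File></Document>";
      try {
          out.write(xml.getBytes("UTF-8"));
          out.flush();
      } catch (IOException e) {
          throw new StreamTransformationException("Error writing target XML", e);
      }
  }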

Similar Messages

  • Merging two xml files into single xml file

    I have to merge two XML files into a single XML file.
    My XML files are:
    input1.xml
    <?xml version="1.0"?>
    <PreVCD>
    <component name="stack">
    <subpath path="stack_environment">
    <variable var="ins" symbol="!" wireonbus="1"/>
    </subpath>
    </component>
    <dump>
    <time t="0">
    <data>
    <symbol sign="!" value="0"/>
    </data>
    </time>
    <time t="10">
    <data>
    <symbol sign="!" value="1"/>
    </data>
    </time>
    <time t="25">
    <data>
    <symbol sign="!" value="0"/>
    </data>
    </time>
    </dump>
    </PreVCD>
    input2.xml
    <?xml version="1.0"?>
    <PreVCD>
    <component name="stack">
    <subpath path="stack_behavior">
    <variable var="i" symbol="@" bussize="1"/>
    </subpath>
    </component>
    <dump>
    <time t="0">
    <data>
    <symbol sign="@" value="0"/>
    </data>
    </time>
    <time t="5">
    <data>
    <symbol sign="@" value="1"/>
    </data>
    </time>
    <time t="10">
    <data>
    <symbol sign="@" value="0"/>
    </data>
    </time>
    <time t="20">
    <data>
    <symbol sign="@" value="1"/>
    </data>
    </time>
    </dump>
    </PreVCD>
    The output should look like:
    <PreVCD>
    <component name="stack">
    <subpath path="stack_behavior">
    <variable var="i" symbol="@" bussize="1"/>
    </subpath>
    <subpath path="stack_environment">
    <variable var="ins" symbol="!" wireonbus="1"/>
    </subpath>
    </component>
    <dump>
    <time t="0">
    <data>
    <symbol sign="@" value="0"/>
    <symbol sign="!" value="0"/>
    </data>
    </time>
    <time t="5">
    <data>
    <symbol sign="@" value="1"/>
    </data>
    </time>
    <time t="10">
    <data>
    <symbol sign="@" value="0"/>
    <symbol sign="!" value="1"/>
    </data>
    </time>
    <time t="20">
    <data>
    <symbol sign="@" value="1"/>
    </data>
    </time>
    <time t="25">
    <data>
    <symbol sign="!" value="0"/>
    </data>
    </time>
    </dump>
    </PreVCD>
    Thanks for any advice.

    Merge XML documents with the XSLT document() function, for example:
    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet
    version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml" />
    <!-- copy the content of the first document and pull in the content of the second,
         so the result keeps a single PreVCD root element -->
    <xsl:template match="/PreVCD">
    <PreVCD>
    <xsl:copy-of select="*"/>
    <xsl:copy-of select="document('input2.xml')/PreVCD/*"/>
    </PreVCD>
    </xsl:template>
    </xsl:stylesheet>

  • Compare two text files in Powershell and if a name is found in both files output content from file 2 to a 3rd text file

    Is it possible, using PowerShell, to compare the contents of two text files line by line and, if a line is found in both, output that line to a third text file?
    Let's say, hypothetically, someone asks us to search a text file named names1.txt, and when a name is found in names1.txt we then pair that with the same name in the second text file, called names2.txt.
    Let's say the names shown below are in names1.txt:
    Bob
    Mike
    George
    Let's say the names and contents shown below are in names2.txt:
    Lisa
    Jordan
    Mike 1112222
    Bob 8675309
    Don
    Joe
    Let's say we want names3.txt to contain the data shown below:
    Mike 1112222
    Bob 8675309
    In VBScript I used search-and-replace commands to get part of the way there, like this:
    Const ForReading = 1, ForWriting = 2
    Set objFSO = CreateObject("Scripting.FileSystemObject")
    Set objFile = objFSO.OpenTextFile("testing.txt", ForReading)
    strText = objFile.ReadAll
    objFile.Close
    strNewText = Replace(strText, "Mike ", "Mike 1112222")
    Set objFile = objFSO.OpenTextFile("testing.txt", ForWriting)
    objFile.WriteLine strNewText
    objFile.Close
    That script works great when you know the name you are looking for and the correct values. Now let's say someone gives you a list of 1000 employees and says to import these names into a list in the correct format, where one sheet has only the correct names and
    the other sheet has lots of extra names, say 200000, and you only need the 1000 you are looking for, in the format from names2.txt.

    Sure,
    Here's a simple one:
    $names1 = "C:\names1.txt"
    $names2 = "C:\names2.txt"
    $names3 = "C:\names3.txt"
    Get-Content $names1 | ForEach-Object {
        $names1_Line = $_
        Get-Content $names2 | Where-Object {$_.Contains($names1_Line)} | Out-File -FilePath $names3 -Append
    }
    This basically just reads the $names1 file line by line, and then reads the $names2 file line by line as well.
    If the line being evaluated from the $names2 file contains the line being evaluated from the $names1 file, then the line from $names2 is output to the $names3 file, appending to what's already there.
    This might need a bit more tinkering to get it to perform faster, etc., depending on your requirements. For example:
    - If either $names1 or $names2 contains a lot of entries (in the region of hundreds), then it will be faster to load the whole content of $names2 into memory rather than opening the file, reading it line by line and closing it again for every single
    line in $names1 (which is how it currently works). See the sketch after this reply for the general idea.
    - Make sure that your comparison is behaving as expected. The .Contains method always does a case-sensitive comparison; this might not be what you are after.
    - You might want to add a condition to ignore blank lines or lines with only spaces, or else they'll also be brought over to $names3.
    Hopefully this will get you started; ask if you have further questions.
    Fausto
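    The in-memory and case-sensitivity points above can be illustrated with a short sketch. It is written in Java (to match the main thread of this page) rather than PowerShell; the file names mirror the ones in the question, and the case-insensitive "line starts with name" rule is only one possible reading of the match:
    import java.io.*;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.List;

    public class MatchNames {
        public static void main(String[] args) throws IOException {
            // load the small list (names1.txt) into memory once ...
            List<String> names = Files.readAllLines(Paths.get("names1.txt"), StandardCharsets.UTF_8);
            // ... then stream the big file (names2.txt) a single time
            try (BufferedReader big = Files.newBufferedReader(Paths.get("names2.txt"), StandardCharsets.UTF_8);
                 BufferedWriter out = Files.newBufferedWriter(Paths.get("names3.txt"), StandardCharsets.UTF_8)) {
                String line;
                while ((line = big.readLine()) != null) {
                    if (line.trim().isEmpty()) continue;               // ignore blank lines
                    for (String name : names) {
                        // case-insensitive "line starts with name" instead of a case-sensitive Contains
                        if (!name.trim().isEmpty()
                                && line.regionMatches(true, 0, name, 0, name.length())) {
                            out.write(line);
                            out.newLine();
                            break;                                     // write each matching line only once
                        }
                    }
                }
            }
        }
    }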

  • Portlet backing file vs content backing file

    Hi,
    Could someone explain the difference between a "portlet backing file" and a "content backing file"? Both are properties available on a portlet.
    Thx
    Emmanuel

    According to the documentation, here is the difference:
    Scoping and Backing Files
    The difference between having a backing file as part of <netuix: portlet backingfile =some_value> or part of <netuix: jspContent backingfile=some_value> is related to scoping.
    For example, if you have the backing file on the portlet itself, you can actually stop the portlet from rendering. If the backing file is at the jspContent level, the portlet portion of the control tree has already run; you use this implementation to run processes that are specifically for the JSP in the portlet.
    See http://download.oracle.com/docs/cd/E13155_01/wlp/docs103/portlets/building.html#wp1077130 for more info.
    Brad

  • Alert Content for text without any XML tag

    Hi All,
    For the XML workflow,
    I need to alert the content, or select the content, for any text that is without an XML tag.
    Please help with my XML job request.
    Regards
    Siraj

    Hi All,
    I am new to XML, still at the learning stage.
    I already asked for and got the code below through the forum, but it alerts the text frame (story) ID only.
    Please guide me on how to:
    1. alert or select the contents, and
    2. alert the page number or select the text frame, etc.
    The code is below:
    #target InDesign
    #include "/Applications/Adobe InDesign CS6/Scripts/Xml Rules/glue code.jsx"
    main();
    exit();
    function main() {
        var myDoc = app.activeDocument;
        var myStories = myDoc.stories.everyItem().getElements();
        var idArray = new Array(0);
        var storyArray = new Array(0);
        // store the IDs of all stories of the document in an array
        for (var j = 0; j < myStories.length; j++) {
            storyArray.push(myStories[j].id);
        }
        // *** here start the XML-Rules ******************************//
        var elements = myDoc.xmlElements;
        var myRuleSet = new Array(
            alleObj = new all(idArray)
        );
        __processRuleSet(elements.item(0), myRuleSet);
        // *** here stop the XML-Rules *******************************//
        // mark every story that is reached by an XML element
        for (var i = 0; i < idArray.length; i++) {
            for (var k = 0; k < storyArray.length; k++) {
                if (idArray[i] == storyArray[k]) {
                    storyArray[k] = -1;
                }
            }
        }
        // alert the IDs of the stories that carry no XML tag
        for (var i = 0; i < storyArray.length; i++) {
            if (storyArray[i] > 0) {
                alert(storyArray[i]);
            }          // if ...
        }          // for()
    }          // main()
    // XML-Rule
    function all(idArray) {
        this.name = "all";
        this.xpath = "//*";
        this.apply = function(myElement, myRuleProcessor) {
            var storyId = myElement.parentStory.id;
            var i = 0, found = false;
            while ((i < idArray.length) && (!found)) {
                if (idArray[i++] == storyId) found = true;
            }
            // store only the IDs of the stories which are not stored up to now
            if (!found) idArray.push(storyId);
            return true;
        };          // this.apply
    }          // all()
    Regards
    Siraj

  • Search multiple folders for files with same name and create single file

     I have a project where I need to search multiple folders for a file name and, when found, append data from each file to a single output file.
     Example:
    Root folder to start the search:
    \\servera\sales
    \\servera\it\salesa\cmmstr.txt
    \\servera\it\salesb\cmmstr.txt
    \\servera\it\salesc\cmmstr.txt
     I need to create a single cmmstr.txt in the root folder. I would like to be able to run this with parameters to pass in the folders to search, the file names to search for, and the single file name to create. I'm going to have at least 10 different files to
    search for and create an output file for. The folders to search
    will be somewhat static.
     Thanks.

    I tested this out on my own seat and I think it should work for you. I wrote it as a function; all you have to do is pass the root folders you want to search and the file you're looking for. The function will then search that directory and all subdirectories
    for that file name. You will also have to provide a file to append to; if that file doesn't exist, the function will create it. If you run into any issues let me know, and the links Mike
    Laughlin posted are a great resource.
    Function Search-Files{
        Param([String[]]$Locations, $SearchFor, $AppendTo)
        Begin{
            If(-Not (Test-Path $AppendTo)){New-Item $AppendTo -ItemType File -Force}
        }
        Process{
            ForEach($Location in $Locations){
                $Files = Get-ChildItem -Path $Location -Filter $SearchFor -Recurse
                ForEach($File in $Files){
                    Get-Content -Path $File.FullName | Out-File $AppendTo -Append
                }
            }
        }
        End{}
    }
    Search-Files -Locations "\\Server1\c$\Temp", "\\Server1\c$\Test1" -SearchFor "Install.cmd" -AppendTo "C:\Temp\Search.log"

  • Font index file's content

    The FontFactory reference mentions "dvb.fontindex". To implement FontFactory, I made a test program that parses the file "dvb.fontindex". The file's content is like this:
    <?xml version="1.0"?>
    <!DOCTYPE fontdirectory PUBLIC "-//DVB//DTD Font Directory 1.0//EN"
    "http://www.dvb.org/mhp/dtd/fontdirectory-1-0.dtd">
    <fontdirectory>
      <font>
        <name>ExampleFontName</name>
        <fontformat>OTF</fontformat>
        <filename>00000.otf</filename>
        <style>BOLD</style>
      </font>
    </fontdirectory>
    But I got the following exception:
    *** Nested Exception:
    java.io.FileNotFoundException: http://www.dvb.org/mhp/dtd/fontdirectory-1-0.dtd
         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
         at java.net.URL.openStream(Unknown Source)
         at net.n3.nanoxml.StdXMLReader.openStream(Unknown Source)
         at net.n3.nanoxml.StdXMLParser.processDocType(Unknown Source)
         at net.n3.nanoxml.StdXMLParser.processSpecialTag(Unknown Source)
         at net.n3.nanoxml.StdXMLParser.scanSomeTag(Unknown Source)
         at net.n3.nanoxml.StdXMLParser.scanData(Unknown Source)
         at net.n3.nanoxml.StdXMLParser.parse(Unknown Source)
    From the message, it seems that the error occurred because "http://www.dvb.org/mhp/dtd/fontdirectory-1-0.dtd" does not exist.
    Has the DTD file really been deleted from the MHP specification?
    I appreciate your help.

    Thanks for replying.
    The problem has been resolved.
    It occurred during the implementation of FontFactory, while testing the parsing of the font index file.
    I overrode StdXMLReader and added the public ID to it, and with that the problem was resolved.

  • Save an Image object into an XML text document

    Hello:
    I have a problem with saving an image. I need to encapsulate it between two XML tags:
    <Image> xxxxxx </Image>
    The "xxxx" must be a String that represents my Image object.
    In J2ME I don't have serialization, and all the examples about "manual serialization" work with primitive types :(
    I've already read that I can call .getRGB() on my Image object and obtain an array of int, which represents each pixel. OK, I can transform the array into a String and write it into my XML text document, but later: how do I obtain my array from that String?
    Thank you very much

    private Image img;
    private int[] rgb = null;
    private int rgbLength; // the length of your pixel array (width * height)
    rgb = new int[rgbLength];
    // ... load the pixel data from the string into this array
    img = Image.createRGBImage(rgb, width, height, false);
    You must know the length of your string, and you must also encapsulate the width and height of the image into the XML.
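    To answer the "how do I obtain my array from that String" part, here is a minimal sketch of one possible encoding; the class and method names are hypothetical and not part of the original answer. Each pixel is written as eight hex digits, using only Integer.toHexString and Integer.parseInt, which are available in CLDC/MIDP:
    public final class PixelCodec {
        // encode the int[] returned by getRGB() as a hex string for the <Image> tag
        public static String pixelsToString(int[] pixels) {
            StringBuffer sb = new StringBuffer(pixels.length * 8);
            for (int i = 0; i < pixels.length; i++) {
                String hex = Integer.toHexString(pixels[i]);
                for (int pad = hex.length(); pad < 8; pad++) {
                    sb.append('0');                      // left-pad so every pixel is exactly 8 characters
                }
                sb.append(hex);
            }
            return sb.toString();
        }

        // rebuild the int[] from the string read back out of the XML document
        public static int[] stringToPixels(String s) {
            int[] pixels = new int[s.length() / 8];
            for (int i = 0; i < pixels.length; i++) {
                // parse the two 4-digit halves separately to avoid overflow above 0x7FFFFFFF
                int hi = Integer.parseInt(s.substring(i * 8, i * 8 + 4), 16);
                int lo = Integer.parseInt(s.substring(i * 8 + 4, i * 8 + 8), 16);
                pixels[i] = (hi << 16) | lo;
            }
            return pixels;
        }
    }
    The width and height still have to travel in the XML as well, for example as attributes on the <Image> tag, so that createRGBImage can be called on the decoded array.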

  • Read contents of file into outputstream and send through socket

    I have a file. Instead of transferring the whole file through the socket to the destination, I will read the contents of the file (big or small) into an output stream and send them to the destination, where the client will receive the data and directly display it....
    Can you suggest any efficient way/methods to achieve that?
    Thanks.

    I don't understand what you think the difference is between those two techniques, but:
    int count;
    byte[] buffer = new byte[16384];
    while ((count = in.read(buffer)) > 0)
      out.write(buffer, 0, count);
    out.close();
    in.close();
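    Wired up end to end, the same copy loop looks like the sketch below; the host, port and file name are placeholders rather than values from the thread:
    import java.io.*;
    import java.net.Socket;

    public class SendFileOverSocket {
        public static void main(String[] args) throws IOException {
            // placeholder host, port and file name, purely for illustration
            try (Socket socket = new Socket("example.host", 9000);
                 InputStream in = new FileInputStream("data.bin");
                 OutputStream out = new BufferedOutputStream(socket.getOutputStream())) {
                byte[] buffer = new byte[16384];
                int count;
                while ((count = in.read(buffer)) != -1) {
                    out.write(buffer, 0, count);   // forward each chunk as soon as it is read
                }
                out.flush();
            }
        }
    }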

  • How to save contents of two different rich edit box into single rtf file in windows 8.1 app

    Developer, I have a requirement to save registration data into an RTF file. This can only be done if I use rich edit boxes to fill in the data. Now, when I write the code for saving the data of the different rich edit boxes into one particular file, it
    only saves the data of the last rich edit box. So please suggest how I can save the contents of the different rich edit boxes together into one RTF file.

    OK Nishant, I just did some quick research. Since an RTF file is unlike a TXT file, we cannot simply write content directly to the RTF.
    You can try to find some third-party code that helps you insert text into an RTF file, or you can load the content from the RTF into a RichEditBox, merge everything into one RichEditBox, and then save it back to the file.
    You can see how to read and save an RTF file in the sample for the
    RichEditBox class
    --James

  • Issue with Sender File FCC

    Hi Experts,
    I have an issue with a Sender File FCC adapter. The file being picked up is of type TXT and it is tab-separated. The first line contains the field names, and from the next line onwards we have the values for those fields.
    The field names and field values are tab-separated. Even inserting a single letter into some field value manually disrupts the whole setup and alignment of the TXT file, and the sender file CC is unable to pick up the file from the shared folder. If the first file is erroneous and a correct TXT file is posted after it, the channel fails to pick up the correct file because it keeps trying to pick up the erroneous file first.
    The error thrown is:
    "Conversion of file content to XML failed at position 0: java.lang.Exception: ERROR converting document line no. 2 according to structure 'ABCD':java.lang.Exception: ERROR in configuration / structure 'ABCD.': More elements in file csv structure than field names specified!"
    I have two questions:
    1. Is there a way to handle such a scenario? For example, the erroneous TXT file gets picked up but throws an error in PI.
    2. Is there an alternative so that the sender FCC channel picks up the correct files and filters out the erroneous ones?
    Thanks,
    Arkesh

    Hi Arkesh,
    I think you are passing more fields than expected. Please check the parameters defined and send the data accordingly.
    In the processing parameters tab of the sender file adapter, you have an option called Archive Faulty Source Files; below it you have the option to enter the "Directory for Archiving Files with Errors".
    I hope this helps you.
    Thanks,

  • Recover a single file in Content Services

    I've been asked to document how we would go about recovering a single file in Content Services. The Trash/Archive angle I'm clear on, but what about an extreme case: let's say that someone created a file, trashed it, emptied the trash, and then it expired from the Archive before the user realized that they needed the file.
    Is it possible, and if so how would one go about getting the file back assuming that there is a full db backup once a day?
    Would enabling BFILE aging help? If so, how?
    On a related note: If archive is set to expire content after 1 month, but BFILE aging is turned off, when does deleted content get purged from the Archive? It's not 1 month, as I have deleted files in the Archive going back to January.

    Archive expiration and BFile aging are orthogonal. BFile pushes the LOBs of documents that are in the archive to bfile. Archive expiration deletes files from the archive after the configured time. If this is not the behaviour you are seeing please file a TAR and they can help check whether your system is configured correctly or if you are running into a bug.
    regards,
    -sancho

  • File-Sender: Verification of file content during file content conversion

    Hello,
    I have a question regarding "file verification" when using the file sender adapter.
    I have a flat file (.csv) that I convert into XML with the file sender adapter and file content conversion.
    In the file there is a column "RecTyp" that is my KeyFieldValue. Possible values for RecTyp are "B", "D", "U" and "T".
    So far so good; all of this works perfectly.
    If (due to an error) there is a value in RecTyp other than the four mentioned above (B, D, U, T), the file adapter currently ignores that row. And here my issue begins:
    I want to change this behavior. In detail, that means I want to check whether there is a value in my KeyFieldValue "RecTyp" other than those specified. If this is the case, an exception should be thrown and the whole message should not be delivered.
    We had the idea of using the module processor for this verification.
    But maybe there are other suggestions, or maybe someone can provide an appropriate module that has already been written, etc.
    Does anyone have an idea?
    Kind regards,
    Tobias

    Hi,
    It is better to pick up the file in a generic way, i.e. row by row, and then do the validation with the help of adapter modules or in the mapping; see the sketch after this reply for the mapping option.
    For more:
    /people/sravya.talanki2/blog/2005/08/16/configuring-generic-sender-file-cc-adapter
    Thanks,
    Moorthy
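    As an illustration of the validate-in-the-mapping option, here is a minimal Java sketch; the allowed values come from the question, while the method name and error text are made up for the example:
    // hypothetical check inside a Java mapping: reject the whole message
    // if any RecTyp value is outside the allowed set B, D, U, T
    private static final String ALLOWED_RECTYP = "BDUT";

    private void checkRecTyp(String recTyp) throws StreamTransformationException {
        if (recTyp == null || recTyp.length() != 1 || ALLOWED_RECTYP.indexOf(recTyp) < 0) {
            throw new StreamTransformationException(
                    "Unexpected RecTyp value '" + recTyp + "' - message rejected");
        }
    }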

  • Archiving faulty Source file not working in Sender Adapter FCC

    Hi Experts,
    I have enabled "Archiving Faulty Source Files" in the sender adapter FCC and entered the directory path accordingly.
    Likewise, I have also set the processing mode to "Archive" and given it a directory path.
    However, when an error is flagged in SXMB_MONI for this interface, I am unable to see any file created in the error folder, but I can see a file with a timestamp being created in the archive folder.
    I have checked the access rights to the directories, so that is not the issue. I searched the forum on this subject and came across the following SAP help note:
    " To archive source files where a permanent error occurred during processing, set the indicator.
    A permanent error occurs either during the conversion of the file content, or in a module in the module processor.
    More information: MessageTransformBean, Migrating Dispatcher Classes
    ○       Specify the Directory for Error Archiving.
    ○       To add a time stamp to the archived file, select Add Time Stamp. "
    What is the definition of a "permanent error"? The error I got in SXMB_MONI is a mapping conversion error, so the file should be archived to the error folder, right?
    Does anyone have such a setting enabled and working?
    Regards
    FNG

    The error I got in SXMB_MONI is a mapping conversion error, so the file should be archived to the error folder, right?
    No, that is not the case. As mentioned on the SAP Help site, for the faulty file to be archived, the error has to occur in the content conversion or in a module in the module processor.
    If the error you are getting is in MONI, it means the file is syntactically correct, and hence the Adapter Engine has picked it up and sent it to the Integration Engine (SXMB_MONI).
    -Supriya.

  • File sender adapter and content conversion with polish character

    We are loading a CSV file with the PI 7.0 file sender adapter using "content conversion". All fields go through EXCEPT a special character, hex '208C' (with a space in front), which looks like "Æ" and is converted to hex 'C28C'.
    We are using code page UTF-8.
    We are using:
    enclosuresign "
    enclosuresignescape ""
    fieldcontentformatting nothing
    enclosureconversion NO
    Hope someone can help

    Hi Bohamo,
    I hope you have set the following for your file sender adapter:
    1. Transfer Mode is set to Binary,
    2. File Type is Text,
    3. Encoding is ISO-8859-1 (for Western European Latin).
    In order to recognize the Polish character, try the following:
    Your sender file, after coming into PI, has the XML encoding declaration 'UTF-8'.
    Write a simple XSLT mapping to change the value of the "encoding" attribute to "ISO-8859-1" in the output XML of the message mapping. Include this XSLT map as the second mapping step in your interface mapping.
    The first step in your interface mapping will be your already existing message mapping.
    An example of the XSL code :
    <?xml version='1.0'?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method='xml' encoding='ISO-8859-1' />
    <xsl:template match="/">
    <xsl:copy-of select="*" />
    </xsl:template>
    </xsl:stylesheet>
    or you can also do a Java mapping if you are comfortable with Java code!
    Cheers,
    Ram.
