Flat file monitoring and parsing

I'm looking for an existing framework or open source project designed to monitor a file and detect when it has changed, or at least periodically check its modified date. When it detects a change in the file, it should parse the file, looking for specified strings. Basically, I want to monitor application log files, parse out specified strings, and load that information into a database.
My first thought was to look at some ETL frameworks. I reviewed a few, but I couldn't find any that search for strings/patterns/regex; they all use fixed-width or delimited patterns for flat file processing.
Does anyone know of any such code/framework I could utilize?
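To make it concrete, here is a minimal JDK-only sketch of the watch-then-scan behavior I'm after (the directory, the pattern, and the println standing in for a database insert are all placeholders):

import java.io.IOException;
import java.nio.file.*;
import java.util.regex.Pattern;

public class LogWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get("/var/log/myapp");           // placeholder directory
        Pattern pattern = Pattern.compile("ERROR|FATAL"); // placeholder search pattern
        WatchService ws = FileSystems.getDefault().newWatchService();
        dir.register(ws, StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = ws.take(); // blocks until something in the directory changes
            for (WatchEvent<?> ev : key.pollEvents()) {
                Path changed = dir.resolve((Path) ev.context());
                for (String line : Files.readAllLines(changed)) {
                    if (pattern.matcher(line).find()) {
                        System.out.println(changed + ": " + line); // DB insert would go here
                    }
                }
            }
            key.reset();
        }
    }
}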
Thanks!!

Thanks for the response. I reviewed Apache Camel, and the file polling will be useful. I also like the integration with ServiceMix; we've been looking for an open source ESB, as we currently use our "home grown" ESB.
It looks like Camel uses the FlatPack code base for parsing. I reviewed it, and it also looks like a useful piece of code, something we could definitely use. However, it doesn't really support string search / pattern matching parsing, at least from what I read; I'm sure I've missed something.
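For the polling piece, a Camel route along these lines looks workable; this is just a sketch (the path, delay, and target endpoint are placeholders), using the file component's idempotentKey option so a file is picked up again when its modified date changes:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class LogPollRoute extends RouteBuilder {
    @Override
    public void configure() {
        // noop=true leaves the files in place; keying idempotency on name + modified
        // timestamp makes Camel re-consume a file whenever it changes.
        from("file:/var/log/myapp?noop=true&delay=5000"
                + "&idempotentKey=${file:name}-${file:modified}")
            .split(body().tokenize("\n"))       // one exchange per line
            .filter(body().contains("ERROR"))   // substring match; swap in a regex check as needed
            .to("log:matched");                 // placeholder endpoint (e.g. a JDBC insert)
    }

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new LogPollRoute());
        main.run(args);
    }
}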
Below are examples of how we want to parse a file, with an example property file entry for each:
• Exact start point, exact end point, save text (inclusive/exclusive)
file_name_1.parse.1=01,455,exclusive
file_name_1.parse.2=422,500,inclusive
• Search string start position, save text for x number of positions
file_name_2.parse.1=the first string,499
• Search string start position, search string end position, save text (inclusive/exclusive)
file_name_2.parse.2=string1,string2,inclusive
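To make the intent concrete, here is a rough Java sketch of how I picture the three rule styles being applied to a line of text (the class and method names are made up for illustration, not an existing API):

public class RuleSketch {
    // Style 1: exact 1-based start/end offsets, keeping or dropping the endpoints.
    static String byOffsets(String line, int start, int end, boolean inclusive) {
        return inclusive ? line.substring(start - 1, end)  // keep both endpoints
                         : line.substring(start, end - 1); // drop both endpoints
    }

    // Style 2: find a search string, then save the next n characters after it.
    static String byStartString(String line, String marker, int n) {
        int i = line.indexOf(marker);
        if (i < 0) return null;
        int from = i + marker.length();
        return line.substring(from, Math.min(from + n, line.length()));
    }

    // Style 3: text between two search strings, keeping or dropping the markers.
    static String betweenStrings(String line, String s1, String s2, boolean inclusive) {
        int i = line.indexOf(s1);
        int j = (i < 0) ? -1 : line.indexOf(s2, i + s1.length());
        if (i < 0 || j < 0) return null;
        return inclusive ? line.substring(i, j + s2.length())
                         : line.substring(i + s1.length(), j);
    }
}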
Any ideas for this??
Thanks!!!

Similar Messages

  • Flat-File Recon And Provision to DB Apps Table

    Hi,
    I have a query about the process for reconciliation with Oracle HRMS: creating/updating users in OIM and then approving provisioning to the target resource.
    The detailed process I am looking for is:
    1. Run recon to get the users from the flat file location.
    2. Create the users in OIM and, based on information from the flat file, trigger the approval process for creating the users in the target resource.
    Please tell me what process I need to follow to configure the flat file GTC, and also how to trigger the approval/provisioning process for the target resource.
    Thanks.

    Hi,
    To configure the flat file GTC, just go through the Admin Console guide; there you will find all the required steps for flat file recon. Or you can refer to the OIM Labs as well.
    Now, to trigger the DB provisioning with approval, you can create an access policy on the ALL USERS group, as all users get created in that particular group. When you create the access policy there is an option to check Approval Required; just check that option and your approval will start triggering automatically.
    On the other end, you need to create a resource object, IT resource, provisioning process, and approval process for the database provisioning.
    Let me know if you have any further queries.
    Regards
    Alabhya Goel

  • Extract work order data from R/3 system in flat file (CSV) and export to BI

    Hi,
    I am new to interfaces.
    I need to extract data about the actual costs and quantities of work assigned to service providers from the SAP system and send it to BI for reporting purposes.
    This interface will extract master data as well as transactional data. The master data extraction will be a full extract, and the transactional data extraction will be a delta extract.
    A custom development will extract the data from the different fields and export it to flat files (CSV format). This program will amalgamate all the flat files created into one big file and export it to the BI system.
    The export of the data to the BI system will be done by the scheduling program Control-M, which will export the data from the SAP system to the BI system in batches. Control-M is expected to run daily at night.
    Please provide the step-by-step process to do this.
    Thanks
    Suchi
    Moderator message: anything else? please work yourself first on your requirement.
    Edited by: Thomas Zloch on Mar 25, 2011 1:21 PM

    Hi Ravi,
    you've got to set up the message type MDMRECEIPT for the IDoc distribution from R/3 to XI. Check chapter 5.2 in the IT configuration guide available from the MDM Documentation Center (http://service.sap.com/installmdm). It describes the necessary steps.
    BR Michael

  • How to Merge Flat file data and DSO Data

    I have a DSO A with more than 20 objects (5 of them are keys), including 0COMP_CODE and 0GL_ACCOUNT. I have a flat file with the following fields: COMPANY CODE, GL ACCOUNT, SUPERVISOR, EXECUTIVE, ANALYST. My requirement is to get a DSO with consolidated data from DSO A and the flat file data. For example, DSO A has company code US01, so when I merge the data into DSO C it should display the supervisor, executive, and analyst data for DSO A as well.
    When I load DSO C from the two DSOs A (ECC) and B (flat file), C gets the data from B but appends the flat file data at the end. So I want to merge both: DSO A should match up with the DSO B (flat file) company code and GL account and should display the remaining fields.
    I think it's clear. Can anyone let me know the solution ASAP?
    Note: We are on BI 7.0, not 3.5.
    Thanks in advance

    I have the following key fields in DSO A:
    0COMP_CODE, 0GL_ACCOUNT, 0CHRT_ACCT, 0FISCVARNT, 0FISCPER, 0AC_DOC_NUMBER, 0ITEM_NUM.
    Here are the total fields in the flat file:
    COMPANY CODE, GL ACCOUNT, SUPERVISOR, ANALYST, EXECUTIVE.
    Here I am taking 0COMP_CODE and 0GL_ACCOUNT as the key fields in DSO B for the flat file.
    There will be 5,000 to 6,000 flat file records every month.
    Thanks

  • Multiple flat file in and multiple target tables

    Hi,
    How can we load multiple flat files into multiple targets?
    I am trying to load data from multiple flat files into their respective tables, but I get an error like:
    VLD-2411: Cannot handle two file structures
    Make sure that only one file structure is used in a SQL*Loader map
    Can anyone help?
    Regards
    Rakesh Kumar

    I do not think you can use multiple SQL*Loader files in one mapping.
    If you want to load data from multiple files, use an external table.

  • XML file input and parsing

    In looking through the documentation for XML handling, I can find no command for something as basic as reading in a local XML file for parsing and processing.
    Can someone recommend a site with example code for manipulating XML files?
    I've done some of this before in Java and Perl. Does ActionScript have similar facilities?

    hsfrey,
    I often put XML files onto the web server and then access them from Flex code. Something like this:
    protected var data_loader:URLLoader;
    protected var xml_to_use:XML;
    protected function load_settings_data():void {
        try {
            var data_URL:String = "./data_settings.xml";
            var data_request:URLRequest = new URLRequest(data_URL);
            data_loader = new URLLoader(data_request);
            data_loader.addEventListener("complete", settings_data_loaded);
            data_loader.addEventListener("ioError", settings_data_error);
        } // try
        catch (error:Error) {
            Alert.show("load_settings_data - error message " + error.toString());
        } // catch
    } // load_settings_data
    protected function settings_data_error(e:IOErrorEvent):void {
        Alert.show("ioError handled in settings_data_error: error " + e.text);
    } // settings_data_error
    protected function settings_data_loaded(event:Event):void {
        xml_to_use = XML(data_loader.data);
        // do whatever you want with xml_to_use
    } // settings_data_loaded
    However, while you are developing on the local machine you will probably want to have some dummy data in a variable:
    [Bindable]
    protected var data_internal:XML =
        <root>
            <stuff>abc</stuff>
        </root>;
    and swap this data in, in the catch part of the first function. Obviously the dummy data has to have the right structure, but you can usually get away with just a few entries rather than the whole lot. Once you are happy, forget the local variable, put back the error handling, and just use the file on your web server, which can be updated whenever you want, independently of the swf application.
    Hope that helps,
    Richard

  • Loading from a Flat file: Binary and Text File

    Hi,
    Does anyone know what the difference is between loading from a binary file and loading from a text file?
    Ramon

    Hi,
    the difference is that text files contain lines (or records) of text, and each line has an end-of-line marker automatically appended whenever you indicate that you have reached the end of a line.
    So when we read from a text file, the end-of-line character(s) for the operating system we are using get converted into a low-value end-of-string indicator, and when we write to a file, the appropriate end-of-line character(s) get written when we indicate the end of a line. This makes reading and writing text files much easier, because the appropriate end-of-line markers are handled for us.
    With a binary file, none of these conversions take place. When we read a binary file, the end-of-line characters for our operating system are read into the string and treated no differently from any other character. When we write to a binary file, the only end-of-line markers written are those we code into the output ourselves, so the output is exactly as we code it, regardless of the operating system we are running on. This makes things much easier when the file does not contain straight text and the end-of-line marker does not separate lines of text but instead appears as part of the non-text data content of the file.
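    To illustrate the distinction, here is a small Java sketch (the file name is a placeholder): the buffered reader strips each end-of-line marker for us, while the raw stream hands us the CR/LF bytes like any other byte.

    import java.io.*;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    public class TextVsBinary {
        public static void main(String[] args) throws IOException {
            Path p = Paths.get("sample.txt"); // placeholder file

            // Text-style reading: readLine() consumes and discards the end-of-line marker.
            try (BufferedReader r = Files.newBufferedReader(p, StandardCharsets.UTF_8)) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println("line: " + line);
                }
            }

            // Binary-style reading: CR (13) and LF (10) arrive as ordinary bytes.
            try (InputStream in = Files.newInputStream(p)) {
                int b;
                while ((b = in.read()) != -1) {
                    if (b == '\r' || b == '\n') {
                        System.out.println("end-of-line byte: " + b);
                    }
                }
            }
        }
    }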

  • Mapping with both source and destination as flat files???

    hi, I have two flat files (large data), for example A and B.
    Let us say:
    A has records of format (characters of size 5, numbers of size 6, characters of size 5)
    B has records of format (characters of size 5, numbers of size 6)
    I have to map these flat files so that the number fields are added together wherever the character fields in both files match, and output a flat file C.
    i.e.
    A (aaaaa111111bbbbb222222ccccc111111
    bbbbb111111fffff666666ddddd333333)
    B (aaaaa222222)
    The output should be (aaaaa333333).
    I have created the flat file module and was able to sample A and B.
    I have also created an external table based on A and B, but the data is not shown in the external table. How do I map this?
    Please guide me.
    Sorry for the long post, and thanks for your time.

    Sounds like your datatypes/settings are incorrect.
    To process a file (let's call it stuff.txt) with fixed-length records such as the following...
    aaaaa111111bbbbb222222ccccc111111bbbbb111111fffff666666ddddd
    ...here is an example Tcl script. There are some variables you have to set up (the flat file module, the Oracle module, the file location, and the project name), all of which should exist before running. It creates the flat file, an external table, and a simple mapping from the external table to a flat file named by the Tcl variable target_file (in the same directory as LOC_SRC_FILES; you can change this, it's just for demo purposes, and it will write a comma-separated file). Hopefully this will get you up and going with your problem...
    # Create the modules etc and set the values below, then run
    set project MY_PROJECT
    set ff_module FF
    set ff_location LOC_SRC_FILES
    set ora_module MM
    set target_file my_target_file
    OMBCC '/$OMB_CURRENT_PROJECT'
    OMBDCC
    OMBCC '$ff_module'
    OMBCREATE FLAT_FILE 'FSTUFF' SET PROPERTIES (DATA_FILE_NAME,IS_DELIMITED, RECORD_LENGTH) VALUES ('stuff.txt',0, '16') ADD RECORD 'FSTUFF'
    OMBALTER FLAT_FILE 'FSTUFF' MODIFY RECORD 'FSTUFF' ADD FIELD 'FIELDA' SET PROPERTIES (DATATYPE,START_POSITION,END_POSITION,MAXIMUM_LENGTH) VALUES ('CHAR',1,5,5)
    OMBALTER FLAT_FILE 'FSTUFF' MODIFY RECORD 'FSTUFF' ADD FIELD 'FIELDB' SET PROPERTIES (DATATYPE,START_POSITION,END_POSITION,MAXIMUM_LENGTH) VALUES ('DECIMAL EXTERNAL',6,11,6)
    OMBALTER FLAT_FILE 'FSTUFF' MODIFY RECORD 'FSTUFF' ADD FIELD 'FIELDC' SET PROPERTIES (DATATYPE,START_POSITION,END_POSITION,MAXIMUM_LENGTH) VALUES ('CHAR',12,16,5)
    OMBCC '../$ora_module'
    OMBCREATE EXTERNAL_TABLE 'FSTUFF_EXT' SET PROPERTIES(LOAD_NULLS_WHEN_MISSING_VALUES,TRIM) VALUES (1, 'RIGHT') SET REFERENCE RECORD 'FSTUFF' OF FLAT_FILE '../$ff_module/FSTUFF' DEFAULT_LOCATION '$ff_location'
    OMBCREATE MAPPING 'FILE_TO_FILE'
    OMBALTER MAPPING 'FILE_TO_FILE' ADD EXTERNAL_TABLE OPERATOR 'SOURCE_STUFF' BOUND TO EXTERNAL_TABLE 'FSTUFF_EXT'
    OMBALTER MAPPING 'FILE_TO_FILE' ADD FLAT_FILE OPERATOR 'TARGET_FILE'
    OMBALTER MAPPING 'FILE_TO_FILE' ADD CONNECTION FROM GROUP 'OUTGRP1' OF OPERATOR 'SOURCE_STUFF' TO GROUP 'INOUTGRP1' OF OPERATOR 'TARGET_FILE'
    OMBALTER MAPPING 'FILE_TO_FILE' SET PROPERTIES (GENERATION_LANGUAGE) VALUES ('PLSQL')
    OMBALTER MAPPING 'FILE_TO_FILE' MODIFY OPERATOR 'TARGET_FILE' SET PROPERTIES (TARGET_DATA_FILE_NAME) VALUES ('$target_file')
    OMBALTER MAPPING 'FILE_TO_FILE' MODIFY OPERATOR 'TARGET_FILE' SET PROPERTIES (TARGET_DATA_FILE_LOCATION) VALUES ('$ff_location')
    You can do all this in the UI, just thought it would be useful as a script for you.
    Cheers
    David

  • Flat Files with BAM(Business Activity Monitoring)

    I am using a flat file in my receive location, and I am getting an XML file in my send location.
    Now I want to check the StartPortTime and EndPortTime with the help of BAM.
    But I am unable to get these values in the BAM portal. Is it possible to track flat files (txt files) through BAM?
    Please provide me with a solution as soon as possible.
    Thanks in advance!

    Hi Prakash,
    This relates to your other forum post on a similar question:
    http://social.msdn.microsoft.com/Forums/en-US/a5fbff38-bff9-412a-bf02-9c0a816e52e2/flat-files-with-bam?forum=biztalkgeneral#4c53c118-fda3-44b7-8c42-5f65e034a813
    There is no difference between a flat-file schema and a standard schema in terms of BAM tracking.
    I understand that you were not able to see tracked data for the flat-file schema in the BAM portal. Can you try to see the data directly in the BAM SQL tables/views rather than through the BAM portal? Try querying the SQL view bam_YourActivityName_AllInstances (replace YourActivityName with your activity name) and order it by the LastModified column. See whether the data from your flat file has been tracked in the SQL tables.
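    If it helps, the same check can be scripted from Java via JDBC; this is only a sketch (the connection string is an assumption about your environment, and YourActivityName is the placeholder from above):

    import java.sql.*;

    public class BamCheck {
        public static void main(String[] args) throws SQLException {
            // Assumed local BAMPrimaryImport database with integrated security.
            String url = "jdbc:sqlserver://localhost;databaseName=BAMPrimaryImport;integratedSecurity=true";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT * FROM bam_YourActivityName_AllInstances ORDER BY LastModified DESC")) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        System.out.print(md.getColumnName(i) + "=" + rs.getString(i) + "  ");
                    }
                    System.out.println();
                }
            }
        }
    }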
    If you are still not able to view the data in the above SQL view:
    Check the event log: look for any warnings; BAM errors are generally logged as warnings.
    Check TDDS_FailedTrackingData: the TDDS_FailedTrackingData table gets populated whenever there is a tracking failure.
    Enable TDDS: make sure you have a BizTalk host/host instance configured to run the Tracking Data Decode Service (TDDS) in your environment, since you can see data from other receive locations.
    Check deployment and restart the host instance/IIS: the only other thing I would suspect is the deployment of the tracking profile for the flat-file schema. Ensure the tracking profile for the flat-file schema has been deployed properly, and ensure you have restarted your host instance or IIS (if it's under an isolated host) for the deployment to take effect.
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • Good way to initialize clusters and parse files into them

    I'm writing a small application to open some sonar .xtf files and look at their data. The format is published, and I have done a crude job of building a cluster and initializing it, then reading the file header and parsing it into the cluster so I can use the data in the headers to read the data packets stored next. Here is how I have done one of the parsing jobs, very crudely:
    Read the header binary in (length defined).
    Build the cluster by creating variables from the input file, then load the file data into the appropriate cluster variables after casting and ordering the bytes properly.
    Surely this is not the best way to do this; perhaps loops controlled by the number of bytes in the parameter to be parsed?
    I'm looking for a smart strategy for taking the definition document, which defines the name of each variable, where it appears in the header (byte offset), and how many bytes it occupies, and translating that information into the cluster definition efficiently.
    Ideas would be appreciated.
    Thanks in advance for your help.
    Attached are a couple of examples of parsing part of the header information.
    Hummer1
    Attachments:
    Channel Header Info Parse.vi 36 KB
    ClusterHeaderInfo.vi 46 KB

    There may be a way to do this using "Flatten to String" and "Unflatten From String".
    One issue is that your strings look fixed-length, while LabVIEW uses variable-length strings where the string length is prepended to the string bytes when the data is flattened. But I think if you yourself insert the bytes 16 and 53 into the string data you read from the file at the appropriate places, you can get LabVIEW to read the data and convert it to your cluster.
    Read the data as a string.
    Insert the bytes listed above at the correct places.
    Pass that through Unflatten From String to convert the string data to a cluster. You define the cluster by feeding a constant that defines the datatype into the function.
    I think if your binary file has a not-too-complicated structure, and you define the data types for each element of the cluster precisely, you can get this to work.
    Play with the attached snippet.
    Attachments:
    Channel%20Header%20Info%20Parse[1].png 66 KB
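    Outside LabVIEW, the definition-document idea (name, byte offset, and length driving the parse) can be sketched like this in Java; the field names and offsets below are hypothetical, not the real XTF layout:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class HeaderParser {
        // One row of the definition document: field name, byte offset, length in bytes.
        record FieldDef(String name, int offset, int length) {}

        // Hypothetical subset of a header definition.
        static final FieldDef[] DEFS = {
            new FieldDef("MagicNumber", 0, 2),
            new FieldDef("HeaderType", 2, 1),
            new FieldDef("NumBytesThisRecord", 14, 4),
        };

        static Map<String, Long> parse(byte[] header) {
            ByteBuffer buf = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
            Map<String, Long> out = new LinkedHashMap<>();
            for (FieldDef d : DEFS) {
                long value = switch (d.length()) {
                    case 1 -> buf.get(d.offset()) & 0xFFL;
                    case 2 -> buf.getShort(d.offset()) & 0xFFFFL;
                    case 4 -> buf.getInt(d.offset()) & 0xFFFFFFFFL;
                    default -> throw new IllegalArgumentException("unsupported length");
                };
                out.put(d.name(), value);
            }
            return out;
        }

        public static void main(String[] args) {
            byte[] header = new byte[64]; // stand-in for header bytes read from the file
            header[0] = 0x7B;
            header[14] = 0x40;
            System.out.println(parse(header));
        }
    }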

  • Generating a flat file

    Hi,
    I have data in my Oracle DB, and I generated it using a procedure, via the Consume Adapter Service and the Oracle DB binding.
    The schema has been created in my Visual Studio project, and it will be my source schema. Now I need to generate a text file in a flat file format. How do I accomplish that?
    I need that text file in a flat file format in order to create my destination schema using the Flat File Schema Wizard.

    Helpful!
    But I still can't figure out how to generate the Oracle data in a flat file format and feed that data into my destination schema using the Flat File Schema Wizard.
    What I'm trying to explain is that I don't have a sample data file; I need to create one based on the data that is available in the Oracle DB.
    Example of data:
    PO1999-10-20
    US Alice Smith 123 Maple Street Mill Valley CA 90952
    US Robert Smith 8 Oak Avenue Old Town PA 95819
    ITEMS,ITEM872-AA|Lawnmower|1|148.95|Confirm this is electric,ITEM926-AA|Baby Monitor|1|39.98|Confirm this is electric
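    Since the sample file only needs to exist once for the Flat File Schema Wizard, one option is a throwaway program that dumps a few rows straight from Oracle; a sketch (connection details and table name are placeholders):

    import java.io.PrintWriter;
    import java.sql.*;

    public class SampleFileDump {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; point this at your Oracle DB.
            String url = "jdbc:oracle:thin:@//localhost:1521/ORCL";
            try (Connection con = DriverManager.getConnection(url, "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM purchase_orders"); // hypothetical table
                 PrintWriter out = new PrintWriter("sample_po.txt")) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder line = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        if (i > 1) line.append('|'); // pipe-delimited, like the ITEMS line above
                        line.append(rs.getString(i));
                    }
                    out.println(line);
                }
            }
        }
    }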

  • Converting Idoc flat file representation to XML

    Hi,
    I went through the guide "How To Convert Between IDoc and XML in XI 3.0". I'm concerned with the second part of the guide, which covers converting from a flat file representation of an IDoc to XML. Can anyone tell me what other design and configuration objects need to be created for this scenario (message types, interfaces, mapping, etc.)?
    Also, which step of the pipeline does the converted XML go to?
    The program also expects a filename; what if I want to pass the file name dynamically? Any ideas on this one?
    Hope someone replies this time... :)
    Thanks for your help and for improving my knowledge.
    Thanks
    Advait Gode.

    Hi Advait,
    Let me give you a small overview of how inbound IDocs work before answering your question.
    The control record is the key to identifying the routing of the IDoc. If you think of IDocs as normal mail (post), the control record is the envelope: it contains information like who the sender is, who the receiver should be, and what the envelope contains (no different from receiving mail/letters by post).
    The data records then contain the actual data; in our example, the actual letter. The status records contain the tracking information.
    Traditionally, SAP's IDoc interface (even before XI came into the picture) has had utility programs to post incoming IDocs into SAP. One such program is RSEINB00, which basically takes the IDoc file name and the port as input. This program opens the file and posts the contents to the SAP IDoc interface (a set of function modules) via the port. The idea is to read the control record and determine the routing and further posting to the application. Note that one piece of information in the control record is the message type/IDoc type, which decides how the data records need to be parsed.
    Now, in the XI scenario, what happens if we receive the data as a flat file? Normally we use the file adapter, and in the file adapter we provide information on how to parse the file. But if the incoming file is flat and in IDoc structure, why configure the file adapter when the parsing capability is already available via RSEINB00 and the standard IDoc interface?
    This is the reason the guide suggests using RSEINB00. Now, your concern is what to do if you need to provide a dynamic filename. My idea is to write a wrapper program: an ABAP program in your Integration Engine that determines the file name (based on logic known to you) and then calls program RSEINB00 using SUBMIT/RETURN. You would then schedule this ABAP program to run in the background on a fixed schedule.
    There are other ways of handling your scenario as well, but with the limited information in your request I will stop here. Post me if you have any more queries.
    KK

  • I can't build an xsd for a flat file (txt) to handle repeating records

    Hi - I have looked at many posts about flat file schemas, and they don't seem to address my question.
    I have a flat file that is \n delimited.
    The pattern of the data is simple:
    record 1 - 90 characters
    record 2 - 20 characters
    records 3 to n - 248 characters each; each of these records is parsed into children by the positional method
    record n+1 - 10 characters
    record n+2 - 20 characters
    So I used the flat file schema generator to generate the schema and built a map mapping the flat file schema to another XML schema. The schema looks OK: record 1, record 2, record n+1, and record n+2 are child elements of the root, and the repeating record section shows up as a node with the parsed children.
    The transform only maps the children of the repeating records. When I test the map, only the first repeating record gets parsed; no repeating happens (the actual flat file has 400+ repeating records). When I run the map in debug mode, the input XML shows that record 1 is read in correctly, record 2 is read in correctly, record 3 is read in and parsed, and then record 4 is treated like record n+1, record 5 is treated like record n+2, and the map thinks it's all finished.
    The repeating part of the schema is below; you can see that I set minOccurs="1" and maxOccurs="unbounded" for the node (INVOICE) and the complexType, but this is not effective syntax. I have looked at how the EDI X12 schemas handle looping, and it is a lot different from what the Flat File Schema Wizard is doing. Is there a good set of rules published that would guide me through this? Otherwise I will basically have to read in the lines from the file and parse them out with functoids, which seems so inelegant. Thanks in advance.
    <xs:element minOccurs="1" maxOccurs="unbounded" name="INVOICE">
      <xs:annotation>
        <xs:appinfo>
          <b:recordInfo structure="positional" sequence_number="3" preserve_delimiter_for_empty_data="true" suppress_trailing_delimiters="false" />
        </xs:appinfo>
      </xs:annotation>
      <xs:complexType>
        <xs:sequence minOccurs="1" maxOccurs="unbounded">
          <xs:annotation>
            <xs:appinfo>
              <groupInfo sequence_number="0" xmlns="http://schemas.microsoft.com/BizTalk/2003" />
            </xs:appinfo>
          </xs:annotation>
          <xs:element name="SegmentType" type="xs:string">
            <xs:annotation>
              <xs:appinfo>
                <b:fieldInfo justification="left" pos_offset="0" pos_length="2" sequence_number="1" />
              </xs:appinfo>
            </xs:annotation>
          </xs:element>
          ... more child elements
    Harold Rosenkrans

    Thanks for responding.
    I gave up trying to parse the repeating record into fields. Instead I just loop through the repeating record section with an <xs:for-each> block in the XSL and use functoids to grab the fields.
    So that works for having the two shorter header records (structure is positional) before the section of repeating records. Now I just have to figure out how to get the schema to handle the two shorter trailer (or footer, whichever you prefer) records after the section of repeating records.
    The error I get in VS when I test the map is below (BTW, I changed the element names in the schema, which is why you don't see INVOICE in the error).
    When I declare the last element as positional with a character length of 10, I get the error:
    Error 18 Native Parsing Error: Unexpected end of stream while looking for:
    '\r\n'
    The current definition being parsed is SAPARData. The stream offset where the error occurred is 1359. The line number where the error occurred is 9. The column where the error occurred is 0.
    So the first record is 77 characters long, the second is 16, the repeating records (5 in the file) are 248 each, and the last record is 10.
    An offset of 1359 puts it 16 characters beyond the last record, so the stream reader is looking for the next repeating record.
    If I try to declare the last element as delimited, I get the error:
    Error 14 Native Parsing Error: Unexpected data found while looking for:
    '\r\n'
    The current definition being parsed is SAPARData. The stream offset where the error occurred is 597. The line number where the error occurred is 5. The column where the error occurred is 0.
    So the first record is 77 characters long, the second is 16, and the repeating records are 248 each.
    A stream offset of 597 puts me 8 characters into the third repeating record; at that point I had only declared one trailer record in the schema, 10 characters long.
    Why is the stream reader stopping at such a weird spot?
    The bottom line is that I still haven't discovered the correct schema to handle the trailer records. Even if I set maxOccurs="4" for the repeat record declaration, it still gets the first error. How does it find an unexpected end of stream looking for \r\n when the maxOccurs on the repeat record declaration should leave the stream pointer in the 5th repeat record?
    I unfortunately don't have any options concerning the file structure.
    I have read a lot of posts concerning the trailer issue, and I have seen a couple that looked interesting; I guess I'll just have to give them a try. The other option is to create a custom pipeline that will only accept file lines of 248 characters.
    That's just disgusting!
    Harold Rosenkrans

  • Error while trying to process multiple Recordsets in Flat file.

    Hi All,
    I am working on a Flat File to Flat File scenario, and my structure is as follows:
    Recordset
         Record1
              Field1
              Field2
         Record2
              Field3
              Field4
         ...
         Record9
              Field5
              Field6
    I am going to receive multiple Recordsets in my input and need to pass them to an output flat file after doing some manipulation in mapping (I am using Java mapping).
    In MONI I am able to see the multiple Recordset XMLs being created, but the message fails in the receiver communication channel with the error
    "Failed to process message content. Reason: Exception in XML Parser (format problem?):'java.lang.Exception: Message processing failed in XML parser: 'Conversion configuration error: Unknown structure '' found in document', probably configuration error in file adapter (XML parser error)' (Software version: 3.0.1)"
    When I pass a single Recordset I am able to see the output, but when I try multiple Recordsets it throws the error.
    Can anybody help me find the root cause of this problem?
    My receiver channel content conversion is as follows:
    RecordsetStructure: Record1,Record2,...,Record9
    Record1.fieldFixedLengths
    Record1.fieldNames
    Record1.endSeparator
    ...and so on up to Record9.
    Regards,
    Jayaram.G

    You might want to check the following things:
    Are you specifying field names and separators for Record1, Record2, ..., Record9?
    Does your occurrence repeat after Record1..Record9 again?
    Change your structure occurrence to match the runtime data you provide.
    Also check whether your Java mapping modifies the structure so that it no longer matches the FCC configuration; you might want to pay attention there too.

  • Flat file via JMS - how to (most easy)?

    Hi experts
    My scenario is R/3 -> XI (technology not decided) -> legacy system (via a flat file structure and JMS).
    I would like to find the best way to do this. The receiving system only accepts a flat file with 150 characters in each line.
    Until now I have tried sending a test message to XI and then doing a pseudo-mapping to an XML structure. The XML is then parsed into a flat file in the JMS adapter using the module localejbs/AF_Modules/MessageTransformBean. This parsing is a rather time-consuming solution, and I'm not sure it is the best one.
    I just saw a How To guide explaining how to make a flat file from IDoc XML using ABAP mapping. I could use this by just making sure my IDoc segments are 150 characters long. The downside is that XI is reduced to an IDoc-XML transformer and hides no complexity from the R/3 system.
    What do you think is the right way to go? Is there an even better solution?
    Kind regards
    Martin

    The mentioned How To guide is:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/46759682-0401-0010-1791-bd1972bc0b8a
    And I should mention that I would also like to receive data from the legacy system (which also sends files with 150-character lines).
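    For the 150-character constraint itself, the padding logic is tiny; here is a Java sketch (file name and sample lines are placeholders):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.List;

    public class FixedWidthWriter {
        // Pad or truncate every line to exactly 150 characters, as the legacy system expects.
        static String toFixedWidth(String s) {
            return String.format("%-150s", s).substring(0, 150);
        }

        public static void main(String[] args) throws IOException {
            List<String> lines = List.of("HEADER 2024", "DETAIL 0001"); // placeholder content
            StringBuilder out = new StringBuilder();
            for (String l : lines) {
                out.append(toFixedWidth(l)).append("\r\n");
            }
            Files.writeString(Paths.get("outbound.dat"), out.toString(), StandardCharsets.US_ASCII);
        }
    }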
