Flat File Parsing

The flat files reside on a Unix server where no other application can be installed.
I have a Linux machine from which I have to access the Unix server's data.
Which is the better way:
1) reading the file directly from the Unix server, or
2) copying the file to the Linux machine and then reading it?
How can this be done in Java?
Now, assuming we are reading the file one way or the other, how do I parse it?
File parsing
Two different file formats:
1) Fixed format
file size: 50 MB
e.g.
a) the data is in fixed-size columns:
0-60  - name
61-70 - department
71-80 - age
b) the data is grouped into sections, each introduced by a section header
Note: here I have to parse out one particular section of the 50 MB file.
e.g.
section-Finance
name               department          age     
Smith               dep1               32
john               dep2               40
turner               dep3               56     
section-marketing
name               department          age
antony               dep1               60     
black               dep2               57
2) CSV file
file size: 15 KB
here the field sizes are not fixed; instead, the fields have a separator
name|department|age     
Smith|dep1|32
john|dep2|40
turner|dep3|56     
antony|dep1|60     
black|dep2|57
here the separator is '|', but it could be any character
Thanks in advance
Meghna
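
A minimal Java sketch of both parses, assuming the file is reachable as a local path (either over a mounted share or after copying it across first, e.g. with scp); the class, method, and file names are illustrative:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.regex.Pattern;

public class FlatFileParser {

    // Returns the given column range, trimmed; tolerates short lines.
    private static String slice(String line, int from, int toExclusive) {
        if (line.length() <= from) return "";
        return line.substring(from, Math.min(toExclusive, line.length())).trim();
    }

    // 1) Fixed format: stream the 50 MB file line by line (constant memory)
    // and print only the rows of the requested section, e.g. "Finance".
    public static void parseSection(String path, String section) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            boolean inSection = false;
            String line;
            while ((line = in.readLine()) != null) {
                if (line.startsWith("section-")) {            // a section header row
                    inSection = line.equals("section-" + section);
                } else if (inSection && !line.startsWith("name") && !line.trim().isEmpty()) {
                    // column layout from the spec above: 0-60 name, 61-70 department, 71-80 age
                    System.out.println(slice(line, 0, 61) + " / " + slice(line, 61, 71) + " / " + slice(line, 71, 81));
                }
            }
        }
    }

    // 2) Delimited format: the separator is a parameter; Pattern.quote protects
    // regex metacharacters such as '|'.
    public static void parseDelimited(String path, String sep) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(Pattern.quote(sep), -1);
                if (f.length >= 3 && !f[0].trim().equals("name")) {  // skip the header row
                    System.out.println(f[0].trim() + " / " + f[1].trim() + " / " + f[2].trim());
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        parseSection("employees.txt", "Finance");   // illustrative file names
        parseDelimited("employees.csv", "|");
    }
}

Streaming line by line keeps memory flat regardless of file size, so reading directly from a mounted path and copying the file first differ only in I/O cost, not in the parsing code.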

XML Convert:
http://industry.java.sun.com/solutions/products/by_product/0,2348,all-4398-99,00.html
Flat File to XML Conversion:
http://www.infoloom.com/gcaconfs/WEB/philadelphia99/lyons.HTM

Similar Messages

  • Header/Detail Flat file parsing

    Hi,
    How do I define a native flat file schema for a file with the following kind of layout, i.e., a master record followed by multiple detail records? I need to keep the correlation between each master and its detail records.
    MASTER_A1,MASTER_A2,MASTER_A3
    DETAIL_A11,DETAIL_A12,DETAIL_A13
    DETAIL_A21,DETAIL_A22,DETAIL_A23
    MASTER_B1,MASTER_B2,MASTER_B3
    DETAIL_B11,DETAIL_B12,DETAIL_B13
    DETAIL_B21,DETAIL_B22,DETAIL_B23
    DETAIL_B31,DETAIL_B32,DETAIL_B33
    DETAIL_B41,DETAIL_B42,DETAIL_B43
    MASTER_C1,MASTER_C2,MASTER_C3
    DETAIL_C11,DETAIL_C12,DETAIL_C13
    DETAIL_C21,DETAIL_C22,DETAIL_C23
    DETAIL_C31,DETAIL_C32,DETAIL_C33

    http://blogs.oracle.com/reynolds/2009/05/mastering_details_with_flat_fi.html
    Regards,
    Anuj
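
    If a hand-rolled fallback outside the native schema is ever acceptable, the correlation can also be kept by a streaming parse that attaches each detail line to the most recent master. A minimal Java sketch (the class name and the MASTER/DETAIL prefix convention are assumptions taken from the sample above):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class MasterDetailParser {
        // Groups each DETAIL_* line under the most recent MASTER_* line,
        // preserving file order, which keeps the master/detail correlation.
        public static Map<String, List<String>> parse(String path) throws IOException {
            Map<String, List<String>> groups = new LinkedHashMap<String, List<String>>();
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                String line;
                String currentMaster = null;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("MASTER")) {
                        currentMaster = line;                    // a new master opens a group
                        groups.put(currentMaster, new ArrayList<String>());
                    } else if (currentMaster != null) {
                        groups.get(currentMaster).add(line);     // a detail of the current master
                    }
                }
            }
            return groups;
        }
    }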

  • Parse multiple files in one flat file?

    Hi all,
    I'm currently working with a flat file with this kind of structure:
    "849000","1","2","3","4"             <- begin of file
    "849HD","","1939","12"              <- header level
    "849D1","39193","313","1"         <- detail level
    "849D2","","description","48,13" <- detail description level
    "849RT","133,1","N4","203"        <- totals level
    The problem is that the file I have to pick up (the map is File => EDI)
    can contain many structures (every structure is an EDI document to generate),
    for example:
    "849000","1","2","3","4"             <- first file
    "849HD","","1939","12"             
    "849D1","39193","313","1"         
    "849D2","","description","48,13" 
    "849RT","133,1","N4","203"         <- end of first file
    "849000","2","","","6"              <- second file
    "849HD","","92","23"              
    "849D1","99","912","1"         
    "849D2","","second description","3,11" 
    "849RT","61","2","UP","102"         <- end of second file
    - How can I parse this file in order to get a nested structure?
    MT_file
    MT_file/row (0..unbounded) <- that would contain the 2 files
    MT_file/row/849000
    MT_file/row/849HD
    MT_file/row/849D1
    MT_file/row/849D2
    MT_file/row/849RT
    Because I believe content conversion (KeyFieldValue) is not effective here, since it will take the key and generate the segments all together without respecting the order.
    Any ideas?
    Thanks!

    I'm not so sure about that; I mean, I think that the KeyField won't be able to understand the hierarchy.
    example:
    "849000","1","2","3","4" <- first file
    "849HD","","1939","12"
    "849D1","39193","313","1"
    "849D2","","description","48,13"
    "849RT","133,1","N4","203" <- end of first file
    "849000","2","","","6" <- second file
    "849HD","","92","23"
    "849D1","99","912","1"
    "849D2","","second description","3,11"
    "849RT","61","2","UP","102" <- end of second file
    with keys:
    849000
    849HD
    849D1
    849D2
    849RT
    I will probably get everything in one group, and in doing that I'll lose the reference to the first and second file,
    resulting in:
    "849000","1","2","3","4" <- first file
    "849000","2","","","6" <- second file
    "849HD","","1939","12"
    "849HD","","92","23"
    "849D1","39193","313","1"
    "849D1","99","912","1"
    "849D2","","description","48,13"
    "849D2","","second description","3,11"
    "849RT","133,1","N4","203" <- end of first file
    "849RT","61","2","UP","102" <- end of second file
    Or am I missing something?
    This is File => EDI, so the channel would be a SENDER channel.
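
    If content conversion really cannot keep the hierarchy, one fallback is a Java mapping (or adapter module) that splits the record stream on the "849000" key before further processing. A rough sketch, with quoting and field handling deliberately ignored:

    import java.util.ArrayList;
    import java.util.List;

    public class StructureSplitter {
        // Splits the file's lines into one group per "849000" header record,
        // preserving the record order inside each group (HD, D1, D2, RT, ...).
        public static List<List<String>> split(List<String> lines) {
            List<List<String>> groups = new ArrayList<List<String>>();
            List<String> current = null;
            for (String line : lines) {
                if (line.startsWith("\"849000\"")) {   // a new structure begins here
                    current = new ArrayList<String>();
                    groups.add(current);
                }
                if (current != null) {
                    current.add(line);
                }
            }
            return groups;
        }
    }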

  • Flat file monitoring and parsing

    I'm looking for an existing framework/open source project, which is designed to monitor a file to detect when it has changed, or just periodically check the modified date. If it detects a change in the file, then parse the file, looking for specified strings. Basically I want to monitor application log files, parsing out specified strings, then loading this information into a database.
    My first thought was to look at some ETL frameworks. I looked at a few, but I couldn't find any that look for strings/patterns/regex; they all use fixed-width or delimited patterns for flat file processing.
    Does anyone know of any such code/framework I could utilize?
    Thanks!!

    Thanks for the response. I reviewed Apache Camel, and the file polling will be useful. I also like the integration with ServiceMix; we've been looking for an open source ESB, as we currently use our "home-grown" ESB.
    It looks like Camel uses the FlatPack code base for parsing. I reviewed this and it also looks like a useful piece of code, something we could definitely use. However, it doesn't really support string-search/pattern-matching parsing, at least from what I read; I'm sure I've missed something.
    Below are examples of how we want to parse a file, each with an example property file entry:
    • Exact start point, exact end point, save text (inclusive/exclusive)
    file_name_1.parse.1=01,455,exclusive
    file_name_1.parse.2=422,500,inclusive
    • Search string start position, save text for x number of positions
    file_name_2.parse.1=the first string,499
    • Search string start position, search string end position, save text (inclusive/exclusive)
    file_name_2.parse.2=string1,string2,inclusive
    Any ideas for this??
    Thanks!!!
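
    If nothing off the shelf fits, both halves of the requirement are in the JDK: java.nio.file.WatchService for the change detection and java.util.regex for the pattern extraction. A minimal sketch, with the directory, file handling, and pattern as placeholder assumptions (a real version would track a byte offset per file instead of re-reading it):

    import java.nio.file.FileSystems;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchEvent;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LogFileMonitor {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("/var/log/myapp");                   // placeholder directory
            Pattern pattern = Pattern.compile("ERROR\\s+(\\S+)");     // placeholder search pattern
            WatchService watcher = FileSystems.getDefault().newWatchService();
            dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
            while (true) {
                WatchKey key = watcher.take();                        // blocks until something changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path changed = dir.resolve((Path) event.context());
                    for (String line : Files.readAllLines(changed)) { // naive re-scan of the file
                        Matcher m = pattern.matcher(line);
                        if (m.find()) {
                            System.out.println("matched: " + m.group(1)); // database load would go here
                        }
                    }
                }
                key.reset();
            }
        }
    }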

  • How to parse a flat file with C#

    I need to parse a flat file with data that looks like
    01,1235,555
    02,2135,558
    16,156,15614
    16,000,000
    You get the idea. Anyway, I'd like to just use a derived column and move on, except I need to put a line number on each row as it comes by, so the end result looks like:
    1,01,1235,555
    2,02,2135,558
    3,16,156,15614
    4,16,000,000
    I'm trying to do this with a script transformation, but I can't seem to get the hang of the syntax. I've tried looking at various examples, but everybody seems to prefer VB and I'd like to keep all of my packages in C#. I've set up my input and my output columns; I just
    need to figure out how to write the code that says something like:
    row_number = 1
    line_number = row_number
    record_type = input.split.get the second data element
    data_point_1 = input.split.get the third data element
    row_number = row_number ++

    /* Microsoft SQL Server Integration Services Script Component
    * Write scripts using Microsoft Visual C# 2008.
    * ScriptMain is the entry point class of the script.*/
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
    using Microsoft.SqlServer.Dts.Runtime.Wrapper;

    [Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
    public class ScriptMain : UserComponent
    {
        private int rowCounter = 0;

        // Method that runs once before the rows start to pass
        public override void PreExecute()
        {
            base.PreExecute();
            // Lock the SSIS variable for reading
            VariableDispenser variableDispenser = (VariableDispenser)this.VariableDispenser;
            variableDispenser.LockForRead("User::MaxID");
            IDTSVariables100 vars;
            variableDispenser.GetVariables(out vars);
            // Seed the internal counter with the value of the SSIS variable
            rowCounter = (int)vars["User::MaxID"].Value;
            // Unlock the variable
            vars.Unlock();
        }

        // Method that runs once for each record in your data flow
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Increment the counter and fill the new column
            rowCounter++;
            Row.MaxID = rowCounter;
        }
    }
    Here is a script to get an incremental ID. In the script's ReadWriteVariables, add the "User::MaxID" variable to get the last number. On the Inputs and Outputs tab, create an output column; here in the code it is MaxID, with a numeric data type.

  • How to load a flat file into BW-BPS using Web Browser

    Hello, I have a problem with the "How-to" paper. I want to upload an Excel CSV file, but the paper only describes a txt file upload. Can anybody help me? Thanks!

    You need to parse the line coming in from the flat file...
    You can do this with generic types in your flat file structure (string). 
    Then you loop through the table of strings that is your flat file and parse each string so that it breaks up the line at each comma. There is an ABAP command called SPLIT; the syntax is as follows:
    SPLIT dobj AT sep INTO
          { {result1 result2 ...} | {TABLE result_tab} }
          [IN {BYTE|CHARACTER} MODE].
    Regards,
    Zane

  • Reading Data from a flat file in UCCX

    We have UCCX 7.0 and I need to do a lookup on a flat file or csv file from a script.
    Can that be done or does it have to be from a database?

    Anything's possible if you want to write a custom Java class that parses the file. Out of the box you need to use XML and XPath or an ODBC connection though.
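
    For the custom Java class route, the parse itself can stay very small. A hedged sketch of a keyed lookup against a comma-separated file (the path and column layout are assumptions):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CsvLookup {
        // Returns the first row whose first column equals the key, or null if absent.
        public static String[] lookup(String path, String key) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split(",", -1);
                    if (fields.length > 0 && fields[0].trim().equals(key)) {
                        return fields;
                    }
                }
            }
            return null;
        }
    }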

  • Storing Persistent Data In A Flat File -- Design Ideas?

    I have an application that needs to store a small amount of persistent data. I want to store it in a flat config file, with categories and key-value pairs. The flat file might look something like this:
    John:
    hair=green
    weight=170
    Sally:
    eyes=blue
    weight=110
    and so on. My application will initialize a custom class with the data stored in the file, and then work with that class. When updates are made to the data as the application runs, the file will need to be changed too (so that changes are preserved even if the program crashes, for example).
    What is the best way to implement this? Does Java have any built-in classes that allow for something like this? I was thinking about Serializable (which I've never used), but I want the file to be human-readable and editable. How about using RandomAccessFile? I'm guessing there is a better way....
    Thanks for any advice,
    John

    I'd use an XML structure; classes for XML storing/parsing are part of the API, and the structure of XML is flexible enough and human-readable.
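
    One built-in way to get both API support and a human-readable, editable file is java.util.Properties with its XML form. A small sketch, using a "category.key" prefix as a stand-in for the categories, since Properties itself is flat (file name and keys are illustrative):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class ConfigStore {
        public static void main(String[] args) throws IOException {
            Properties props = new Properties();
            // Flat key-value pairs; the "category.key" prefix stands in for sections.
            props.setProperty("John.hair", "green");
            props.setProperty("John.weight", "170");
            props.setProperty("Sally.eyes", "blue");
            props.setProperty("Sally.weight", "110");
            // Human-readable, editable XML on disk; rewrite it after every change.
            try (FileOutputStream out = new FileOutputStream("config.xml")) {
                props.storeToXML(out, "application settings");
            }
            // Reading it back at startup
            Properties loaded = new Properties();
            try (FileInputStream in = new FileInputStream("config.xml")) {
                loaded.loadFromXML(in);
            }
            System.out.println(loaded.getProperty("John.hair")); // prints "green"
        }
    }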

  • Uploading Data from a Flat File

    Hi
    I am trying to upload data from a flat file to the MDS. I have a few questions on the process.
    a) XI would be the interface, and one end would be a file adapter with the flat file format. On the other end, which interface should I use - ABABusinessPartnerIn or MDMBusinessPartnerIn? I do not understand the differences between them and where each should be used.
    b) At the moment, I want to map the data to the standard object type Business Partner BUS1006. This also has a staging area already defined in the system. Can I view the data which is imported into the staging area? How?
    c) The structure (or data) that I wish to upload has few fields, and does not map to the BP structure easily. In such a scenario, does it make sense to
    (i) create a new object type, or
    (ii) create a new Business Partner type with the appropriate fields only?
    What I need to know is whether either of these options is feasible, and what the pros & cons of each are in terms of effort, skill set, and interaction with SAP Development.
    Kindly do reply if you have any answers to these questions.
    Regards,
    Gaurav

    Hi Markus,
    Thanks for your inputs.
    I have tried uploading some data from a flat file to the Business Partner object type BUS1006. I initially got some ABAP parsing errors on the MDM side, but after correcting those, I find my message triggers a short dump in the method SET_OBJECTKEYS, which does not find any keys in the BP structure that has been created.
    Q1 - How do I get around this problem? Is it necessary for me to specify keys in the PartyID node of the interface ABABusinessPartnerIn, or is it something else?
    Q2 - How is this general process supposed to work? I would assume that for staging, I would get incomplete master data or data from flat files, which need not necessarily contain keys. The aim is to use the matching strategies in the CI to identify duplicates and consolidate them.
    Thanks in advance for your reply.
    Regards,
    Gaurav

  • Error while validating flat file schema

    Hi,
    I have a flat file schema which is character-delimited on '|'. It has Details and a Trailer. The Trailer starts with a "Tag Identifier" of 'T' and has one value, "Total Records", i.e. T|3. Details are likewise delimited on '|' but have no "Tag Identifier".
    Details have minOccurs of 0.
    Schema:
    Eg of the valid values:
    21788367|EN
    21848269|EN
    22004225|EN
    T|3
    This should also be the valid value (with no details and only trailer):
    T|0
    But when I try to validate this, I get the error:
    Unexpected data found while looking for:
    '\r\n'
    The current definition being parsed is User_details. The stream offset where the error occurred is 2. The line number where the error occurred is 1. The column where the error occurred is 2.
    I think it is trying to find a Details record as well. What setting do I need to change?
    Please help.
    Kunal G

    This is my schema:
    <?xml version="1.0" encoding="utf-16"?>
    <xs:schema xmlns="http://BTS_POC.Users" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" targetNamespace="http://BTS_POC.Users" xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:annotation>
        <xs:appinfo>
          <schemaEditorExtension:schemaInfo namespaceAlias="b" extensionClass="Microsoft.BizTalk.FlatFileExtension.FlatFileExtension" standardName="Flat File" xmlns:schemaEditorExtension="http://schemas.microsoft.com/BizTalk/2003/SchemaEditorExtensions" />
          <b:schemaInfo standard="Flat File" codepage="65001" default_pad_char=" " pad_char_type="char" count_positions_by_byte="false" parser_optimization="speed" lookahead_depth="3" suppress_empty_nodes="false" generate_empty_nodes="true" allow_early_termination="false" early_terminate_optional_fields="false" allow_message_breakup_of_infix_root="false" compile_parse_tables="false" root_reference="Users_Root" />
        </xs:appinfo>
      </xs:annotation>
      <xs:element name="Users_Root">
        <xs:annotation>
          <xs:appinfo>
            <b:recordInfo structure="delimited" child_delimiter_type="hex" child_delimiter="0xD 0xA" child_order="infix" sequence_number="1" preserve_delimiter_for_empty_data="true" suppress_trailing_delimiters="false" />
          </xs:appinfo>
        </xs:annotation>
        <xs:complexType>
          <xs:sequence minOccurs="0">
            <xs:annotation>
              <xs:appinfo>
                <groupInfo sequence_number="0" xmlns="http://schemas.microsoft.com/BizTalk/2003" />
              </xs:appinfo>
            </xs:annotation>
            <xs:element minOccurs="0" maxOccurs="unbounded" name="Users_Detail">
              <xs:annotation>
                <xs:appinfo>
                  <b:recordInfo structure="delimited" child_delimiter_type="char" child_delimiter="|" child_order="infix" sequence_number="1" preserve_delimiter_for_empty_data="true" suppress_trailing_delimiters="false" />
                </xs:appinfo>
              </xs:annotation>
              <xs:complexType>
                <xs:sequence>
                  <xs:annotation>
                    <xs:appinfo>
                      <groupInfo sequence_number="0" xmlns="http://schemas.microsoft.com/BizTalk/2003" />
                    </xs:appinfo>
                  </xs:annotation>
                  <xs:element name="Name" type="xs:string">
                    <xs:annotation>
                      <xs:appinfo>
                        <b:fieldInfo justification="left" sequence_number="1" />
                      </xs:appinfo>
                    </xs:annotation>
                  </xs:element>
                  <xs:element name="Id" type="xs:string">
                    <xs:annotation>
                      <xs:appinfo>
                        <b:fieldInfo justification="left" sequence_number="2" />
                      </xs:appinfo>
                    </xs:annotation>
                  </xs:element>
                </xs:sequence>
              </xs:complexType>
            </xs:element>
            <xs:element name="Users_Trailer">
              <xs:annotation>
                <xs:appinfo>
                  <b:recordInfo tag_name="T" structure="delimited" child_delimiter_type="char" child_delimiter="|" child_order="prefix" sequence_number="2" preserve_delimiter_for_empty_data="true" suppress_trailing_delimiters="false" />
                </xs:appinfo>
              </xs:annotation>
              <xs:complexType>
                <xs:sequence>
                  <xs:annotation>
                    <xs:appinfo>
                      <groupInfo sequence_number="0" xmlns="http://schemas.microsoft.com/BizTalk/2003" />
                    </xs:appinfo>
                  </xs:annotation>
                  <xs:element name="TotalNumber" type="xs:string">
                    <xs:annotation>
                      <xs:appinfo>
                        <b:fieldInfo justification="left" sequence_number="1" />
                      </xs:appinfo>
                    </xs:annotation>
                  </xs:element>
                </xs:sequence>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    Kunal G

  • Segment_Unknown error encountered while running flat file recon

    When we try to run the 'SAP HRMS User Recon' scheduled task using a flat file generated from the SAP HR system, we get the error 'com.sap.conn.jco.AbapException: (126) SEGMENT_UNKNOWN: SEGMENT_UNKNOWN'.
    The complete stack trace is below:
    [2013-08-21T05:34:16.480+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] ====================================================
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] oracle.iam.connectors.sap.common.parser.HRMDAParser : getSchema() : SEGMENT_UNKNOWN
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] ====================================================[[
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] ================= Start Stack Trace =======================
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] oracle.iam.connectors.sap.common.parser.HRMDAParser : getSchema()
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] SEGMENT_UNKNOWN
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] Description : SEGMENT_UNKNOWN
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] com.sap.conn.jco.AbapException: (126) SEGMENT_UNKNOWN: SEGMENT_UNKNOWN Message 257 of class EA type E, Par[1]: Z1P9200, Par[2]: 731[[
    at com.sap.conn.jco.rt.MiddlewareJavaRfc$JavaRfcClient.execute(MiddlewareJavaRfc.java:1807)
    at com.sap.conn.jco.rt.ClientConnection.execute(ClientConnection.java:1120)
    at com.sap.conn.jco.rt.ClientConnection.execute(ClientConnection.java:953)
    at com.sap.conn.jco.rt.RfcDestination.execute(RfcDestination.java:1191)
    at com.sap.conn.jco.rt.RfcDestination.execute(RfcDestination.java:1162)
    at com.sap.conn.jco.rt.AbapFunction.execute(AbapFunction.java:302)
    at oracle.iam.connectors.sap.common.parser.HRMDAParser.getSchema(Unknown Source)
    at oracle.iam.connectors.sap.hrms.tasks.SAPHRMSUserRecon.execute(Unknown Source)
    at com.thortech.xl.scheduler.tasks.SchedulerBaseTask.execute(SchedulerBaseTask.java:384)
    at oracle.iam.scheduler.vo.TaskSupport.executeJob(TaskSupport.java:145)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:611)
    at oracle.iam.scheduler.impl.quartz.QuartzJob.execute(QuartzJob.java:196)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:529)
    [2013-08-21T05:34:16.485+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] ================= End Stack Trace =======================
    [2013-08-21T05:34:16.488+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] ====================================================
    [2013-08-21T05:34:16.488+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] oracle.iam.connectors.sap.hrms.tasks.SAPHRMSUserRecon : execute() :
    [2013-08-21T05:34:16.488+02:00] [oim] [ERROR] [] [OIMCP.SAPH] [tid: OIMQuartzScheduler_Worker-5] [userId: oiminternal] [ecid: 0000K0GBkw2EWN05zzP5iW1Hvuam000002,0] [APP: oim#11.1.1.3.0] ====================================================[[

  • Converting Idoc flat file representation to XML

    Hi ,
    I went through the guide How To Convert Between IDoc and XML in XI 3.0. I'm concerned with the second part of the guide, which describes converting from the flat file representation of an IDoc to XML. Can anyone tell me what other design and configuration objects need to be created for this scenario (message types, interfaces, mapping, etc.)?
    Also, which step of the pipeline does the converted XML go to?
    The program also expects a filename; what if I want to pass the file name dynamically? Any ideas on this one?
    Hope someone replies this time.........:)
    Thanks for your help and for improving my knowledge.
    Thanks
    Advait Gode.

    Hi Advait,
    Let me give you a small overview of how inbound IDOCs work before answering your question.
    The control record is the key to identifying the routing of the IDOC. If you think of IDOCs as normal mail (post), the control record is the envelope. It contains information like who the sender is, who the receiver should be, and what the envelope contains (no different from receiving mail/letters by post).
    Then the data records contain the actual data, which in our example would be the actual letter. The status records contain the tracking information.
    Traditionally, SAP's IDOC interface (even before XI came into the picture) has had utility programs to post incoming IDOCs into SAP. One such program is RSEINB00, which basically takes the IDOC file name and the port as input. This program opens the file and posts the contents to the SAP IDOC interface (which is a set of function modules) via the port. The idea is to read the control record and determine the routing and further posting to the application. Note that one piece of information in the control record is the message type/IDoc type, which decides how the data records need to be parsed.
    Now, in an XI scenario, what happens if we receive data as a flat file? Normally, we use the flat file adapter, and in the file adapter we provide information on how to parse the file. But if the incoming file is flat and in IDOC structure, why would we have to configure the file adapter when the parsing capability is already available via RSEINB00 and the standard IDOC interface?
    This is the reason the guide suggests you use RSEINB00. Now, your concern is what to do if you need to provide a dynamic filename. My idea is to write a wrapper program. This would be an ABAP program in your integration engine. This program would determine the file name (based on a logic known to you) and then call program RSEINB00 using SUBMIT/RETURN. You would then schedule this ABAP program to run in the background on a fixed schedule.
    There are other ways of handling your scenario as well, but given the limited information in your request, I will stop here. Post if you have any more queries.
    KK

  • J2SE XML to Flat File Content Conversion

    Hi
    I've currently got a scenario which sends a flat file to the integration server; it gets mapped and sent to a receiver adapter on a deployed J2SE engine.
    I'm now trying to convert the XI-XML structured file back to a flat file on the J2SE side (the same flat file format it originally had).
    My original flat file looks like this -
    477
    477=AA1
    My xml file looks like this -
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:ResultMessage xmlns:ns0="urn:xxxx-com:a_test_j2se_filetofile">
    <Item>
              <field1>477</field1>
    </Item>
    <Item>
              <field1>477</field1>
              <field2>AA1</field2>
    </Item>
    </ns0:ResultMessage>
    I am using these content conversion parameters:
    xml.addHeaderLine=0
    xml.fieldSeparator==
    xml.endSeparator='nl'
    I get this error on the integration engine (sxmb_moni):
    Error while sending by HTTP (error code: 500, error text: Internal Server Error:java.lang.NullPointerException) (See attachment HTMLError for details)
    and the J2SE adapter log says this:
    17:16:32 (4120): Message "13b9d644-54c9-4ffb-0c40-db4c14458d77" of type "application/xml", kind "B" received
    17:16:32 (4124): Parsing XML message
    17:16:32 (4131): ERROR: Message processing failed with "java.lang.NullPointerException"
    What am I missing?

    So, now... I did a test configuration in XI and sent your test-payload...it worked.
    The J2SE adapter configuration:
    File adapter java class
    classname=com.sap.aii.messaging.adapter.ModuleXMB2File
    version=30
    mode=XMB2FILEWITHCONVERSION
    #Address for XMB endpoint -
    XI.httpPort=8111
    XI.httpService=/file/receiver
    #File Adapter specific parameters -
    file.createDir=1
    file.targetDir=c:/transfer/inbound
    file.targetFilename=xmboutput.txt
    #file.writeMode=append
    #file.writeMode=overwrite
    file.writeMode=addCounter
    file.counterMode=immediately
    #file.counterMode=afterFirst
    file.counterSeparator=_
    file.counterFormat=00000
    file.counterStep=1
    #File Content Conversion specific parameters -
    xml.addHeaderLine=0
    xml.fieldSeparator==
    xml.endSeparator='nl'
    And here the configuration of the receiver communication channel in the integration directory:
    Adapter Type: XI
    Receiver
    Transport-Protocol: HTTP 1.0
    Message-Protocol: XI 3.0
    Adapter-Engine: Integration Server
    Adressing-Type: URL Address
    Target Host: <yourJ2SEip>
    Service Number: 8111
    Path Prefix: http://<yourJ2SEip>:8111/file/receiver
    Authentication Data
    Logon data for non-SAP systems
    User
    Password
    That's it... I sent your payload and got the desired result:
    477
    477=AA1
    Regards,
    Heinrich

  • I can't build an xsd for a flat file (txt) to handle repeating records

    Hi - I have looked at many posts about flat file schemas and they don't seem to address my question.
    I have a flat file that is \n delimited.
    The pattern of the data is simple:
    record1 - 90 characters
    record2 - 20 characters
    record3 - n 248-character records - each of these records is parsed into children by the positional method
    record n+1 - 10 characters
    record n+2 - 20 characters
    So I used the flat file schema generator to generate the schema and built a map mapping the flat file schema to another XML schema. The schema looks OK - record1, record2, record n+1, and record n+2 are child elements of the root, and the repeating record
    section shows up as a node with the parsed children.
    The transform only maps the children of the repeating records. When I test the map, only the first repeating record gets parsed. No repeating happens (the actual flat file has 400+ repeating records). When I run the map in debug mode, the input
    xml shows that record1 is read in correctly, record2 is read in correctly, record3 is read in and parsed, but record4 is treated like record n+1 and record5 is treated like record n+2, and the map thinks it's all finished.
    The repeating part of the schema is below; you can see that I set minOccurs="1" and maxOccurs="unbounded" for the node (INVOICE) and the complexType, but this is not effective syntax. I have looked at how the EDI X12 schemas look and how they handle
    looping, and it is a lot different from what the Flat File Schema Wizard is doing. Is there a good set of rules published that would guide me through this? Otherwise I will basically have to read in the lines from the file and parse them out with functoids -
    which seems so inelegant. Thanks in advance.
    <xs:element minOccurs="1" maxOccurs="unbounded" name="INVOICE">
      <xs:annotation>
        <xs:appinfo>
          <b:recordInfo structure="positional" sequence_number="3" preserve_delimiter_for_empty_data="true" suppress_trailing_delimiters="false" />
        </xs:appinfo>
      </xs:annotation>
      <xs:complexType>
        <xs:sequence minOccurs="1" maxOccurs="unbounded">
          <xs:annotation>
            <xs:appinfo>
              <groupInfo sequence_number="0" xmlns="http://schemas.microsoft.com/BizTalk/2003" />
            </xs:appinfo>
          </xs:annotation>
          <xs:element name="SegmentType" type="xs:string">
            <xs:annotation>
              <xs:appinfo>
                <b:fieldInfo justification="left" pos_offset="0" pos_length="2" sequence_number="1" />
              </xs:appinfo>
            </xs:annotation>
          </xs:element>
          ....... more children elements
    Harold Rosenkrans

    Thanks for responding
    I gave up trying to parse the repeating record into fields. Instead I just loop through the repeating record section with an <xs:for-each> block in the xsl and use functoids to grab the fields.
    So that works for having the two shorter header records (structure is positional) before the section of repeating records. Now I just have to figure out how to get the schema to handle the two shorter trailer (or footer, whichever you prefer) records after
    the section of repeating records.
    The error I get in VS when I test the map is [BTW, I changed the element names in the schema, which is why you don't see INVOICE in the error]:
    When I declare the last element as being positional with a character length of 10, I get the error:
    Error 18 Native Parsing Error: Unexpected end of stream while looking for:
    '\r\n'
    The current definition being parsed is SAPARData. The stream offset where the error occurred is 1359. The line number where the error occurred is 9. The column where the error occurred is 0.
    So the first record is 77 chars in length, the second is 16 chars, the repeating records (5 in the file) are 248 chars each, and the last record is 10 chars,
    so an offset of 1359 puts it beyond the last record by 16 characters - meaning the stream reader is looking for the next repeating record.
    If I try to declare the last element as delimited, I get the error:
    Error 14 Native Parsing Error: Unexpected data found while looking for:
    '\r\n'
    The current definition being parsed is SAPARData. The stream offset where the error occurred is 597. The line number where the error occurred is 5. The column where the error occurred is 0.
    So the first record is 77 chars in length, the second is 16 chars, and then the repeating records are 248 chars each.
    A stream offset of 597 puts me 8 characters into the third repeating record - at this point I had only declared one trailer record in the schema, 10 characters long.
    Why is the stream reader stopping at such a weird spot?
    The bottom line is I still haven't discovered the correct schema to handle the trailer records. Even if I set maxOccurs="4" (for the repeat record declaration) it still gets the first error. How does it find an unexpected end of stream looking
    for \r\n when the maxOccurs for the repeat record declaration should have the stream pointer in the 5th repeat record?
    I unfortunately don't have any options concerning the file structure.
    I have read a lot of posts concerning the trailer issue. I have seen a couple that looked interesting. I guess I'll just have to give them a try. The other option is to create a custom pipeline that will only take file lines of 248 characters.
    That's just disgusting!
    Harold Rosenkrans

  • Error in Flat File to XML conversion

    Hi all,
    I am trying to convert a flat file to XML using the Sender File Adapter and I am getting the following error message.
    2006-01-23 17:23:00 EST: Error: Conversion of complete file content to XML format failed around position 0: Exception: ERROR converting document line no. 2 according to structure 'GL_FileUpload_SAPECC_Header_DT1':java.lang.Exception: ERROR in configuration: more elements in file csv structure than field names specified!
    My flat file looks like this,
    --Start
    GL,GLI,1,RefTest,4011,Test,1234567890,12032005,12032005,GL,RK
    GL,GLI,4011,3011,,,,,,AU,600,7000,8000,9000,5000,RK,,,,,,,,,,,,,,,,,,,,
    ---End
    The adapter configuration is like this:
    Document Name: GL_FileUpload_SAPECC_Item_MT1
    Document Namespace: urn:corptech.qld.gov.au:sss_std_offering:gl
    RecordSet Name: GL_FileUpload_SAPECC_Record_DT1
    RecordSet Namespace: urn:corptech.qld.gov.au:sss_std_offering:gl
    RecordSet Structure: GL_FileUpload_SAPECC_Header_DT1,1,GL_FileUpload_SAPECC_Item_DT1,*
    RecordSet Sequence: Ascending
    Key FieldName: TransType
    On the Adapter Properties, I have got:
    --Start
    GL_FileUpload_SAPECC_Header_DT1.fieldNames: TransType,RowType,SequenceNo,ReferenceKey,SenderSystem,HeaderText,CompanyCode,DocumentDate,PostingDate,DocumentType,ReferenceNo
    GL_FileUpload_SAPECC_Header_DT1.fieldSeparator: ,
    GL_FileUpload_SAPECC_Item_DT1.fieldNames: TransType,RowType,SequenceNo,GLAccount,CostCentre,ProfitCentre,InternalOrder,WBSElement,TaxCode,Currency,GLAmount,VendorAmount,CustomerAmount,AssetAmount,DRCR,ItemText,VendorNo,CustomerNo,Name,Street,City,PostCode,PoBox,State,Country,BankKey,BankAccount,BankCountry,CalcTax,PaymentTerms,BaseDate,PaymentBlock,PaymentMethod,Assignment,AssetNo,AssetSubNo,AssetTransaction
    GL_FileUpload_SAPECC_Item_DT1.fieldSeparator: ,
    ignoreRecordsetName: true
    GL_FileUpload_SAPECC_Header_DT1.keyFieldValue: GL
    GL_FileUpload_SAPECC_Item_DT1.keyFieldValue: GL
    ---End
    The structure defined on the data type looks like this,
    --Start
    GL_FileUpload_SAPECC_Record_DT1   Complex Type
       GL_FileUpload_SAPECC_Header_DT1  Element
          TransType                     Element
          RowType                       Element
          Sequence Number               Element
        GL_FileUpload_SAPECC_Item_DT1   Element
          TransType                     Element
          RowType                       Element
          Sequence Number               Element
          GLAccount                     Element
    ---End
    Any help or suggestions, please.
    Thank you.
    Warm Regards,
    Ranjan

    Hi, Ranjan.
      First of all, let's look at the meaning of the error.
    > ...Exception: ERROR converting document line no. 2 according to
    > structure 'GL_FileUpload_SAPECC_Header_DT1':java.lang.Exception:
    > ERROR in configuration: more elements in file csv structure than
    > field names specified!
      It seems that XI interpreted the 2nd line as
    Header_DT1, not as the Item_DT1 that you meant.
    >  GL,GLI,4011,3011,,,,,,AU,600,7000,8000,9000,5000,RK,,,,,,,,,,,,,,,,,,,,
      That's why it says this line has more elements than the structure
    defined (Header_DT1).
      And the reason why XI misinterpreted the above as Header is that
    you used keyFieldValue with the same value.
    > ...Header_DT1.keyFieldValue: GL
    > ...Item_DT1.keyFieldValue: GL
      According to the following help,
    http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
    it says the following.
    Key Field Name
    If you specified a variable number of substructures for Recordset
    Structure, in other words, at least one substructure has the value
    ‘*’, then the substructures must be identified by the parser from
    their content. This means that a key field must be set with different
    constants for the substructures. In this case, you must specify a key
    field and the field name must occur in all substructures.
    How about using different constants for header and item if possible?
    Good luck.
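
    To see why identical key field values break the parse, here is a schematic Java illustration of what a key-field parser does: it dispatches on the key value alone, so with "GL" on both structures every line would fall into the header branch. The distinct GLH/GLI keys below are hypothetical:

    public class KeyFieldDispatch {
        static void dispatch(String line) {
            // TransType is the first comma-separated field, i.e. the key field
            String key = line.split(",", -1)[0];
            if (key.equals("GLH")) {              // hypothetical header keyFieldValue
                System.out.println("header: " + line);
            } else if (key.equals("GLI")) {       // hypothetical item keyFieldValue
                System.out.println("item:   " + line);
            }
        }

        public static void main(String[] args) {
            dispatch("GLH,1,RefTest,4011,Test,1234567890,12032005,12032005,GL,RK");
            dispatch("GLI,4011,3011,,,,,,AU,600,7000");
        }
    }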
