How to parse a flat file with C#
I need to parse a flat file with data that looks like
01,1235,555
02,2135,558
16,156,15614
16,000,000
You get the idea. Anyway, I'd like to just use a derived column and move on, except I need to put a line number on each row as it comes by, so the end result looks like this:
1,01,1235,555
2,02,2135,558
3,16,156,15614
4,16,000,000
I'm trying to do this with a script transformation, but I can't seem to get the hang of the syntax. I've looked at various examples, but everybody seems to prefer VB, and I'd like to keep all of my packages in C#. I've set up my input and output columns; I just need to figure out how to write code that says something like:
row_number = 1
line_number = row_number
record_type = second element of input.Split(',')
data_point_1 = third element of input.Split(',')
row_number = row_number + 1
/* Microsoft SQL Server Integration Services Script Component
 * Write scripts using Microsoft Visual C# 2008.
 * ScriptMain is the entry point class of the script. */
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    private int rowCounter = 0;

    // Runs once, before any rows pass through the component
    public override void PreExecute()
    {
        base.PreExecute();

        // Lock the SSIS variable for reading
        VariableDispenser variableDispenser = (VariableDispenser)this.VariableDispenser;
        variableDispenser.LockForRead("User::MaxID");
        IDTSVariables100 vars;
        variableDispenser.GetVariables(out vars);

        // Seed the internal counter with the value of the SSIS variable
        rowCounter = (int)vars["User::MaxID"].Value;

        // Unlock the variable
        vars.Unlock();
    }

    // Runs once for each record in the data flow
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Increment the counter and fill the new output column
        rowCounter++;
        Row.MaxID = rowCounter;
    }
}
Here is a script to get an incremental ID. In the script's ReadWriteVariables, add the "User::MaxID" variable to hold the last number used. On the Inputs and Outputs tab, create an output column; in the code above it is named MaxID, with a numeric data type.
Similar Messages
-
How to load a flat file in UTF-8 format into ODI as a source file?
Hi All,
Does anybody know how to load a flat file in UTF-8 format into ODI as a source file? Please guide me.
Regards,
Sahar
Could you explain which problem you are facing?
Francesco -
How to load a flat file with a lot of records
Hi,
I am trying to load a flat file with hundreds of records into an apps table. When I create the process and deploy it onto the console, it asks for an input in an HTML form. Why does it ask for an input when I have specified the input file directory in my process? Is there any way around this so that it just reads all the records from the flat file directly? Are custom queues related in any way to what I am about to do? Any documents on this process will be greatly appreciated. If anyone can help me with this it would be great. Thank you, guys.
After deploying it, do you see whether it is active and its status is on in the BPEL console, BPEL Process tab? It should not ask for input unless you are clicking it from the Dashboard tab. Do not click it from the Dashboard. Instead, put some files into the input directory. Wait a few seconds and you should see instances of the BPEL process created, which start to process the files asynchronously.
-
How to parse a big file with Regex/Pattern
I want to parse a big file using Matcher/Pattern, so I thought of using a BufferedReader.
The problem is that a BufferedReader forces me to read the file line by line, and my patterns occur not only inside a line but also across the end of one line and the beginning of the next.
For example, this class:
import java.util.regex.*;
import java.io.*;
public class Reg2 {
    public static void main(String[] args) throws IOException {
        File in = new File(args[1]);
        BufferedReader get = new BufferedReader(new FileReader(in));
        Pattern hunter = Pattern.compile(args[0]);
        String line;
        int lines = 0;
        int matches = 0;
        System.out.print("Looking for " + args[0]);
        System.out.println(" in " + args[1]);
        while ((line = get.readLine()) != null) {
            lines++;
            Matcher fit = hunter.matcher(line);
            //if (fit.matches()) {
            if (fit.find()) {
                System.out.println("" + lines + ": " + line);
                matches++;
            }
        }
        if (matches == 0) {
            System.out.println("No matches in " + lines + " lines");
        }
    }
}
Used with the pattern "ERTA" and this file (genomic sequence):
AAAAAAAAAAAERTAAAAAAAAAERT [end of line]
ABBBBBBBBBBBBBBBBBBBBBBERT [end of line]
ACCCCCCCCCCCCCCCCCCCCCCERT [end of line]
the program reports that it found the pattern only in this line:
"1: AAAAAAAAAAAERTAAAAAAAAAERT"
while my pattern is present 4 times, because some matches span line breaks.
Is it really a good idea to use a BufferedReader?
Does anyone have an idea?
Thanks
Edited by: jfact on Dec 21, 2007 4:39 PM
Edited by: jfact on Dec 21, 2007 4:43 PM
A quick and dirty demo:
import java.io.*;
import java.util.regex.*;
public class LineDemo {
    public static void main(String[] args) throws IOException {
        File in = new File("test.txt");
        BufferedReader get = new BufferedReader(new FileReader(in));
        int found = 0;
        String previous = "", next = "", lookingFor = "ERTA";
        Pattern p = Pattern.compile(lookingFor);
        while ((next = get.readLine()) != null) {
            // Prepend the tail of the previous line so matches that
            // span a line break are still found
            String toInspect = previous + next;
            Matcher m = p.matcher(toInspect);
            while (m.find()) found++;
            // Keep only (pattern length - 1) trailing characters, so a
            // match lying entirely inside the tail is not counted twice
            int keep = Math.min(next.length(), lookingFor.length() - 1);
            previous = next.substring(next.length() - keep);
        }
        System.out.println("Found '" + lookingFor + "' " + found + " times.");
    }
    /* test.txt contains these four lines:
       AAAAAAAAAAAERTAAAAAAAAAERT
       ABBBBBBBBBBBBBBBBBBBBBBERT
       ACCCCCCCCCCCCCCCCCCCCCCERT
       ACCCCCCCCCCCCCCCCCCCCCCBBB
    */
}
-
How to create two flat files with a single program
Hi All,
I am trying to create two files on the application server using OPEN DATASET and CLOSE DATASET.
Let's say the first file is file1 and the second file is file2. But when I go to transaction AL11 and check, only the second file appears there.
It may be because I am using OPEN DATASET and CLOSE DATASET twice in my program.
Can you help me understand how both files can appear on the application server, i.e. in AL11?
It's very urgent, please help me.
Thanks!
Vipin
Hi, do one thing:
Run your program in debugging mode, complete the first OPEN DATASET and CLOSE DATASET, then go and check whether the file got created; also check the sy-subrc value when you do the OPEN DATASET.
If the first file was not created, then no file will be in AL11, including your new file name.
In this way you can find out whether the file got created or not.
Also check that you are giving the two files different names; otherwise the second one will keep overwriting the existing one.
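The open-write-close-per-file pattern described above can also be sketched outside ABAP. Here is a minimal Java illustration (the file names are hypothetical): each file is written and closed on its own, and the first file's existence is verified before the second is written, mirroring a sy-subrc check after OPEN DATASET.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class TwoFiles {
    // Write content to its own file and close it before the next one is
    // opened, mirroring a separate OPEN DATASET / CLOSE DATASET pair per file
    static void writeFile(String name, String content) throws IOException {
        FileWriter out = new FileWriter(name);
        try {
            out.write(content);
        } finally {
            out.close();  // always release the handle, even on error
        }
    }

    public static void main(String[] args) throws IOException {
        writeFile("file1.txt", "first file\n");
        // Check the first file really exists before writing the second,
        // like checking sy-subrc after the first OPEN DATASET
        if (!new File("file1.txt").exists()) {
            throw new IOException("file1.txt was not created");
        }
        writeFile("file2.txt", "second file\n");
        System.out.println("file1 exists: " + new File("file1.txt").exists());
        System.out.println("file2 exists: " + new File("file2.txt").exists());
    }
}
```

Note the distinct file names: as pointed out above, reusing one name would simply overwrite the first file.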
Regards,
sasi -
How to handle flat file with variable delimiters in the file sender adapter
Hi friends,
I have some flat files on the FTP server and want to poll them into XI, but before processing in XI I want to do some content conversion in the file sender adapter. According to the general solution, I just need to specify the field names, field separator, end separator, etc. But the question is:
The fields in the test data may have different numbers of delimiters (,), for example:
ORD01,,,Z4XS,6100001746,,,,,2,1
OBJ01,,,,,,,,,,4,3
Some fields have only one ',' as the delimiter, but some of them have multiple ','.
How can I handle it in the content conversion?
Regards,
Bean
Hi Bing,
Please refer to the following blogs; you will get an idea:
File content conversion blogs:
/people/venkat.donela/blog/2005/03/02/introduction-to-simplefile-xi-filescenario-and-complete-walk-through-for-starterspart1
/people/venkat.donela/blog/2005/03/03/introduction-to-simple-file-xi-filescenario-and-complete-walk-through-for-starterspart2
/people/arpit.seth/blog/2005/06/02/file-receiver-with-content-conversion
/people/anish.abraham2/blog/2005/06/08/content-conversion-patternrandom-content-in-input-file
/people/shabarish.vijayakumar/blog/2005/08/17/nab-the-tab-file-adapter
/people/venkat.donela/blog/2005/06/08/how-to-send-a-flat-file-with-various-field-lengths-and-variable-substructures-to-xi-30
/people/jeyakumar.muthu2/blog/2005/11/29/file-content-conversion-for-unequal-number-of-columns
/people/shabarish.vijayakumar/blog/2006/02/27/content-conversion-the-key-field-problem
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/content.htm
Regards,
Vinod. -
How to load Unicode data files with fixed record lengths?
Hi!
To load Unicode data files with fixed record lengths (in terms of characters, not bytes!) using SQL*Loader manually, I found two ways:
Alternative 1: one record per row
SQL*Loader control file example (without POSITION, since POSITION always refers to bytes!):
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
INFILE unicode.dat
INTO TABLE STG_UNICODE
TRUNCATE
(
A CHAR(2) ,
B CHAR(6) ,
C CHAR(2) ,
D CHAR(1) ,
E CHAR(4)
)
Datafile:
001111112234444
01NormalDExZWEI
02ÄÜÖßêÊûÛxöööö
03ÄÜÖßêÊûÛxöööö
04üüüüüüÖÄxµôÔµ
Alternative 2: variable-length records
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
INFILE unicode_var.dat "VAR 4"
INTO TABLE STG_UNICODE
TRUNCATE
(
A CHAR(2) ,
B CHAR(6) ,
C CHAR(2) ,
D CHAR(1) ,
E CHAR(4)
)
Datafile:
001501NormalDExZWEI002702ÄÜÖßêÊûÛxöööö002604üuüüüüÖÄxµôÔµ
Problems:
Implementing these two alternatives in OWB, I encounter the following problems:
* How to specify LENGTH SEMANTICS CHAR?
* How to suppress the POSITION definition?
* How to define a flat file with variable length and how to specify the number of bytes containing the length definition?
Or is there another way that can be implemented using OWB?
Any help is appreciated!
Thanks,
Carsten.
Hi Carsten,
If you need to support the LENGTH SEMANTICS CHAR clause in an external table then one option is to use the unbound external table and capture the access parameters manually. To create an unbound external table you can skip the selection of a base file in the external table wizard. Then when the external table is edited you will get an Access Parameters tab where you can define the parameters. In 11gR2 the File to Oracle external table can also add this clause via an option.
Cheers
David -
Flat file with fixed lengths to XI 3.0 using a Central File Adapter---Error
Hi
According to the following link
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
In the Adapter Monitor I got the following error for the sender adapter:
Last message processing started 23:47:35 2008-10-25, Error: Conversion of complete file content to XML format failed around position 0 with java.lang.Exception: ERROR converting document line no. 1 according to structure 'Substr':java.lang.Exception: Consistency error: field(s) missing - specify 'lastFieldsOptional' parameter to allow this
last retry interval started 23:47:35 2008-10-25
length 15,000 secs
Can someone help me out?
Thanks
Ram
From the blog you referenced:
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
Go to step 4, additional parameters, and add as the last entry:
<recordset structure>.lastFieldsOptional Yes
e.g.,
Substr.lastFieldsOptional Yes -
Send a flat file with fixed lengths to XI 3.0 using a Central File Adapter?
Hello,
I'm wondering if someone has experience setting up conversion for different record structures. The example shown,
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter,
(in a great way) only covers one kind of structure.
How should it be done if the file contained:
10Mat1
20100PCS
The first record structure has columns:
ID(2), Material(10)
The second record structure has columns:
ID(2), Quantity(3), Unit of measure(3)
Brgds
Kalle
Message was edited by: Karl Bergstrom
The configuration would be as follows:
Content Conversion Parameters:
Document Name: <your message type name>
Document Namespace: <your message type namespace>
Document Offset: <leave empty>
Recordset Name: <any name>
Recordset Namespace: <leave empty>
Recordset Structure: row1,*,row2,*
Recordset Sequence: any
Recordsets per Message: *
Key Field Name: ID
Key Field Type: String
Parameters for Recordset Structures:
row1.fieldNames ID,Material
row1.fieldFixedLengths 2,10
row1.keyFieldValue 10
row2.fieldNames ID,Quantity,UOM
row2.fieldFixedLengths 2,3,3
row2.keyFieldValue 20
Instead of row1 and row2 you can choose any name.
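For illustration only, the key-field dispatch this configuration performs can be sketched in plain Java: read the two-character ID and cut each line at the fixed offsets given above (the class and method names here are made up, not part of the XI adapter):

```java
import java.util.ArrayList;
import java.util.List;

public class KeyFieldSplit {
    // Cut a line into fields at fixed character offsets,
    // like the fieldFixedLengths parameter does
    static List<String> cut(String line, int... widths) {
        List<String> fields = new ArrayList<>();
        int pos = 0;
        for (int w : widths) {
            fields.add(line.substring(pos, Math.min(pos + w, line.length())));
            pos += w;
        }
        return fields;
    }

    public static void main(String[] args) {
        String[] lines = { "10Mat1", "20100PCS" };
        for (String line : lines) {
            String id = line.substring(0, 2);   // the key field
            if (id.equals("10")) {
                // row1.fieldFixedLengths 2,10 -> ID, Material
                System.out.println("row1: " + cut(line, 2, 10));
            } else if (id.equals("20")) {
                // row2.fieldFixedLengths 2,3,3 -> ID, Quantity, UOM
                System.out.println("row2: " + cut(line, 2, 3, 3));
            }
        }
    }
}
```

This mirrors how the adapter picks the record structure per line via keyFieldValue before applying that structure's fixed lengths.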
Regards
Stefan -
Flat file with chinese characters
Hi all,
I am working on a solution to map a file with this structure (not xml):
//.comment 1
0~keyh2..hn~
0~it1it2..itn~key
0~it1it2..itn~key
//.comment 2
0~keyh2..hn~
0~it1it2..itn~key
0~it1it2..itn~key
0~it1it2..itn~key
This is my conversion setup
recordset.structure = comment,1,header,1,item,*
recordset.sequence = variable
keyFieldName = key
comment.fieldSeparator = .
comment.fieldStructure = key.comment
comment.keyFieldValue = //
header.fieldSeparator = ~~
header.beginSeparator = 0~~
header.endSeparator = ~~
header.fieldStructure = 0~keyh2..hn~
header.keyFieldValue = 0
item.fieldSeparator = ~~
item.beginSeparator = 0~~
item.fieldStructure = 0~it1it2..itn~key
item.keyFieldValue = 1
The problem now is that this file comes from a Chinese system and contains Chinese characters (it looks like 2 bytes per letter). When I set the character encoding to ISO-2022, the adapter throws the exception:
java.io.UnsupportedEncodingException
When I try to process it without passing any encoding, the exception is:
more elements in file csv structure than field names specified
Can anyone help me with this?
br
Dawid
Hi,
I think something is wrong with the File Content Conversion parameters.
You can avoid comment 1 and comment 2 by using the Document Offset parameter. Follow this link for that:
http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
I think you didn't specify the field names in the File Content Conversion parameters.
Follow these two weblogs for the File Content Conversion parameters:
/people/venkat.donela/blog/2005/06/08/how-to-send-a-flat-file-with-various-field-lengths-and-variable-substructures-to-xi-30
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
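As a side note on the encoding part of the question: java.io.UnsupportedEncodingException usually means the charset name itself is not recognized ("ISO-2022" alone is not a valid Java charset name; concrete names such as ISO-2022-CN, GB2312 or UTF-8 are). If the file really is UTF-8, naming that charset explicitly works. A small stand-alone Java sketch, independent of XI (the file name is illustrative):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class ReadUtf8 {
    public static void main(String[] args) throws IOException {
        // Write a line containing Chinese characters as UTF-8
        // (multiple bytes per character)
        File f = new File("chinese.txt");
        Writer w = new OutputStreamWriter(new FileOutputStream(f), StandardCharsets.UTF_8);
        w.write("0~中文测试~key\n");
        w.close();

        // Read it back, naming the charset explicitly; an unknown name
        // is what triggers UnsupportedEncodingException
        BufferedReader r = new BufferedReader(
                new InputStreamReader(new FileInputStream(f), StandardCharsets.UTF_8));
        String line = r.readLine();
        r.close();
        System.out.println(line);
    }
}
```

The same idea applies in the adapter: the encoding configured must be a concrete, supported charset name that matches how the sending system actually wrote the file.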
Hope it helps.
Regards,
JE -
How can we load a flat file with very, very long lines into a table?
Hello:
We have to load a flat file with OWB. The problem is that each line in the file might be up to 30,000 characters long (up to 1,000 units of information per line, each 30 characters long).
Of course, our mapping should insert these units of information as independent rows in a table (1,000 rows, in our example).
We do not know how to go about it. We usually load flat files using table functions, but we are not sure they will be able to cope with these huge lines. And how should we pivot those lines? Will the Pivot operator do the trick? Or maybe we should pivot those lines outside the database before loading them?
We are a bit lost. Any suggestion would be appreciated.
Regards
Edited by: [email protected] on Oct 29, 2008 8:43 AM
Edited by: [email protected] on Oct 29, 2008 8:44 AM
Yes, well, we could define a 1,000-column external table and then map those 1,000 columns to the Pivot operator; perhaps it would work. But we have been investigating a little, and we think we have found a better solution: there is a Unix utility called "fold". This utility can split our 30,000-character lines into 1,000 lines of 30 characters each: just what we needed. Then we can load the resulting file using an external table.
We think this is a much better solution than handling 1,000 columns in the external table and in the Pivot operator.
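If fold is not available on the platform, the same fixed-width chunking is only a few lines of code. A minimal Java sketch (the data here is made up; a real run would read the actual file):

```java
import java.util.ArrayList;
import java.util.List;

public class Fold {
    // Split one long line into pieces of at most 'width' characters,
    // like the Unix "fold -w width" utility
    static List<String> fold(String line, int width) {
        List<String> pieces = new ArrayList<>();
        for (int i = 0; i < line.length(); i += width) {
            pieces.add(line.substring(i, Math.min(i + width, line.length())));
        }
        return pieces;
    }

    public static void main(String[] args) {
        // A 90-character stand-in for one of the 30,000-character lines
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 30; j++) sb.append((char) ('A' + i));
        }
        List<String> rows = fold(sb.toString(), 30);
        System.out.println(rows.size() + " rows");
        for (String r : rows) System.out.println(r);
    }
}
```

On the command line the equivalent would be something like `fold -w 30 wide.dat > narrow.dat` (file names illustrative), after which the narrow file loads straight into an external table.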
Thanks for your help.
Regards
Edited by: [email protected] on Oct 29, 2008 10:35 AM -
How to generate blank spaces at end of the record in a flat file with fixed
Hi,
I am generating a flat file with fixed-length records.
In my ABAP program I can see the spaces at the end of the records in the debugger, but after downloading to the application server I cannot see those spaces.
How can I generate blank spaces at the end of each record in a flat file?
Please update.
Thank you
How are you downloading the file? And how are you looking at the file on the application server?
Can you provide snippets of your code?
Cheers
John -
How to create a flat file with fixed-length records
I need help to export an Oracle table to a flat file with fixed-length records and without column separators.
The fixed length is the most important requirement.
My table has 50 columns of varchar, date and number types.
Date and number columns may be empty, null or have values.
Thanks a lot for any help.
[email protected]
Hi,
You can use this trick:
SQL>desc t
Name Null? Type
NAME VARCHAR2(20)
SEX VARCHAR2(1)
SQL>SELECT LENGTH(LPAD(NAME,20,' ')||LPAD(SEX,1,' ')), LPAD(NAME,20,' ')||LPAD(SEX,1,' ') FROM T;
LENGTH(LPAD(NAME,20,'')||LPAD(SEX,1,'')) LPAD(NAME,20,'')||LPA
21 aF
21 BM
21 CF
21 DM
4 rows selected.
SQL>SELECT * FROM t;
NAME S
a F
B M
C F
D M
4 rows selected.
Regards
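If the export is done from application code rather than SQL, String.format gives the same fixed-width effect. A minimal Java sketch mirroring the LPAD trick above (the class name is made up; note that, unlike LPAD, %20s pads but does not truncate values longer than the width):

```java
public class FixedWidth {
    // Build one fixed-length record: NAME left-padded to 20 characters and
    // SEX to 1, mirroring LPAD(NAME,20,' ') || LPAD(SEX,1,' ')
    static String record(String name, String sex) {
        return String.format("%20s%1s", name, sex);
    }

    public static void main(String[] args) {
        String r = record("a", "F");
        System.out.println("[" + r + "]");
        System.out.println("length = " + r.length());
    }
}
```

Every record comes out exactly 21 characters long regardless of the input values, which is the fixed-length requirement; null or empty columns would need to be mapped to "" before formatting.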
Flat file with IDOC structure on sender side - how to convert
Hello,
I have a flat file with an IDOC structure in it, as produced by R/3.
Example:
EDI_DC40 4000000000000761577620 3014 ORDERS05 ORDRSP SAPR3P LS LS_R3P_302 DATEIPORT2KMBG0000019887 20061206095508 20061206095508
E2EDK01005 400000000000076157700000100000001004 EUR 1.00000 0039 DFUE0000543012 12Muenster/Weststr.0000019887
E2EDK14 40000000000007615770000020000000200630
E2EDK14 40000000000007615770000030000000200710 ......
I need to send this file via XI to an R/3 system as an ORDRSP IDoc.
My question is:
What is the easiest way to do this?
My intention is to use a self-programmed mapping!
Best regards
Dirk
Hi Shabarish,
I found a simple solution:
The file adapter picks up the file from the external drive,
the receiver adapter saves it in a directory on the XI server,
and WE21 can pick it up from there!
But, next problem:
When trying to save the file port in WE21 (as described in the guide) I get a message: "Port incomplete, not possible to save!" (the message is green, i.e. informational).
So I cannot save the port. What is wrong?
Hm, possibly the reason was that the inbound and outbound file names were the same! Now it's OK! The port is created!
Regards
Dirk
Message was edited by:
Dirk Meinhard -
InfoSpoke flat file with tab separator
Hello,
I have an InfoSpoke which creates a flat file with a comma field separator. I would like to have a tab character as the field separator. How would I change it?
Varun,
as Ram told, you can give ',' as the separator.
All the best.
Regards,
Nagesh Ganisetti.