Reading generic delimited files
Hi,
I have to read in a number of files (all CSV) and display them in a table...
I was wondering if it is possible to have a generic reader that can handle the different files, with some way to specify the data elements.
I basically don't want to create a new reader class/method for each of the data files I have to read...
I'm not sure if this even makes sense..
Thanks,
Sandie
Sounds like you could store your column name to number mappings in a properties file, like
FirstName: 1
LastName: 2

And then use a CSV parser to do the work of parsing, and use your code to do the per-column processing, based on the numbers found in that properties file. If you always use those column-number mappings in your code, then when a new column is dropped in and changes your column count, you just adjust the properties file.
Sound reasonable?
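A minimal sketch of that idea in Java; the property names, file layout, and class name here are illustrative assumptions, not an existing API:

```java
import java.util.*;

public class MappedCsvReader {
    // Column positions loaded from a properties file, e.g.:
    //   FirstName=1
    //   LastName=2
    // (1-based positions, as in the example above)
    private final Properties columns;

    public MappedCsvReader(Properties columns) {
        this.columns = columns;
    }

    // Look up a named field in one parsed CSV line.
    public String field(String[] row, String name) {
        int pos = Integer.parseInt(columns.getProperty(name));
        return row[pos - 1].trim();
    }

    public static void main(String[] args) {
        Properties cols = new Properties();
        cols.setProperty("FirstName", "1");
        cols.setProperty("LastName", "2");

        MappedCsvReader reader = new MappedCsvReader(cols);
        // A naive split for illustration; a real CSV parser should handle quoting.
        String[] row = "Jane,Doe".split(",");
        System.out.println(reader.field(row, "LastName")); // prints "Doe"
    }
}
```

When a new column shifts everything, only the properties file changes; the code keeps asking for fields by name.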
Similar Messages
-
Hi,
What are the FMs that I can use to read comma-delimited files? Also, will it be able to run in the background?
Thanks,
RT

Hi Rob,
As far as I know, we cannot upload data from the presentation server in the background. For that, the file needs to be placed on the application server and read with the OPEN DATASET command.
Below is just an example to give you a feel for it.
eg:
types: begin of t_data,
         vbeln like vbak-vbeln,
         posnr like vbap-posnr,
         matnr like vbap-matnr,
         menge like vbap-menge,
       end of t_data.
data: it_data type standard table of t_data.
data: wa_data type t_data.
data: l_content(100) type c.

open dataset p_file for input in text mode.
if sy-subrc ne 0.
* error reading file.
else.
  do.
    read dataset p_file into l_content.
    if sy-subrc ne 0.
      close dataset p_file.
      exit.
    else.
      split l_content at ',' into wa_data-vbeln
            wa_data-posnr wa_data-matnr wa_data-menge.
      append wa_data to it_data.
    endif.
  enddo.
endif.
Hope this helps you.
Kind Regards
Eswar -
Read Tab delimited File from Application server
Hi Experts,
I am facing problem while reading file from Application server.
File in Application server is stored as follows, The below file is a tab delimited file.
##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
I have downloaded this file from the application server using transaction CG3Y. The downloaded file is tab-delimited, and I could not see '#' in it.
The code is as Below.
CONSTANTS: c_split TYPE abap_char1 VALUE cl_abap_char_utilities=>horizontal_tab.
Here I am using IGNORING CONVERSION ERRORS in order to avoid a conversion-error short dump.
OPEN DATASET wa_filename-file FOR INPUT IN TEXT MODE ENCODING DEFAULT IGNORING CONVERSION ERRORS.
IF sy-subrc = 0.
WRITE : /,'...Processing file - ', wa_filename-file.
DO.
* Read the contents of the file
READ DATASET wa_filename-file INTO wa_file-data.
IF sy-subrc = 0.
SPLIT wa_file-data AT c_split INTO wa_adrc_2-kunnr
wa_adrc_2-title
wa_adrc_2-name1
wa_adrc_2-name2
wa_adrc_2-name3
wa_adrc_2-name4
wa_adrc_2-name_co
wa_adrc_2-city1
wa_adrc_2-city2
wa_adrc_2-regiogroup
wa_adrc_2-post_code1
wa_adrc_2-post_code2
wa_adrc_2-po_box
wa_adrc_2-po_box_loc
wa_adrc_2-transpzone
wa_adrc_2-street
wa_adrc_2-house_num1
wa_adrc_2-house_num2
wa_adrc_2-str_suppl1
wa_adrc_2-str_suppl2
wa_adrc_2-country
wa_adrc_2-langu
wa_adrc_2-region
wa_adrc_2-sort1
wa_adrc_2-sort2
wa_adrc_2-deflt_comm
wa_adrc_2-tel_number
wa_adrc_2-tel_extens
wa_adrc_2-fax_number
wa_adrc_2-fax_extens
wa_adrc_2-taxjurcode.
WA_FILE-DATA has the values below:
##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
And this is split at the tab delimiter and moved to the other variables as shown above.
Please guide me on how to read the contents of the file without the '#' characters.
I have tried all possible ways and have been unable to find a solution.
Thanks,
Shrikanth

Hi,
In ECC 6, if all the Unicode patches are applied, then UTF-16 will definitely work.
Moreover, I would suggest you first REPLACE '#' with some other character such as '*' or ',', and then check in the debugger whether any further '#' appears.
If no '#' appears, then try the SPLIT.
If the '#' still appears after the REPLACE statement, try to find out what exactly it is, whether it is a horizontal tab etc., and then replace it again and then split.
Follow this process until all the '#' characters are replaced.
This should work for you.
Let me know if you face any further issues.
Regards
Satish Boguda -
... A tough Task,... Reading Space Delimited File
Hi All,
We have to read 2 Files ( Both Either Space or Tab Delimited) text Files.
The records in the files are like ..
SchoolID Teacher1 Teacher 2 Teacher 3 Teacher4
SK001 TC001 TC002 TC003 TC004
SK002 TC001 TC002 TC003 TC004
I want to read and insert the records in School Tables which looks like
School ID Teacher ID .................
SK001 TC001
SK001 TC002
SK001 TC003
SK001 TC004
SK002 TC001
SK002 TC002
Had we wanted to insert the records as they are, we could use SQL*Loader, which is a very effective and simple data-loading tool.
But since we need to store the data differently from how it appears in the text file, may I know of any utility Oracle offers to read text-based files?
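Independent of the loading tool, the reshaping itself is simple: each input row fans out into one (school, teacher) pair per teacher column. A hedged Java sketch (class and method names are made up for illustration):

```java
import java.util.*;

public class SchoolUnpivot {
    // Turn one whitespace-delimited input line, e.g.
    //   "SK001 TC001 TC002 TC003 TC004"
    // into (schoolId, teacherId) pairs ready for insertion.
    public static List<String[]> unpivot(String line) {
        String[] parts = line.trim().split("\\s+");
        List<String[]> rows = new ArrayList<>();
        for (int i = 1; i < parts.length; i++) {
            rows.add(new String[] { parts[0], parts[i] });
        }
        return rows;
    }

    public static void main(String[] args) {
        for (String[] row : unpivot("SK001 TC001 TC002 TC003 TC004")) {
            System.out.println(row[0] + " " + row[1]);
        }
    }
}
```

The same fan-out could equally be done inside the database after a bulk load of the raw rows.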
Thanks a lot for your response and help.
Regards

A tough task?
Designing and writing an o/s kernel is a tough task. Convincing the boss that the marketing and sales "account manager" from vendor ABC is talking utter bs is a tough task. Getting a J2EE developer to grok the fundamentals of RDBMS is a tough task. Cleaning the old lead pipe after extensive use is a tough task.
But this? I would not call it tough. If only most of the technical problems we face were this simple.
Never let a problem intimidate you. They're almost never that tough. -
Reading tab delimited file from application server
Hi All,
I do know that we need to use OPEN DATASET to read a file from the application server, but my question is: when you use READ DATASET v_file INTO wa_final, this wa_final is a work area, and I have also declared an internal table. So do we need to split the record at the tab into the corresponding fields and append the records into an internal table i_input?
Please let me know on this....
thanks in advance....
Poonam

Hi,
First see the file contents on the application server: check whether the contents are separated by any symbol. If they are, then you have to split the data before appending it to the internal table.
check this code.
DATA: l_data_string TYPE string.
filename = p_file.
OPEN DATASET p_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc EQ 0.
DO.
READ DATASET filename INTO l_data_string.
IF sy-subrc NE 0.
EXIT.
ENDIF.
CLEAR k_input.
SPLIT l_data_string AT cl_abap_char_utilities=>horizontal_tab INTO k_input-agreement k_input-suffix k_input-status
k_input-first_name k_input-last_name k_input-job_title k_input-tel k_input-fax k_input-email_address k_input-mob_number.
APPEND k_input TO i_input.
ENDDO.
ENDIF.
Regards,
Venu -
What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
Thanks in advance for your review and am hopeful for a reply.
ITBobbyP85

You should have no trouble doing this in SSIS. Simply add a data flow with connection managers to an existing .xls file (Excel connection manager) and a new .csv file (flat file). Add a source for the .xls and a destination for the .csv, and set the destination's "DelayValidation" property to true. Use an expression to define the name of the new .csv file.
In the flat file connection manager, set the column delimiter to the pipe character.
How to read and parse a comma delimited file? Help
Hi, does anyone know to read and parse a comma delimited file?
Should I use StreamTokenizer, StringTokenizer, or the oro RegEx packages?
What is the best?
I have a file that has several lines of data that is double-quoted and comma delimited like:
"asdfadsf", "asdfasdfasfd", "asdfasdfasdf", "asdfasdf"
"asdfadsf", "asdfasdfasfd", "asdfasdfasdf", "asdfasdf"
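For lines shaped exactly like the samples above, where every field is double-quoted and fields contain no embedded quotes, a plain-Java sketch without any parser library could be:

```java
import java.util.*;
import java.util.regex.*;

public class QuotedCsvLine {
    // Pull out the contents of each "..." field; assumes fields
    // contain no embedded quotes, as in the sample lines above.
    private static final Pattern FIELD = Pattern.compile("\"([^\"]*)\"");

    public static List<String> parse(String line) {
        List<String> fields = new ArrayList<>();
        Matcher m = FIELD.matcher(line);
        while (m.find()) {
            fields.add(m.group(1));
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(parse("\"asdfadsf\", \"asdfasdfasfd\""));
        // prints "[asdfadsf, asdfasdfasfd]"
    }
}
```

For CSV with escaped quotes or embedded commas, a real parser is the safer choice.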
Any help would be greatly appreciated.
thanks,
Spack

import java.util.*;
import java.io.*;

public class ResourcePortalParser {

    public ResourcePortalParser() {
    }

    public Vector tokenize() throws IOException {
        File reportFile = new File("C:\\Together5.5\\myprojects\\untitled2\\accessFile.txt");
        Vector tokenVector = new Vector();
        StreamTokenizer tokenized = new StreamTokenizer(new FileReader(reportFile));
        tokenized.eolIsSignificant(true);
        while (tokenized.nextToken() != StreamTokenizer.TT_EOF) {
            switch (tokenized.ttype) {
                case StreamTokenizer.TT_WORD:
                    System.out.println("Adding token - " + tokenized.sval);
                    tokenVector.addElement(tokenized.sval);
                    break;
            }
        }
        return tokenVector;
    }
}
Reading in a tab delimited file
I have the following table in oracle 9i:
arc_book
arc_id, arc_title, arc_publisher, arc_release_date
And a tab-delimited file called smc_book. Here is what the smc_book tab-delimited file looks like:
smc_id smc_title smc_publisher smc_release_date
1234 "Beautiful Wonder" "Wrox Books" 1/1/1999
2356 "Master PL/SQL" "OReilly Media" 6/5/2004
5432 "Harry Potter and Goblet of Fire" "Simon & Shuster" 2/4/2001
What I want to do is create a lookup table where I match up the records that are the same book in the smc_book file and the arc_book table.
So the lookup table would look like:
lookup
smc_id, arc_id, arc_title, arc_publisher
When I read in the smc_book file I don't want to re-add books that have already been added to the lookup table.
Also, when I am matching up the books in the smc_book file and the arc_book table, I want to make sure it catches all similar book titles that have the same publisher and release date, because the book title and publisher are not exactly the same in both tables. For example, with data:
smc_book file:
smc_id smc_title smc_publisher smc_release_date
1234 "Beautiful Wonder" "Wrox Books" 1/1/1999
2356 "Master PL/SQL" "OReilly Media" 6/5/2004
5432 "Harry Potter and Goblet of Fire" "Simon & Shuster" 2/4/2001
arc_book table:
arc_id arc_title arc_publisher arc_release_date
1245 "Wonder, Beautiful" "Wrox" 1/1/1999
1244 "The PL-SQL, Master" "Media, OReilly" 6/5/2004
4352 "Golbet of Fire, Harry Potter" "Simon and Shuster" 2/4/2001
So I want to match up "Beautiful Wonder" in the smc_book file with "Wonder, Beautiful" in the arc_book table even though the title and publisher data is not exact.
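As one possible sketch of that fuzzy matching (the helper below is an illustration, not part of any Oracle utility): normalize each title by lowercasing, stripping punctuation, and sorting the words, so that word order and commas no longer matter:

```java
import java.util.*;

public class BookMatcher {
    // Normalize a title: lowercase, drop punctuation, sort the
    // words. "Beautiful Wonder" and "Wonder, Beautiful" then
    // produce the same key and can be joined on it.
    public static String normalize(String s) {
        String[] words = s.toLowerCase()
                          .replaceAll("[^a-z0-9 ]", " ")
                          .trim()
                          .split("\\s+");
        Arrays.sort(words);
        return String.join(" ", words);
    }

    public static void main(String[] args) {
        System.out.println(
            normalize("Beautiful Wonder").equals(normalize("Wonder, Beautiful")));
        // prints "true"
    }
}
```

This handles reordered words but not spelling variants like "Wrox" vs "Wrox Books"; those need looser techniques (e.g. Oracle's UTL_MATCH edit distance) combined with the publisher/release-date check.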
I would really appreciate it if someone could help me out.
Thanks
Robert

Well, you can use the SQL*Loader utility to load the data in your file into some temp table, and use that temp table as a base for your comparisons.
The other way around is to use external tables.
sql loader:
http://www.orafaq.com/faqloadr.htm
http://www.oreilly.com/catalog/orsqlloader/chapter/ch01.html
external tables
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2153251 -
Reading Tab Delimited Text File
Hi
I am having a problem reading a tab-delimited text file.
If I place spaces in the name of the text file, it doesn't read the file.
If there is a simple name without spaces, then it reads easily, but when the file name contains a space it shows nothing.
Please help me.
Give me some code or links to a solution.
thanks!

Could you post an example of the file? With a FedEx report file, I created an application to read each line and split the String into an array wherever a tab exists. Since the columns aren't evenly tabbed, I used a regular expression to replace any run of whitespace in a line with a \t (tab). It then takes each line, splits it into an array at the tabs, and accesses the specific column of the String by index.
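That normalize-then-split step might look like the following sketch (class and method names are mine, not from the original post):

```java
public class TabSplit {
    // Collapse any run of spaces/tabs into a single tab,
    // then split the line into columns at the tabs.
    public static String[] columns(String line) {
        String normalized = line.trim().replaceAll("\\s+", "\t");
        return normalized.split("\t");
    }

    public static void main(String[] args) {
        String[] cols = columns("0101   8001   0   11   0   0");
        System.out.println(cols[1]); // prints "8001"
    }
}
```

Note this only works while no column value itself contains a space; lines with missing columns (as in some report rows below) will shift the indexes.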
08/28/2007 FedEx Ground COLO2
23:16:29 LANE FULL REPORT reptClaneFulls
Page 1
Next Sorters
Load Main Auto Smalls
Chute Point Primary Secondary Primary Secondary
0101 8001 0 11 0 0
0102 5333 0 9 0 0
0104 0142 0 441 0 0
0106 0328 0 5 0 0
0107 0452 0 2 0 0
0110 0333 0 2 0 0
0113 0447 0 7 0 0
0114 0447 0 1 0 0
0115 0303 0 11 0 0
0127 0132 0 2 0 0
0128 0132 0 11 0 0
0129 0132 0 9 0 0
0130 0405 0 102 0 0
0131 0371 0 270 0 0
0132 0371 0 168 0 0
0133 0122 0 13 0 0
0134 0456 0 36 0 0
0135 0146 0 152 0 0
0136 0146 0 2 0 0
0138 0371 0 24 0 0
0201 0552 0 9 0 0
0204 0445 0 69 0 0
0205 0445 0 51 0 0
0207 0641 0 1 0 0
0211 0551 0 1 0 0
0212 0454 0 7 0 0
0213 3441 0 39 0 0
0216 0841 0 1 0 0
0217 0631 0 211 0 0
0222 0441 0 12 0 0
0223 0 5 0 0
0224 0441 0 9 0 0
0225 0441 0 42 0 0
0226 0441 0 11 0 0
0227 0441 0 5 0 0
0229 0619 0 753 0 0
0230 0619 0 188 0 0
0231 0602 0 2 0 0
0232 0604 0 91 0 0
0233 0604 0 3 0 0
0238 0601 0 1 0 0
0304 1435 0 12 0 0
0307 2430 0 477 0 0
0309 1430 0 98 0 0
0310 1430 0 1 0 0
0311 0971 0 1 0 0
0312 0449 0 19 0 0
0313 0449 0 128 0 0
0315 0923 0 31 0 0
0316 0981 0 11 0 0
0317 0972 0 9 0 0
0318 0972 0 6 0 0
0319 2431 0 1 0 0
0323 0436 0 9 0 0
0324 0 3 0 0
0326 3431 0 12 0 0
0328 0958 0 55 0 0
0332 0430 0 84 0 0
0333 0430 0 16 0 0
0334 4430 0 29 0 0
0337 0480 0 2 0 0
0343 0555 0 36 0 0
0405 1437 0 52 0 0
0406 1437 0 51 0 0
0407 3152 0 58 0 0
0408 3152 0 2 0 0
0410 0152 0 5 0 0
0411 0152 0 3 0 0
0415 0100 0 55 0 0
0417 0253 0 95 0 0
0420 0282 0 1 0 0
0421 0282 0 82 0 0
0422 0753 0 13 0 0
0425 0165 0 9 0 0
0426 0165 0 8 0 0
0427 0089 0 21 0 0
0428 0089 0 10 0 0
0434 0437 0 3 0 0
0436 0170 0 4 0 0
0441 0263 0 9 0 0
0442 0219 0 1 0 0
0443 0219 0 20 0 0
0444 3258 0 3 0 0
0447 0156 0 89 0 0
0448 0156 0 59 0 0
0449 0156 0 1 0 0
0450 0760 0 14 0 0
0451 3163 0 16 0 0
0453 0212 0 27 0 0
0454 7760 0 2 0 0
107A 0219 0 0 0 88
108A 0219 0 0 0 89
110A 7061 0 0 0 185
111A 7061 0 0 0 190
112A 0170 0 0 0 3
113A 0170 0 0 0 1
114A 0170 0 0 0 3
118A 0089 0 0 0 261
119A 0089 0 0 0 255
120A 0282 0 0 0 5
121A 0282 0 0 0 4
122A 0753 0 0 0 6
124A 3156 0 0 0 258
125A 0258 0 0 0 74
126A 3258 0 0 0 3
127A 0263 0 0 0 34
128A 3263 0 0 0 7
129A 0152 0 0 0 39
130A 0152 0 0 0 44
131A 0152 0 0 0 33
132A 3152 0 0 0 176
133A 3152 0 0 0 181
134A 0253 0 0 0 34
135A 0253 0 0 0 34
136A 3253 0 0 0 103
137A 0156 0 0 0 85
138A 0156 0 0 0 87
139A 0437 0 0 0 271
140A 3437 0 0 0 111
141A 0165 0 0 0 204
142A 3165 0 0 0 5
143A 0163 0 0 0 9
144A 3163 0 0 0 5
147A 7760 0 0 0 9
201A 8001 0 0 0 1
202A 8001 0 0 0 2
205A 3435 0 0 0 62
206A 0402 0 0 0 218
208A 0405 0 0 0 15
212A 0411 0 0 0 5
213A 0411 0 0 0 5
214A 3441 0 0 0 224
215A 3441 0 0 0 225
216A 0410 0 0 0 9
217A 0449 0 0 0 49
218A 0449 0 0 0 51
219A 3452 0 0 0 12
220A 0452 0 0 0 4
221A 0452 0 0 0 6
222A 3431 0 0 0 33
223A 3431 0 0 0 38
224A 2430 0 0 0 14
225A 2430 0 0 0 14
226A 4430 0 0 0 15
227A 4430 0 0 0 15
228A 0430 0 0 0 4
229A 0430 0 0 0 4
230A 1430 0 0 0 11
231A 1430 0 0 0 23
232A 0456 0 0 0 96
233A 7433 0 0 0 9
234A 0333 0 0 0 3
235A 7641 0 0 0 10
236A 0802 0 0 0 5
240A 0631 0 0 0 111
241A 0551 0 0 0 3
245A 0958 0 0 0 71
246A 0554 0 0 0 72
247A 0923 0 0 0 48
248A 0371 0 0 0 31
249A 0972 0 0 0 49
250A 0381 0 0 0 3
251A 0619 0 0 0 27
253A 0604 0 0 0 48
254A 0132 0 0 0 57
255A 0132 0 0 0 53
257A 0942 0 0 0 16
307A 0951 0 0 0 138
308A 0464 0 0 0 22
309A 0641 0 0 0 45
310A 0641 0 0 0 47
311A 0122 0 0 0 16
312A 0971 0 0 0 76
313A 0602 0 0 0 37
314A 0841 0 0 0 9
315A 0841 0 0 0 8
317A 0958 0 0 0 2
318A 0532 0 0 0 35
320A 0604 0 0 0 16
322A 0981 0 0 0 90
323A 0371 0 0 0 42
324A 0972 0 0 0 13
325A 0372 0 0 0 35
326A 0928 0 0 0 14
327A 0619 0 0 0 78
328A 0328 0 0 0 17
330A 0303 0 0 0 27
331A 0923 0 0 0 1
332A 0336 0 0 0 3
333A 7850 0 0 0 7
335A 0146 0 0 0 8
337A 0454 0 0 0 20
338A 3445 0 0 0 86
339A 0445 0 0 0 371
340A 1441 0 0 0 42
341A 2442 0 0 0 111
342A 0441 0 0 0 59
343A 1442 0 0 0 23
344A 0442 0 0 0 28
345A 7441 0 0 0 66
346A 4441 0 0 0 73
347A 2441 0 0 0 72
348A 3462 0 0 0 12
349A 0462 0 0 0 62
350A 0447 0 0 0 36
351A 0447 0 0 0 43
352A 3468 0 0 0 2
353A 0468 0 0 0 13
354A 0142 0 0 0 26
356A 0436 0 0 0 7
357A 0436 0 0 0 5
359A 0480 0 0 0 20
RLBL 0 47 0 0
SSBL 0 127 0 0
SSGN 0 32 0 0
SSRD 0 323 0 0
======================================================
TOTAL: 0 5071 0 6630 -
Attaching a tab delimited file to mail
Hi,
I have to attach a tab-delimited file to a mail and send it using function module 'SO_DOCUMENT_SEND_API1'.
I have filled in the following details, but I am not sure about what should be given for 'lt_packing_list-doc_type'. Should it be 'TXT' or some other file format? Someone has suggested 'CSV', but I guess for that the data should be separated by commas, and I need a tab-delimited file.
Please suggest what needs to be done.
Thank you,
Taher
DATA:
con_tab TYPE c VALUE cl_abap_char_utilities=>horizontal_tab,
con_cret TYPE c VALUE cl_abap_char_utilities=>cr_lf.
LOOP AT gt_output INTO ls_output.
CONCATENATE ls_output-email
ls_output-pos
ls_output-txt
ls_output-func
ls_output-mail
INTO gt_attach SEPARATED BY con_tab.
CONCATENATE con_cret gt_attach INTO gt_attach.
CONDENSE gt_attach.
APPEND gt_attach.
* Fill the document data.
lv_xdocdata-doc_size = 1.
* Populate the subject/generic message attributes
lv_xdocdata-obj_langu = sy-langu.
lv_xdocdata-obj_name = 'SAPRPT'.
lv_xdocdata-obj_descr = 'Report' .
* Fill the document data and get size of attachment
CLEAR lv_xdocdata.
READ TABLE gt_attach INDEX lv_xcnt.
lv_xdocdata-doc_size =
( lv_xcnt - 1 ) * 255 + STRLEN( gt_attach ).
lv_xdocdata-obj_langu = sy-langu.
lv_xdocdata-obj_name = 'SAPRPT'.
lv_xdocdata-obj_descr = ' Report'.
CLEAR lt_attachment. REFRESH lt_attachment.
lt_attachment[] = gt_attach[].
* Describe the body of the message
CLEAR lt_packing_list. REFRESH lt_packing_list.
lt_packing_list-transf_bin = space.
lt_packing_list-head_start = 1.
lt_packing_list-head_num = 0. lt_packing_list-body_start = 1.
DESCRIBE TABLE lt_message LINES lt_packing_list-body_num.
lt_packing_list-doc_type = 'RAW'.
APPEND lt_packing_list.
* Create attachment notification
lt_packing_list-transf_bin = 'X'.
lt_packing_list-head_start = 1.
lt_packing_list-head_num = 1.
lt_packing_list-body_start = 1.
DESCRIBE TABLE lt_attachment LINES lt_packing_list-body_num.
lt_packing_list-doc_type = ?????. "TXT or any other format
lt_packing_list-obj_descr = 'Report'.
lt_packing_list-obj_name = 'Report'.
lt_packing_list-doc_size = lt_packing_list-body_num * 255.
APPEND lt_packing_list.
* Add the recipients email address
LOOP AT gt_receivers INTO ls_receivers.
CLEAR lt_receivers.
lt_receivers-receiver = ls_receivers.
lt_receivers-rec_type = 'U'.
lt_receivers-com_type = 'INT'.
APPEND lt_receivers.
ENDLOOP.

Hi Taher,
Which table do you populate your data into (CONTENTS_BIN, CONTENTS_TXT, CONTENTS_HEX)?
As both the message and the attachment reside in one table (contents_txt), in it_packing_list the body of the message should start at line 1, and the attachment where the body of the message ends. This is:
* Describe the body of the message
CLEAR lt_packing_list. REFRESH lt_packing_list.
lt_packing_list-transf_bin = space.
lt_packing_list-head_start = 1.
lt_packing_list-head_num = 0. lt_packing_list-body_start = 1.
DESCRIBE TABLE lt_message LINES lt_packing_list-body_num.
lt_packing_list-doc_type = 'RAW'.
APPEND lt_packing_list.
* Create attachment notification
lt_packing_list-transf_bin = 'X'.
lt_packing_list-head_start = 1.
lt_packing_list-head_num = 1.
lt_packing_list-body_start = "start of attachment with the next line after the body of the message
The next thing is what I pointed out before: both should be passed as ASCII, so in both cases lt_packing_list-transf_bin = space.
Please see my code.
CONSTANTS: con_cret(2) type c VALUE cl_abap_char_utilities=>cr_lf,
con_tab(2) TYPE c VALUE cl_abap_char_utilities=>horizontal_tab.
DATA: doc_attr TYPE sodocchgi1,
it_packing_list TYPE TABLE OF sopcklsti1 WITH HEADER LINE,
it_message TYPE TABLE OF solisti1 WITH HEADER LINE,
it_attachment TYPE TABLE OF solisti1 WITH HEADER LINE,
it_txt TYPE TABLE OF solisti1 WITH HEADER LINE,
it_receivers TYPE TABLE OF somlreci1 WITH HEADER LINE.
"Document attributes
doc_attr-obj_name = 'EXT_MAIL'.
doc_attr-obj_descr = 'External test mail'. "title
doc_attr-obj_langu = sy-langu.
doc_attr-sensitivty = 'F'. "functional message
"Message in ASCII (txt) format
APPEND 'This is body message' TO it_message. "message is from 1 to 2
APPEND 'and second line for message.' TO it_message.
APPEND LINES OF it_message TO it_txt.
"Determine how data are distrtibuted to document and attachment
"First line in it_packing describes message body
it_packing_list-transf_bin = space. "ASCII format
it_packing_list-body_start = 1. "message body starts from 1st line
DESCRIBE TABLE it_message LINES it_packing_list-body_num. "lines in it_txt table for message body
it_packing_list-doc_type = 'RAW'.
APPEND it_packing_list.
"Attachment in ASCII (txt) format
CONCATENATE '1first' 'second' 'third' 'fourth' into it_attachment SEPARATED BY con_tab. "first line
CONCATENATE con_cret it_attachment INTO it_attachment.
APPEND it_attachment.
CONCATENATE '2first' 'second' 'third' 'fourth' into it_attachment SEPARATED BY con_tab. "second line
CONCATENATE it_attachment con_cret into it_attachment.
APPEND it_attachment.
APPEND LINES OF it_attachment TO it_txt.
"Further lines in it_packing describe attachment
CLEAR it_packing_list.
it_packing_list-transf_bin = space. "ASCII format
it_packing_list-body_start = 3.
DESCRIBE TABLE it_attachment LINES it_packing_list-body_num.
it_packing_list-doc_type = 'txt'.
it_packing_list-obj_name = 'Test attachment'.
it_packing_list-obj_descr = 'Title of an attachment'.
it_packing_list-obj_langu = sy-langu.
"size od attachment = length of last line + all remaining lines * 255
READ TABLE it_attachment INDEX it_packing_list-body_num.
it_packing_list-doc_size = STRLEN( it_attachment ) + 255 * ( it_packing_list-body_num - 1 ).
APPEND it_packing_list.
"Receivers
it_receivers-receiver = "some mail address.
it_receivers-rec_type = 'U'. "internet address
it_receivers-com_type = 'INT'. "send via internet
APPEND it_receivers.
CALL FUNCTION 'SO_NEW_DOCUMENT_ATT_SEND_API1'
EXPORTING
document_data = doc_attr
put_in_outbox = 'X'
commit_work = 'X'
* IMPORTING
* SENT_TO_ALL =
* NEW_OBJECT_ID =
TABLES
packing_list = it_packing_list
* OBJECT_HEADER =
* CONTENTS_BIN =
contents_txt = it_txt
* CONTENTS_HEX =
* OBJECT_PARA =
* OBJECT_PARB =
receivers = it_receivers.
* EXCEPTIONS
* TOO_MANY_RECEIVERS = 1
* DOCUMENT_NOT_SENT = 2
* DOCUMENT_TYPE_NOT_EXIST = 3
* OPERATION_NO_AUTHORIZATION = 4
* PARAMETER_ERROR = 5
* X_ERROR = 6
* ENQUEUE_ERROR = 7
* OTHERS = 8
IF sy-subrc = 0.
WAIT UP TO 2 SECONDS.
SUBMIT rsconn01 WITH mode = 'INT'
* WITH ouput = 'X'
AND RETURN.
ENDIF.
I am using the SO01 t-code to see the document and attachment.
I am sure that when you sort out the differences between the codes you will get the desired result.
Regards
Marcin -
Open and read from text file into a text box for Windows Store
I wish to open and read from a text file into a text box in C# for the Windows Store using VS Express 2012 for Windows 8.
Can anyone point me to sample code and tutorials specifically for Windows Store using C#.
Is it possible to add a Text file in Windows Store. This option only seems to be available in Visual C#.
Thanks
Wendel

This is a simple sample for reading/loading a text file from IsolatedStorage, and reading a file from InstalledLocation (this folder can only be read):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Windows.Storage;
using System.IO;

namespace TextFileDemo
{
    public class TextFileHelper
    {
        async public static Task<bool> SaveTextFileToIsolateStorageAsync(string filename, string data)
        {
            byte[] fileBytes = System.Text.Encoding.UTF8.GetBytes(data);
            StorageFolder local = Windows.Storage.ApplicationData.Current.LocalFolder;
            var file = await local.CreateFileAsync(filename, CreationCollisionOption.ReplaceExisting);
            try
            {
                using (var s = await file.OpenStreamForWriteAsync())
                {
                    s.Write(fileBytes, 0, fileBytes.Length);
                }
                return true;
            }
            catch
            {
                return false;
            }
        }

        async public static Task<string> LoadTextFileFormIsolateStorageAsync(string filename)
        {
            StorageFolder local = Windows.Storage.ApplicationData.Current.LocalFolder;
            string returnvalue = string.Empty;
            try
            {
                var file = await local.OpenStreamForReadAsync(filename);
                using (StreamReader streamReader = new StreamReader(file))
                {
                    returnvalue = streamReader.ReadToEnd();
                }
            }
            catch (Exception ex)
            {
                // do something when an exception occurs
            }
            return returnvalue;
        }

        async public static Task<string> LoadTextFileFormInstalledLocationAsync(string filename)
        {
            StorageFolder local = Windows.ApplicationModel.Package.Current.InstalledLocation;
            string returnvalue = string.Empty;
            try
            {
                var file = await local.OpenStreamForReadAsync(filename);
                using (StreamReader streamReader = new StreamReader(file))
                {
                    returnvalue = streamReader.ReadToEnd();
                }
            }
            catch (Exception ex)
            {
                // do something when an exception occurs
            }
            return returnvalue;
        }
    }
}
Here is how to use it:

async private void Button_Click(object sender, RoutedEventArgs e)
{
    string txt = await TextFileHelper.LoadTextFileFormInstalledLocationAsync("TextFile1.txt");
    Debug.WriteLine(txt);
}
Upload tab-delimited file from the application server to an internal table
Hello SAPients.
I'm using OPEN DATASET..., READ DATASET..., CLOSE DATASET to upload a file from the application server (SunOS). I'm working with SAP 4.6C. I'm trying to upload a tab-delimited file into an internal table, but when I try to load it, the fields are not correctly separated; in fact, they are all misplaced, and the table shows '#' where supposedly there was a tab.
I tried to SPLIT the line using, as separator, a variable that refers to CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB, but for some reason that class doesn't exist in my system.
Do you know what I'm doing wrong? or Do you know a better method to upload a tab-delimited file into an internal table?
Thank you in advance for your help.

Try:
REPORT ztest MESSAGE-ID 00.
PARAMETER: p_file LIKE rlgrap-filename OBLIGATORY.
DATA: BEGIN OF data_tab OCCURS 0,
data(4096),
END OF data_tab.
DATA: BEGIN OF vendor_file_x OCCURS 0.
* LFA1 Data
DATA: mandt LIKE bgr00-mandt,
lifnr LIKE blf00-lifnr,
anred LIKE blfa1-anred,
bahns LIKE blfa1-bahns,
bbbnr LIKE blfa1-bbbnr,
bbsnr LIKE blfa1-bbsnr,
begru LIKE blfa1-begru,
brsch LIKE blfa1-brsch,
bubkz LIKE blfa1-bubkz,
datlt LIKE blfa1-datlt,
dtams LIKE blfa1-dtams,
dtaws LIKE blfa1-dtaws,
erdat LIKE lfa1-erdat,
ernam LIKE lfa1-ernam,
esrnr LIKE blfa1-esrnr,
konzs LIKE blfa1-konzs,
ktokk LIKE lfa1-ktokk,
kunnr LIKE blfa1-kunnr,
land1 LIKE blfa1-land1,
lnrza LIKE blfa1-lnrza,
loevm LIKE blfa1-loevm,
name1 LIKE blfa1-name1,
name2 LIKE blfa1-name2,
name3 LIKE blfa1-name3,
name4 LIKE blfa1-name4,
ort01 LIKE blfa1-ort01,
ort02 LIKE blfa1-ort02,
pfach LIKE blfa1-pfach,
pstl2 LIKE blfa1-pstl2,
pstlz LIKE blfa1-pstlz,
regio LIKE blfa1-regio,
sortl LIKE blfa1-sortl,
sperr LIKE blfa1-sperr,
sperm LIKE blfa1-sperm,
spras LIKE blfa1-spras,
stcd1 LIKE blfa1-stcd1,
stcd2 LIKE blfa1-stcd2,
stkza LIKE blfa1-stkza,
stkzu LIKE blfa1-stkzu,
stras LIKE blfa1-stras,
telbx LIKE blfa1-telbx,
telf1 LIKE blfa1-telf1,
telf2 LIKE blfa1-telf2,
telfx LIKE blfa1-telfx,
teltx LIKE blfa1-teltx,
telx1 LIKE blfa1-telx1,
xcpdk LIKE lfa1-xcpdk,
xzemp LIKE blfa1-xzemp,
vbund LIKE blfa1-vbund,
fiskn LIKE blfa1-fiskn,
stceg LIKE blfa1-stceg,
stkzn LIKE blfa1-stkzn,
sperq LIKE blfa1-sperq,
adrnr LIKE lfa1-adrnr,
mcod1 LIKE lfa1-mcod1,
mcod2 LIKE lfa1-mcod2,
mcod3 LIKE lfa1-mcod3,
gbort LIKE blfa1-gbort,
gbdat LIKE blfa1-gbdat,
sexkz LIKE blfa1-sexkz,
kraus LIKE blfa1-kraus,
revdb LIKE blfa1-revdb,
qssys LIKE blfa1-qssys,
ktock LIKE blfa1-ktock,
pfort LIKE blfa1-pfort,
werks LIKE blfa1-werks,
ltsna LIKE blfa1-ltsna,
werkr LIKE blfa1-werkr,
plkal LIKE lfa1-plkal,
duefl LIKE lfa1-duefl,
txjcd LIKE blfa1-txjcd,
sperz LIKE lfa1-sperz,
scacd LIKE blfa1-scacd,
sfrgr LIKE blfa1-sfrgr,
lzone LIKE blfa1-lzone,
xlfza LIKE lfa1-xlfza,
dlgrp LIKE blfa1-dlgrp,
fityp LIKE blfa1-fityp,
stcdt LIKE blfa1-stcdt,
regss LIKE blfa1-regss,
actss LIKE blfa1-actss,
stcd3 LIKE blfa1-stcd3,
stcd4 LIKE blfa1-stcd4,
ipisp LIKE blfa1-ipisp,
taxbs LIKE blfa1-taxbs,
profs LIKE blfa1-profs,
stgdl LIKE blfa1-stgdl,
emnfr LIKE blfa1-emnfr,
lfurl LIKE blfa1-lfurl,
j_1kfrepre LIKE blfa1-j_1kfrepre,
j_1kftbus LIKE blfa1-j_1kftbus,
j_1kftind LIKE blfa1-j_1kftind,
confs LIKE lfa1-confs,
updat LIKE lfa1-updat,
uptim LIKE lfa1-uptim,
nodel LIKE blfa1-nodel.
DATA: END OF vendor_file_x.
FIELD-SYMBOLS: <field>,
<field_1>.
DATA: delim TYPE x VALUE '09'.
DATA: fld_chk(4096),
last_char,
quote_1 TYPE i,
quote_2 TYPE i,
fld_lth TYPE i,
columns TYPE i,
field_end TYPE i,
outp_rec TYPE i,
extras(3) TYPE c VALUE '.,"',
mixed_no(14) TYPE c VALUE '1234567890-.,"'.
OPEN DATASET p_file FOR INPUT.
DO.
READ DATASET p_file INTO data_tab-data.
IF sy-subrc = 0.
APPEND data_tab.
ELSE.
EXIT.
ENDIF.
ENDDO.
* count columns in output structure
DO.
ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
IF sy-subrc <> 0.
EXIT.
ENDIF.
columns = sy-index.
ENDDO.
* Assign elements of input file to internal table
CLEAR vendor_file_x.
IF columns > 0.
LOOP AT data_tab.
DO columns TIMES.
ASSIGN space TO <field>.
ASSIGN space TO <field_1>.
ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
SEARCH data_tab-data FOR delim.
IF sy-fdpos > 0.
field_end = sy-fdpos + 1.
ASSIGN data_tab-data(sy-fdpos) TO <field_1>.
* Check that numeric fields don't contain any embedded " or ,
IF <field_1> CO mixed_no AND
<field_1> CA extras.
TRANSLATE <field_1> USING '" , '.
CONDENSE <field_1> NO-GAPS.
ENDIF.
* If first and last characters are '"', remove both.
fld_chk = <field_1>.
IF NOT fld_chk IS INITIAL.
fld_lth = strlen( fld_chk ) - 1.
MOVE fld_chk+fld_lth(1) TO last_char.
IF fld_chk(1) = '"' AND
last_char = '"'.
MOVE space TO fld_chk+fld_lth(1).
SHIFT fld_chk.
MOVE fld_chk TO <field_1>.
ENDIF. " for if fld_chk(1)=" & last_char="
ENDIF. " for if not fld_chk is initial
* Replace "" with "
DO.
IF fld_chk CS '""'.
quote_1 = sy-fdpos.
quote_2 = sy-fdpos + 1.
MOVE fld_chk+quote_2 TO fld_chk+quote_1.
ELSE.
MOVE fld_chk TO <field_1>.
EXIT.
ENDIF.
ENDDO.
<field> = <field_1>.
ELSE.
field_end = 1.
ENDIF.
SHIFT data_tab-data LEFT BY field_end PLACES.
ENDDO.
APPEND vendor_file_x.
CLEAR vendor_file_x.
ENDLOOP.
ENDIF.
CLEAR data_tab.
REFRESH data_tab.
FREE data_tab.
Rob -
Why does Read from Text file default to array of 9 elements
I am writing to a text file, starting with a type-def cluster (control) of, say, 15 DBL numeric elements. That works fine: I open the tab-delimited text file and all of the elements appear in the file. However, when I read from the same text file back into the same type-def cluster (indicator), the Read from Text File defaults to 9 elements. Is there a way to control how many elements are read from the file? This all works great when I initially use a cluster of 9 elements and read back into a cluster of 9 elements.
Solved!
Go to Solution.

From the LabVIEW Help: http://zone.ni.com/reference/en-XX/help/371361G-01/glang/array_to_cluster/
Converts a 1D array to a cluster of elements of the same type as the array elements. Right-click the function and select Cluster Size from the shortcut menu to set the number of elements in the cluster.
The default is nine. The maximum cluster size for this function is 256.
Aside: so, how many times has this question been asked over the years? -
Read from text file vi won't read file...
I am very new to LV programming so I hope you forgive any stupid mistakes I am making. I am using Ver. 8.2 on an XP machine.
I have a small program that stores small data sets in text files and can update them individually or read and update them all sequentially, sending the data out a USB device. Currently I am just using two data sets, each in their own small text file. The delimiter is two commas ",,".
The program works fine as written when run in the regular development environment. I noticed, however, that as soon as I built it into an executable, the one function that reads each file sequentially to update both files would get an empty data set back from the Read From Text File VI, resulting in blank values being written back into the file. I read the values and rewrite them back to the text file to place the one updated field (price) in its proper place. Each small text file is identified and named with a 4-digit number ("ID"). I built it twice and got the same result. I also built it into an installer, and unfortunately the bug travelled into the installation as well.
Here is the overall program code in question:
Here is the reading and parsing subvi:
If you have any idea at all what could cause this I would really appreciate it!
Solved!
Go to Solution.
Hi Kiauma,
Dennis beat me to it, but here goes my two cents:
First of all, it's great to see that you're using error handling - that should make troubleshooting a lot easier. By any chance, have you observed error 7 when you try to read your files and get an empty data set? (You've probably seen that error before - it means the file wasn't found)
If you're seeing that error, the issue probably has something to do with this:
Relative paths differ in an executable. This knowledge base document sums it up pretty well. To make matters more confusing, if you ever upgrade to LabVIEW 2009 the whole scheme changes. Also, because an installer contains the executable, building the installer will always yield the same results.
Lastly, instead of parsing each set of commas with the "Match Pattern" function, there is a function called "Spreadsheet String to Array" (also on the String palette) that does exactly what you are doing, but in a single function.
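In text form, what that LabVIEW function does is roughly the following (a Java sketch of the equivalent logic, assuming one ",,"-delimited record per line; LabVIEW itself is graphical, so this is only an illustration):

```java
public class SplitDemo {
    // Split a multi-line, ",,"-delimited string into a 2D array of fields,
    // roughly what LabVIEW's "Spreadsheet String to Array" does in one call.
    static String[][] toArray(String text) {
        String[] lines = text.split("\n");
        String[][] out = new String[lines.length][];
        for (int i = 0; i < lines.length; i++) {
            out[i] = lines[i].split(",,");
        }
        return out;
    }

    public static void main(String[] args) {
        String[][] a = toArray("1234,,Widget,,9.99\n5678,,Gadget,,4.50");
        System.out.println(a[0][1] + " costs " + a[0][2]);  // Widget costs 9.99
    }
}
```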
I hope this is helpful...
Jim -
Urgent + Search a delimited file for more then one string
Hi All
Urgent, please help: I am trying to search a file for two different strings.
This is how the command prompt will look like
java match "-t|" -4 -f Harvey -10 -f Atlanta callbook.txt
Please help me work out how many search criteria have been entered on the command line.
"-t|" - denotes a delimiter
-4 - denotes the field number
-f - case sensitive
Thanks
Hi
Thanks for your quick response, and I apologise; I will never put "urgent" in my posting again.
C:\>java match "-t|" -4 -f Harvey -10 -f Atlanta callbook.txt
This example lists records for Atlanta with first name Harvey.
I am able to handle one search criterion, but when I have two, as above (Harvey and Atlanta), I get lost: I do not know how to get these arguments from the command line and still do the search correctly. My code at the moment does a hard-coded search; I was trying to extract the arguments, and that is where I got stuck.
The main aim of my app is to search a delimited file for a search criteria passed through a command prompt.
This is what I have done so far.
import java.io.*;

public class assess1
{
    public static void main(String[] args)
    {
        try
        {
            BufferedReader inFile = new BufferedReader(new FileReader(args[0]));
            String line;
            String strSearch = "HARVEY";   // search string is still hard-coded here
            int counter = 0;

            while ((line = inFile.readLine()) != null)
            {
                // split the record on the "|" delimiter and test each field
                String[] values = line.split("\\|");
                for (String str : values)
                {
                    if (str.equals(strSearch))
                    {
                        System.out.println(line);
                        System.out.println();
                        counter++;
                    }
                }
            }
            if (counter == 0)              // must be checked after the loop
            {
                System.out.println("String not found");
            }
            inFile.close();
        }
        catch (FileNotFoundException e)    // must come before IOException
        {
            System.err.println("Couldn't open " + e.getMessage());
            System.exit(1);
        }
        catch (IOException e)
        {
            System.err.println("IO exception");
            System.exit(1);
        }
        catch (NumberFormatException e)
        {
            System.err.println("Value " + e.getMessage() + " not numeric");
        }
    }
}
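One way to pull multiple field/value criteria out of that command line is sketched below. This is a hedged illustration, not the assignment's official solution: it assumes the layout shown in the example, i.e. the "-t<delimiter>" option first, the file name last, and repeating "-<fieldNumber> [-f] <value>" groups in between, where "-f" marks a case-sensitive match.

```java
import java.util.ArrayList;
import java.util.List;

public class MatchArgs {
    // One search criterion: a 1-based field number and the value it must equal.
    static class Criterion {
        final int field;
        final String value;
        Criterion(int field, String value) { this.field = field; this.value = value; }
    }

    // Parse arguments shaped like: "-t|" -4 -f Harvey -10 -f Atlanta callbook.txt
    static List<Criterion> parse(String[] args) {
        List<Criterion> criteria = new ArrayList<>();
        int i = 1;                           // args[0] is the "-t<delimiter>" option
        while (i < args.length - 1) {        // args[args.length - 1] is the file name
            int field = Integer.parseInt(args[i].substring(1));   // "-4" -> 4
            if ("-f".equals(args[i + 1])) {  // optional case-sensitive flag
                criteria.add(new Criterion(field, args[i + 2]));
                i += 3;
            } else {
                criteria.add(new Criterion(field, args[i + 1]));
                i += 2;
            }
        }
        return criteria;
    }

    public static void main(String[] args) {
        String[] demo = {"-t|", "-4", "-f", "Harvey", "-10", "-f", "Atlanta", "callbook.txt"};
        for (Criterion c : parse(demo)) {
            System.out.println("field " + c.field + " must equal " + c.value);
        }
    }
}
```

A record then matches only if every parsed criterion's field equals its value, so the search loop tests all criteria per line instead of one hard-coded string.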