Parsing a csv file with carriage return replaced with #
Hi,
We have a weird problem. We are able to download a CSV file using the standard FM HTTP_GET. We want to parse the file and upload the data into our SAP CRM system. However, in the downloaded file the carriage returns appear to be replaced by the character '#', and everything looks like one line.
I understand that the system displays the carriage return as the character '#'. My question is: if I pass this file into my program to parse the data, will there be any issue with the system recognizing that the '#' is a carriage return, and that the file contains not one record but multiple records?
Hi
'#' is just what you see in SAP; the actual ASCII value is still that of the carriage return itself. So to determine whether you have multiple records or not, don't test against a hard-coded '#', but instead use the constant CL_ABAP_CHAR_UTILITIES=>CR_LF.
Regards
Ranganath
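For illustration, the same idea as a quick Python sketch (the payload below is made up): the point is that the file contains real CR/LF control bytes, not literal '#' characters, and splitting on those bytes is what CL_ABAP_CHAR_UTILITIES=>CR_LF does for you in ABAP.

```python
# What the SAP GUI shows as '#' is really the CR/LF control pair; there is
# no literal '#' byte in the file. Split on the actual control characters.
raw = b"1000,Smith\r\n1001,Jones\r\n"  # hypothetical downloaded payload

records = [r for r in raw.split(b"\r\n") if r]
print(records)      # [b'1000,Smith', b'1001,Jones']
print(b"#" in raw)  # False
```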
Similar Messages
-
Receiver File channel for XML files: with carriage return
Hi all,
we are using a receiver FILE channel to generate an XML file that is sent to an external partner.
The XML file looks good in a parser (Internet Explorer). But in fact there are no carriage returns / line feeds between the XML tags
of the XML payload in the file.
Our partner now requires the XML file in a more vertical structure, which means: a separate line for every tag (as it is displayed in a parser).
Does anybody know a more general way to convert to a vertical XML structure (i.e. with carriage return / line feed)?
There is one entry in the SDN dealing with this topic, but it suggests using a UDF, which I think is a very specific approach.
I don't think it is a good idea to change/enhance the message mapping just because of a general formatting change.
Is it better to use an XSLT mapping as a second step in the interface mapping, or a Java adapter module to convert?
any experiences? suggestions? examples?
Thank you very much
best regards
Hans
examples:
original by XI receiver FILE adapter
<?xml version="1.0" encoding="UTF-8"?>
<MT_batchStatus><type>BS</type><header><message><messageSender>SENDER</messageSender><messageDate>20090723143720</messageDate> ... and so on
required:
<?xml version="1.0" encoding="UTF-8"?>
<MT_batchStatus>
<type>BS</type>
<header>
<message>
<messageSender>SENDER</messageSender>
<messageDate>20090723143720</messageDate>
... and so on
Hans Georg Walter wrote:
> Is it better to use an XSLT mapping as a second step in the interface mapping or a JAVA adapter module to convert ?
> any experiences? suggestions? examples?
In such a case, the best approach is to write a generic XSLT or Java mapping that does the pretty printing/formatting of the XML.
The advantage of a generic one is that you can reuse the same class/jar in many other scenarios.
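To illustrate what such a generic formatting step does, here is a minimal Python sketch (the real XI module would be Java or XSLT; the payload is a shortened, made-up sample):

```python
import xml.dom.minidom

# One-line XML as the receiver FILE adapter writes it (sample payload).
flat = ('<?xml version="1.0" encoding="UTF-8"?>'
        '<MT_batchStatus><type>BS</type><header><message>'
        '<messageSender>SENDER</messageSender></message></header></MT_batchStatus>')

# Re-serialize with indentation and line breaks, i.e. the "vertical" structure.
pretty = xml.dom.minidom.parseString(flat).toprettyxml(indent="  ")
print(pretty)
```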
So the flow in your interface mapping will be as below:
1. your specific source to target mapping
2. the generic formatting class -
Writing binary data to a file without carriage returns every 512 bytes
Is there a VI for writing binary data to a file without carriage returns being inserted every 512 bytes?
Thanks
Hi Momolxg,
I could be way off on this. I tried to simulate what you've done by
making a for loop that would run a set number of times. For my example I
used 1025. I wired the iteration terminal to a 'Write to SGL File.vi'
outside the loop with indexing enabled. It wrote the SGL data from 0 to
1024 to the file. I then read the file with a 'Read Characters from
File.vi' and searched the output for a carriage return (0D hex). It was
found five times. The reason why was the SGL number it was reading had a
13 (0D hex) in it. Perhaps you're running into a similar problem?
I tried it again, this time using the 'Write to I16 File.vi'. The
carriage return was found five times: at the 28th character the first time,
then at every 512th character four consecutive times after that. I suppose
it makes sense that you'd find a 0D in the numbers at equal spacings if
they're incrementing this way... In this case the carriage returns you're
seeing are actually numbers from your data.
One big difference is that I'm using a set pattern of numbers. This
doesn't appear to be your case. Is there a better way we can duplicate
your problem? It sounds interesting. Again my simulation could be way
off. (I'm also running this on LV60 for Linux so my results could be
different)
- Kevin
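Kevin's observation is easy to reproduce outside LabVIEW. This Python sketch packs a similar 0..1024 sequence of 16-bit integers and shows that 0x0D bytes turn up in the data itself, 512 bytes apart, without any real carriage returns being written:

```python
import struct

# Pack 16-bit integers 0..1024 as little-endian binary, analogous to the
# 'Write to I16 File.vi' run above; a 0x0D byte appears wherever one of a
# value's bytes happens to equal 13.
data = struct.pack("<1025H", *range(1025))

positions = [i for i, b in enumerate(data) if b == 0x0D]
print(positions)  # [26, 538, 1050, 1562] - 512 bytes apart, all just data
```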
In article <[email protected]>,
"momolxg" wrote:
> Is there a VI for writing binary data to a file without carriage returns
> being inserted every 512 bytes? Thanks -
How to load data with carriage return through DRM action script ?
Hello,
We are using DRM to manage Essbase metadata. These metadata contain a field for member formula.
Currently it's a string data type property in DRM, so we can't use carriage returns and our formulas are really hard to read.
But DRM supports other property data types: memo or formatted memo, where we can use carriage returns.
Then, in the export file, we can change the record delimiter to a character other than CRLF.
Our issue: we regularly use action scripts to load new metadata. How can we load data properties containing carriage returns using an action script? There is no option to change the record delimiter.
Thanks!
Hello Sandeep,
here is what I want to do through an action script: load a formula that spans more than one line.
Here, I write my formula using 4 lines, but the action script cannot load it, since one line = one record.
ChangeProp|Version_name|Hier_name|Node_name|Formula|@round(
qty*price
*m05*fy13 -
I get the batch file with details of individual jobs as the output schema. The XML file looks alright; however, when opened in Notepad it does not come with carriage returns included.
I know that we can include child delimiter if it's an input flat file converted to xml schema, but no idea on output files.
Is there any way we can include the carriage return / line feed in the schema so that it's readable when opened in Notepad?
Assuming you're talking about the output XML rather than the schema: if you open the XML in Visual Studio, press Ctrl+K, Ctrl+D and the XML will be formatted with indentation (hold the Ctrl key down while pressing the K and then the D keys).
If you want a simple command line utility to format XML for you, the following should do the trick (pass in the name of the XML file):
using System.Text;
using System.Xml;

class FormatXml
{
    static void Main(string[] args)
    {
        XmlDocument document = new XmlDocument();
        document.Load(args[0]);
        XmlTextWriter writer = new XmlTextWriter(args[0], Encoding.UTF8);
        writer.Formatting = Formatting.Indented;
        document.Save(writer);
        writer.Flush();
        writer.Close();
    }
}
NOTE: this key sequence also formats source code.
David Downing... If this answers your question, please Mark as the Answer. If this post is helpful, please vote as helpful. -
hi all,
i am working on this app, in which i need to parse a CSV file every 1hr. the CSV file is of average size. i need to parse the file (i will use a simple StringTokenizer), organise the data in the file (using simple string manipulation) and export it to some format (will worry about that later). now, what's the most efficient and quick way to do this?
and what about the 1hr loop, how should i implement that? pls help.
thanks.
ag2011 wrote:
hi all,
i am working on this app, in which i need a parse a CSV file every 1hr. now the CSV file is average size. i need to parse the file (i will use simple stringtokenizer), organise the data in the file (using simple string manipulation) and export to some format (will worry about later). now whats the most efficient and quick way to do this.
and what about the 1hr loop, how should i implement that. pls help.
thanks.
Hi,
Look at the Quartz API! It has a very efficient job scheduling engine.
SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler sched = schedFact.getScheduler();
sched.start();
// create trigger
Trigger trigger = TriggerUtils.makeHourlyTrigger(1); // fire every hour
JobDetail jobDetail = new JobDetail("myJob", "MyGrp", CSVParser.class); // CSVParser is the class where your actual CSV parser code lives
//schedule job
sched.scheduleJob(jobDetail, trigger);
i hope it helps,
see the Quartz API for more details!
http://www.opensymphony.com/quartz/
--------Amit
Edited by: AmitChalwade123456 on Jan 5, 2009 10:57 AM -
Are the older "Protected AAC files" that I have replaced with the unprotected version when I re-download them using Itunes Match?
Additionally, when troubleshooting: if you disable iTunes Match on an iOS device, you cannot simply re-enable it immediately. You must launch the Music app and let the content clear out before re-enabling the service.
-
Parsing BLOB (CSV file with special characters) into table
Hello everyone,
In my application, the user uploads a CSV file (stored as a BLOB), which is later read and parsed into a table. The parsing engine is shown below...
The problem is that it won't read national characters such as Ö, Ü etc.; they simply disappear.
Is there any CSV parser that supports national characters? Or, put another way: is it possible to read a BLOB character by character (where a character can be Ö, Ü etc.)?
Regards,
Adam
/*-----------------------------------------------+
| helper function for csv parsing
+-----------------------------------------------*/
FUNCTION hex_to_decimal(p_hex_str in varchar2) return number
--this function is based on one by Connor McDonald
--http://www.jlcomp.demon.co.uk/faq/base_convert.html
is
v_dec number;
v_hex varchar2(16) := '0123456789ABCDEF';
begin
v_dec := 0;
for indx in 1 .. length(p_hex_str) loop
v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
end loop;
return v_dec;
end hex_to_decimal;
/*-----------------------------------------------+
| csv parsing
+-----------------------------------------------*/
FUNCTION parse_csv_to_imp_table(in_import_id in number) RETURN boolean IS
PRAGMA autonomous_transaction;
v_blob_data BLOB;
n_blob_len NUMBER;
v_entity_name VARCHAR2(100);
n_skip_rows INTEGER;
n_columns INTEGER;
n_col INTEGER := 0;
n_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2(32767) := NULL;
n_rows number := 0;
n_temp number;
BEGIN
-- shortened
n_blob_len := dbms_lob.getlength(v_blob_data);
n_position := 1;
-- Read and convert binary to char
WHILE (n_position <= n_blob_len) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, n_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
n_temp := ascii(v_char);
n_position := n_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
n_rows := n_rows + 1;
if n_rows > n_skip_rows then
-- Shortened
-- Perform some action with the line (store into table etc.)
end if;
-- Clear out
v_line := NULL;
n_col := 0;
ELSIF v_char != chr(10) and v_char != chr(13) THEN
v_line := v_line || v_char;
if v_char = ';' then
n_col := n_col+1;
end if;
END IF;
END LOOP;
COMMIT;
return true;
EXCEPTION
-- some exception handling
END;
Uploading CSV files into LOB columns and then reading them in PL/SQL: http://forums.oracle.com/forums/thread.jspa?messageID=3454184 - see also Re: Reading a Blob (CSV file) and displaying the contents, Re: Associative Array and Blob, and Number of rows in a clob, doncha know.
Anyway, it would help if you gave us some basic information: the database version and NLS settings would seem particularly relevant here.
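The disappearing characters are consistent with decoding the BLOB one byte at a time. A small Python sketch (not PL/SQL, just to show the mechanism) of why byte-wise conversion breaks multi-byte characters:

```python
# 'Ö' occupies two bytes in UTF-8; converting the BLOB byte-by-byte (as the
# PL/SQL loop does via chr()) yields one wrong character per byte instead of
# one 'Ö'. Decoding the whole byte sequence with the right charset fixes it.
raw = "Öl;Ül".encode("utf-8")            # 7 bytes for 5 characters

bytewise = "".join(chr(b) for b in raw)  # one character per byte: garbled
proper = raw.decode("utf-8")             # multi-byte sequences decoded

print(len(bytewise), len(proper))  # 7 5
print(proper)                      # Öl;Ül
```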
Cheers, APC
blog: http://radiofreetooting.blogspot.com -
Help needed with Carriage Return in Unicode system
Hi Experts,
we have an issue with one of our programs regarding the carriage return that is used when creating a csv file. This issue has only come about because of the recent upgrade to ECC6.
The main change to the program was the redefinition of the line feed variable.
Existing statement was --> DATA: v_lf TYPE x VALUE '0D'.
This was then replaced with
New statement --> DATA: v_lf type c VALUE cl_abap_char_utilities=>cr_lf.
Since this change was implemented, the csv file that is created and sent on as an attachment via the function SO_NEW_DOCUMENT_ATT_SEND_API1 is no longer displayed correctly.
When looking at the attachment in transaction SOST, a message is also displayed saying the file is not in a recognizable format when selecting the file to view.
The hex value for cl_abap_char_utilities=>cr_lf is 000D000A and this seems to be causing the issue.
I've also tried using Function FI_DME_CHARATERS as I only really want to use the CR. However, the hex value for this is 000D and not 0D.
Is there any way of converting this to have a hex value of 0D?
Thanks in advance,
Chris
Hi,
Sorry, it is not CONTENTS_TXT, it is CONTENTS_BIN. Instead of CONTENTS_BIN you need to pass CONTENTS_HEX.
Check my code; I have used CONTENTS_HEX instead of CONTENTS_BIN.
DATA: l_tab_lines TYPE i,
l_string TYPE char300,
l_line TYPE string.
CONSTANTS : l_c_255(255) TYPE c VALUE '255',
l_c_txt(3) TYPE c VALUE 'TXT'.
DATA: lt_reclist TYPE STANDARD TABLE OF somlreci1, "Recipients
lt_objpack TYPE STANDARD TABLE OF sopcklsti1,
lt_objhead TYPE STANDARD TABLE OF solisti1,
lt_objtxt TYPE STANDARD TABLE OF solisti1, "Body of EMail
lt_objbin TYPE STANDARD TABLE OF solisti1."Attachment of EMail
DATA: l_doc_chng TYPE sodocchgi1, "attributes of document to send
l_reclist LIKE LINE OF lt_reclist,
l_objpack LIKE LINE OF lt_objpack,
l_obj LIKE LINE OF lt_objhead.
DATA :
l_hex LIKE solix,
lt_contents_hex LIKE STANDARD TABLE OF solix ,
conv TYPE REF TO cl_abap_conv_out_ce,
l_buffer TYPE xstring,
l_hexa(510) TYPE x.
* Completing the recipient list
l_reclist-receiver = p_emailid.
l_reclist-express = 'X'.
l_reclist-rec_type = 'U'.
APPEND l_reclist TO lt_reclist.
CLEAR l_reclist.
* Body of Email Message
APPEND l_obj TO lt_objtxt. CLEAR l_obj. " Blank line
l_obj-line = '<html>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '<body>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '<p><code>Hello,</p></code>'(t04).
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = cl_abap_char_utilities=>newline.
APPEND l_obj TO lt_objtxt. CLEAR l_obj. " Blank line
CONCATENATE
'<p><code>'(f01)
'Please click the link to access the Confirmation Form.'(t01)
'Kindly complete the same and return it to the HR contact 15 days' &
' prior to the date of confirmation of the employee.'(t02)
'</p></code>'(f02) INTO l_obj-line SEPARATED BY space.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = cl_abap_char_utilities=>newline.
APPEND l_obj TO lt_objtxt. CLEAR l_obj. " Blank line
CONCATENATE '<a href="' text-l01 text-l02 '">'
INTO l_obj-line.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
CONCATENATE '<p><code>'(f01)
'Link to Confirmation Forms'(034)
'</p></code>'(f02)
'</a>'
INTO l_obj-line.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = cl_abap_char_utilities=>newline.
APPEND l_obj TO lt_objtxt. CLEAR l_obj. " Blank line
l_obj-line = '<p><code>The details of the employees ' &
'are as follows:</p></code>'(017).
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = cl_abap_char_utilities=>newline.
APPEND l_obj TO lt_objtxt. CLEAR l_obj. " Blank line
* Table headings in the Mail Body
l_obj-line = '<table border="1">'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '<tr>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '<th><p><code>Name</p></code></th>'(030).
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '<th><p><code>Department</p></code></th>'(031).
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '<th><p><code>DOJ</p></code></th>'(032).
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '<th><p><code>Confirmation Date</p></code></th>'(033).
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '</tr>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
** Body of Email Message
* Email Attachment
* Append headings
CONCATENATE 'Employee No.'(002)
'Employee Group'(016)
'Employee Name'(001)
'Designation'(003)
'Joining Date'(004)
'Department'(005)
'Branch/Location'(006)
'Unit'(007)
'Confirmation Due'(008)
'Form sent on'(009)
'Form Return by'(010)
'Employee Group'(016)
'Qualification'(012)
'Trainee Category'(013)
INTO l_string
SEPARATED BY cl_abap_char_utilities=>horizontal_tab.
APPEND l_string TO lt_objbin.
CLEAR l_string.
APPEND INITIAL LINE TO lt_objbin.
LOOP AT p_emp_details INTO i_emp_details_line.
CONCATENATE i_emp_details_line-pnalt
i_emp_details_line-egroup
i_emp_details_line-ename
i_emp_details_line-stext
i_emp_details_line-srvdt
i_emp_details_line-ltext
i_emp_details_line-pbtxt
i_emp_details_line-btrtx
i_emp_details_line-mndat
i_emp_details_line-fsdate
i_emp_details_line-frdate
i_emp_details_line-egroup
i_emp_details_line-ptext
i_emp_details_line-ftext
INTO l_string
SEPARATED BY cl_abap_char_utilities=>horizontal_tab.
APPEND l_string TO lt_objbin.
CLEAR l_string.
l_obj-line = '<tr>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
CONCATENATE '<td>' '<p><code>'(f01)
i_emp_details_line-ename '</p></code>'(f02) '</td>'
INTO l_string.
APPEND l_string TO lt_objtxt. CLEAR l_string.
CONCATENATE '<td>' '<p><code>'(f01)
i_emp_details_line-ltext '</p></code>'(f02) '</td>'
INTO l_string.
APPEND l_string TO lt_objtxt. CLEAR l_string.
CONCATENATE '<td>' '<p><code>'(f01)
i_emp_details_line-srvdt '</p></code>'(f02) '</td>'
INTO l_string.
APPEND l_string TO lt_objtxt. CLEAR l_string.
CONCATENATE '<td>' '<p><code>'(f01)
i_emp_details_line-mndat '</p></code>'(f02) '</td>'
INTO l_string.
APPEND l_string TO lt_objtxt. CLEAR l_string.
l_obj-line = '</tr>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
ENDLOOP.
l_obj-line = ' </table>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '</body>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
l_obj-line = '</html>'.
APPEND l_obj TO lt_objtxt. CLEAR l_obj.
IF r_cprob EQ 'X'.
l_string = 'Completion of probation List'(018).
ELSE.
l_string = 'Completion of extension of probation List'(035).
ENDIF.
APPEND l_string TO lt_objhead.
* APPEND object_header.
CALL FUNCTION 'SO_RAW_TO_RTF'
TABLES
objcont_old = lt_objbin
objcont_new = lt_objbin.
LOOP AT lt_objbin INTO l_line.
conv = cl_abap_conv_out_ce=>create( encoding = 'UTF-8' endian = 'B').
CALL METHOD conv->write( data = l_line ).
l_buffer = conv->get_buffer( ).
MOVE l_buffer TO l_hexa.
MOVE l_hexa TO l_hex-line.
APPEND l_hex TO lt_contents_hex.
ENDLOOP.
* File name for attachment
CONCATENATE 'Completion of probation List'(018)
sy-datum '.XLS' INTO l_obj SEPARATED BY space.
APPEND l_obj TO lt_objhead.
* Email Body Details
CLEAR l_tab_lines.
DESCRIBE TABLE lt_objtxt LINES l_tab_lines.
l_doc_chng-doc_size = ( l_tab_lines - 1 ) * 255 +
STRLEN( l_string ). "size of doc in bytes
l_doc_chng-obj_name = sy-repid.
l_doc_chng-obj_langu = sy-langu.
l_doc_chng-obj_descr = l_string.
l_doc_chng-sensitivty = 'P'. " Send mail as a confidential
l_objpack-head_start = 1.
l_objpack-head_num = 1.
l_objpack-body_start = 1.
l_objpack-body_num = l_tab_lines.
l_objpack-doc_type = 'HTML'. "l_c_txt.
APPEND l_objpack TO lt_objpack.
CLEAR l_objpack.
* Email Attachment Details
CLEAR l_tab_lines.
DESCRIBE TABLE lt_objbin LINES l_tab_lines.
* Creation of the entry for the compressed attachment
l_objpack-transf_bin = 'X'.
l_objpack-head_start = 1.
l_objpack-head_num = 1.
l_objpack-body_start = 1.
l_objpack-doc_type = 'XLS'.
l_objpack-obj_name = l_obj.
l_objpack-obj_descr = l_obj.
l_objpack-body_num = l_tab_lines.
l_objpack-doc_size = l_tab_lines * l_c_255.
APPEND l_objpack TO lt_objpack.
CLEAR l_objpack.
* Send the document
CALL FUNCTION 'SO_DOCUMENT_SEND_API1'
EXPORTING
document_data = l_doc_chng
put_in_outbox = ' '
commit_work = 'X'
TABLES
packing_list = lt_objpack
object_header = lt_objhead
* contents_bin = lt_objbin
contents_txt = lt_objtxt
contents_hex = lt_contents_hex
receivers = lt_reclist
EXCEPTIONS
too_many_receivers = 1
document_not_sent = 2
document_type_not_exist = 3
operation_no_authorization = 4
parameter_error = 5
x_error = 6
enqueue_error = 7
OTHERS = 8. -
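To make the hex values in the question above concrete, here is a small Python sketch (Python stands in for the ABAP side, where a Unicode system stores characters as UTF-16, two bytes each):

```python
# On a Unicode (UTF-16) system every character is two bytes wide, so the
# CL_ABAP_CHAR_UTILITIES=>CR_LF pair is hex 000D000A rather than 0D0A, and
# a lone CR is 000D, never the single byte 0D the old x-typed variable held.
crlf = "\r\n"

print(crlf.encode("utf-16-be").hex().upper())   # 000D000A
print(crlf.encode("ascii").hex().upper())       # 0D0A
print("\r".encode("utf-16-be").hex().upper())   # 000D
```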
Newline/carriage return replacement
I have an output from a database where one column contains embedded newlines or carriage returns, as well as a newline at the end of each record. When I export the data to a text file, the embedded newlines/returns carry those characters over. I have been trying to use sed to replace the newlines, but it does not work. Replacing the carriage return works, but it seems to leave one character behind for either the newline or the carriage return. Is there a way to remove newlines inside a string while keeping the legitimate newline at the end of the record, using sed, awk, perl or any other tool? I considered C, but that may be overkill. I tried perl, but since I am not versed in it I have been having a difficult time with it.
This is what i have been using: sed 's/\r//g' infile > outfile
Thanks
Try tr -d '\r'
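Beyond tr: if the export can be treated as CSV with quoted fields, a short Python sketch (with a made-up sample) shows how to strip only the embedded newlines while keeping the record-ending ones:

```python
import csv, io

# Hypothetical export: row 1's second field contains an embedded newline.
dump = 'id,comment\n1,"line one\nline two"\n2,plain\n'

# The csv module keeps quoted embedded newlines inside the field; replacing
# them there removes only the inner breaks, keeping each record's newline.
rows = [[field.replace("\n", " ") for field in row]
        for row in csv.reader(io.StringIO(dump))]

out = io.StringIO()
csv.writer(out, lineterminator="\n").writerows(rows)
print(out.getvalue())
```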
-
File sender - carriage return and new line as endseparator
Hi All,
I'm stuck on a problem: setting carriage return plus new line as the end separator (record delimiter).
Scenario: the input file PI receives has a few fields containing free text, and this text includes newline characters. So, to make the record separator identifiable, the combination of carriage return and new line is added at the end of each record.
In file sender content conversion I've set the folllowing parameters to achieve this:
Row.fieldSeparator : , (comma)
Row.endSeparator : '0x0D''0x0A'
Problem: in my sample input file, the first row has a text field with a newline character. When executed, PI splits the record at the newline character inside the text field (instead of at the carriage return + new line combination set in the endSeparator parameter).
Please advise on how to resolve this.
Happy Holidays!!
amar
Rajesh, thanks for your quick response.
I'm trying with the actual file provided by the trading partner; I did not generate the file, nor did I change it.
Before identifying the combination of carriage return and new line, the adapter splits the record when it encounters a new line character within one of the text fields.
As you said it worked for you: do you have new line characters within fields in a record? Please throw some light, and let me know if you need any information.
Thank you
amar-- -
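The behaviour amar is after - splitting only on the CR+LF pair while lone newlines stay inside fields - can be sketched in a few lines of Python (the sample data is made up):

```python
# Records end with CR+LF; a lone "\n" inside a text field must not split
# the record. Splitting on the full "\r\n" pair gives 2 records, while a
# naive line split gives 3.
data = "1,note with\nan embedded newline\r\n2,plain note\r\n"

records = [r for r in data.split("\r\n") if r]
print(len(records))            # 2
print(len(data.splitlines()))  # 3 - what splitting on every newline does
```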
Problem with SQL*Loader loading long description with carriage return
I'm trying to load new items into mtl_system_items_interface via a concurrent
program running SQL*Loader. The load is erroring out due to not finding a
delimiter - I'm guessing it's having problems with the long_description.
Here's my ctl file:
LOAD
INFILE 'create_prober_items.csv'
INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
SEGMENT1 "TRIM(:SEGMENT1)",
SEGMENT2 "TRIM(:SEGMENT2)",
DESCRIPTION "TRIM(:DESCRIPTION)",
LONG_DESCRIPTION "TRIM(:LONG_DESCRIPTION)")
Here's a sample record from the csv file:
1,1,CREATE,0,546,03,B00-100289,PROBEHEAD PH100 COMPLETE/ VACUUM/COAX ,"- Linear
X axis, Y,Z pivots
- Movement range: X: 8mm, Y: 6mm, Z: 25mm
- Probe tip pressure adjustable contact
- Vacuum adapter
- With shielded arm
- Incl. separate miniature female HF plug
The long_description has to appear as:
- something
- something
It can't appear as:
-something-something
Here's the errors:
Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
LONG_DESCRIPTION.
Logical record ended - second enclosure character not present
Record 2: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
ORGANIZATION_ID.
Column not found before end of logical record (use TRAILING NULLCOLS)
I've asked for help on the Metalink forum and was advised to add trailing nullcols to the ctl so the ctl line now looks like:
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
I don't think this was right because now I'm getting:
Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column LONG_DESCRIPTION.
Logical record ended - second enclosure character not present
Thanks for any help that may be offered.
-Tracy
LOAD
INFILE 'create_prober_items.csv'
CONTINUEIF LAST <> '"'
INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
(PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
SEGMENT1 "TRIM(:SEGMENT1)",
SEGMENT2 "TRIM(:SEGMENT2)",
DESCRIPTION "TRIM(:DESCRIPTION)",
LONG_DESCRIPTION "REPLACE (TRIM(:LONG_DESCRIPTION), '-', CHR(10) || '-')") -
Reading/Parsing a CSV file in UTF-16 ?
Hello everyone,
I'm in a rush to modify my current CSV file parser, which works fine for files in UTF-8, so that it can parse UTF-16 as well. As far as I checked the sample plugins, I didn't find any code for this.
Also, how could I support both encodings? To do this I need to recognize the encoding by reading the file first, then decide how to read from the stream. Any advice/snippet will be greatly appreciated.
P.S. I'm using this code to read a file
stream = StreamUtils::CreateFileStreamRead()
stream ->XferByte(aChar) // in a loop till find a eol char
I need to read 2 bytes at a time; I had some experiments with XferInt16, but it seems it doesn't do what I want...
Regards,
Kamran
I had forgotten to skip the first two bytes (the byte-order mark) in this case; now I can read the file properly with XferInt16. You may also need to consider byte swapping for big-endian files during parsing.
-Kamran -
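A common way to support both encodings is to sniff the byte-order mark before parsing. Here is a Python sketch of the idea (the plugin itself would do the equivalent with its own stream API):

```python
import codecs

def sniff_encoding(first_bytes):
    """Guess the codec from the byte-order mark (BOM), if present."""
    if first_bytes.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
        return "utf-16"      # this codec consumes the BOM and picks endianness
    if first_bytes.startswith(codecs.BOM_UTF8):
        return "utf-8-sig"   # consumes the UTF-8 BOM
    return "utf-8"           # no BOM: assume UTF-8

sample = codecs.BOM_UTF16_LE + "a,b\r\n".encode("utf-16-le")
print(sniff_encoding(sample))      # utf-16
print(sniff_encoding(b"a,b\r\n"))  # utf-8
```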
Here is the transcript of a chat with Firefox community member zzxc on May 3/10
You are now chatting with Firefox community member zzxc
zzxc: Hello
zzxc: What happens when you attempt to download a .csv file?
seegal: hello
seegal: it doesn't copy
zzxc: how are you trying to copy?
seegal: pls bear with me I'm a slow typist. Just copy selected text
Biolizard has joined the conversation.
zzxc: ok - which text are you selecting?
seegal: I reconcile my checkbook (spreadsheet this way). I copy the items in my online bank acc and paste it to the spreadsheet
seegal: I'm using Firefox /2.0.0.19. Have no problem to do this.
zzxc: Which version of OS X?
seegal: In all newer version nothing happens when trying to paste- just doesn't paste
zzxc: Firefox 2.0.0.x is no longer supported, and hasn't been supported in over a year
zzxc: Paste into Excel, from Firefox?
seegal: Sorry, I'm ahead... /2.0.0.19
seegal: Yes. I open my bank acc in Firefox
zzxc: Which version of Excel?
zzxc: It would really help if you could tell me step by step what you're doing.
seegal: First re: your previous question: it's OS 10.4.11
seegal: About Excel: it's the 2004 version - the last one produced for Macs. The specific version is 11.3.7
seegal: So I open my bank acc online in Firfox (my primary browser). I copy the latest entry in the account and paste it into my Excel spreadsheet.
zzxc: so, you copy direct from the web page without downloading a CSV file?
seegal: What do you mean by downloading to CSV file? I could export from Firefox to the CSV file, but the other way around?
zzxc: Are you copying your bank statement directly from the web site to Excel using the clipboard?
seegal: I don't use the clipboard. This is a Mac; there is no need to do that. On a PC it would be yes.
zzxc: I need to know the exact steps you're taking to get them into excel
zzxc: And I need to know what exactly goes wrong in the latest version of Firefox.
seegal: Do you have a mac there with Firefox and Excel? It would be very easy to reproduce. Imagine you open an online bank acc, select some entries , click "copy", than proceed to your already open Excel spreadsheet and click' paste". That's it!
zzxc: When this happens, do you get cryptic code pasted into Excel?
seegal: As I said before: in all newer versions starting with 3.0 when I go to Excel to "paste" from my bank acc nothing happens. It does not paste. No, I don't get a cryptic code pasted, just NOTHING.
zzxc: what if you paste into MS Word instead?
seegal: haven't tried that, the formatting most likely would be lost. Tried that with another Excel spreadsheet - it lost all the formatting and pasted as continuous text.
== This happened ==
Every time Firefox opened
== Please see the copy of the chat above. THIS IS A MAC OS X. In older versions, prior to 3.0, I could copy from the CSV file on the website (bank acc) and paste directly into my Excel spreadsheet to reconcile my account.
See also:
Table2Clipboard: https://addons.mozilla.org/firefox/addon/1852
TableTools: https://addons.mozilla.org/firefox/addon/2637 -
Problem with "carriage Return" or "Line Feed" in a table
Hello,
I need help with the function Zeichen(), as it is called in German; I'm not sure if it is CHAR() in English.
In Pages version 4.0.1 (746) I created a table using this function to produce a carriage return in a cell.
Here is an example: { ="Hello" & Zeichen(10) & "World" }; the cell now shows:
| Hello |
| World |
But in Pages version 4.0.3 (766) the function ZEICHEN() no longer allows the number 10, or any number up to 32 (I already read the manual and understand this problem).
Does anybody have an idea how to make a carriage return or line feed in a formula, in a cell, in a table, in Pages?
Thanks
Detlev Kormann
kdetlev wrote:
Your workaround will not be possible in my table, because there is no empty cell in the table, but I think it is also a good help too.
*In fact I forgot that you are using a table in Pages.*
Remaining on my first idea, In Numbers we may achieve the same goal using an auxiliary table with a single cell containing the needed line break.
If this table is named "LineBreak",
the formula will be:
="Hello"&LineBreak :: $A$1&"World"
*In Pages, here is my workaround:*
Enter the cell
type a§b
don't type the character § but ctrl + return.
The cell will contain
a
b
with the arrow, move before the a
type =B2&"
delete the original a
move to the right
delete the original b
type "&C7
Yvan KOENIG (VALLAURIS, France) mercredi 7 octobre 2009 17:20:04