SQL*Loader characterset
Database: 10.2.0.1
sql> select * from nls_database_parameters t where t.parameter like '%CHARACTERSET%';
PARAMETER VALUE
==============================================
NLS_CHARACTERSET AL32UTF8
NLS_NCHAR_CHARACTERSET AL16UTF16
OS: RHEL5
$ export | grep LANG
declare -x LANG="en_US.UTF-8"
declare -x NLS_LANG="GERMAN_GERMANY.WE8ISO8859P1"
I have a file in the latin1 codepage.
When I create an external table (organization external, type oracle_loader), I have problems loading the Ü character from this file into the database:
a;b;c;Ü;d <- row inserted
Ü;a;b;c;d <- row inserted
a;b;c;d;Ü <- row failed to insert
KUP-04021: field formatting error for 1 field
KUP-04101: record 1 rejected in file latin1.csv
When I explicitly create the external table with the "characterset WE8ISO8859P1" clause, the latin1 file loads successfully.
When I load a UTF8 file using an external table created without the "characterset WE8ISO8859P1" clause, that file also loads OK.
I want to understand the process: why does the characterset conversion from the latin1 file (WE8ISO8859P1) to the UTF8 database fail for some rows?
Is it possible to import latin1 files without specifying the codepage in the external table DDL? (It seems the NLS_LANG environment variable does not affect SQL*Loader external tables.)
From documentation:
Specifying the CHARACTERSET parameter tells SQL*Loader the character set of the input datafile. The default character set for all datafiles, if the CHARACTERSET parameter is not specified, is the session character set defined by the NLS_LANG parameter. Only character data (fields in the SQL*Loader datatypes CHAR, VARCHAR, VARCHARC, numeric EXTERNAL, and the datetime and interval datatypes) is affected by the character set of the datafile.
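What happens at the byte level can be sketched outside Oracle. The following is purely illustrative Python (latin-1 stands in for WE8ISO8859P1): when no CHARACTERSET is given, the loader effectively validates the raw bytes against the database characterset, and the single latin1 byte for Ü is not a valid UTF-8 sequence.

```python
# Illustration only, not Oracle code: why a latin1 byte can be rejected
# when the data file bytes are validated as AL32UTF8 (UTF-8).
latin1_bytes = "Ü".encode("latin-1")      # 'Ü' in WE8ISO8859P1 is one byte
assert latin1_bytes == b"\xdc"

# 0xDC on its own is not a valid UTF-8 sequence, so a UTF-8 validator rejects it:
try:
    latin1_bytes.decode("utf-8")
    raise AssertionError("should not decode")
except UnicodeDecodeError:
    pass  # rejected, analogous to KUP-04021 on that field

# Declared correctly, the same byte round-trips; in UTF-8 the character
# needs two bytes, which is why conversion changes field lengths:
assert latin1_bytes.decode("latin-1") == "Ü"
assert "Ü".encode("utf-8") == b"\xc3\x9c"
```

This is why adding "characterset WE8ISO8859P1" to the DDL fixes the load: it tells the access driver to convert the bytes instead of validating them as UTF-8.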
Edited by: lynx™ on 15.07.2010 7:46
Hi Andre,
this is how I teach my classes normally!
You need to be aware of some changes you make with NLS_TERRITORY, which is part of NLS_LANG:
--> if you set NLS_TERRITORY to another value, you implicitly change the settings for NLS_NUMERIC_CHARACTERS. Although it is the same for America and Australia, obviously the first character here is the decimal separator and the second one is the group separator; this can destroy all numeric values if it is set improperly!! Also implicitly changed are:
NLS_DATE_FORMAT,
NLS_TIMESTAMP_FORMAT,
AND NLS_CURRENCY.
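The NLS_NUMERIC_CHARACTERS danger can be shown with a quick sketch (Python, illustrative only): the same literal means two different numbers depending on which character is the decimal separator.

```python
# '1.234' under NLS_NUMERIC_CHARACTERS = '.,' (e.g. America/Australia):
# '.' is the decimal separator.
raw = "1.234"
as_decimal_point = float(raw)
assert as_decimal_point == 1.234

# Under ',.' (e.g. Germany): '.' is the group separator, ',' the decimal,
# so the same field is read as one thousand two hundred thirty-four.
as_group_separator = float(raw.replace(".", "").replace(",", "."))
assert as_group_separator == 1234.0
```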
SYS @10gR2 SQL> select * from v$nls_parameters;
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_CHARACTERSET WE8ISO8859P1
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
19 rows selected.
SYS @10gR2 SQL > alter session set NLS_TERRITORY=australia;
SYS @10gR2 SQL > select * from v$nls_parameters;
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AUSTRALIA
NLS_CURRENCY $
NLS_ISO_CURRENCY AUSTRALIA
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD/MON/RR
NLS_DATE_LANGUAGE AMERICAN
NLS_CHARACTERSET WE8ISO8859P1
NLS_SORT BINARY
NLS_TIME_FORMAT HH12:MI:SSXFF AM
NLS_TIMESTAMP_FORMAT DD/MON/RR HH12:MI:SSXFF AM
NLS_TIME_TZ_FORMAT HH12:MI:SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD/MON/RR HH12:MI:SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
Similar Messages
-
Using SQL*Loader to Load Russian and Chinese Characters
We are testing our new 11.2.0.1 database using Oracle Linux 6. We created the database using the AL32UTF8 NLS Character set. We have tried using sqlldr to insert a few records that contain Russian and Chinese characters as a test. We can not seem to get them into the database in the correct format. For example, we can see the correct characters in the file we are trying to load on the Linux server, but once we load them into a table in the database, some of the characters are not displayed correctly (using SQL*Developer to select them out).
We can set the values within a column by inserting them into the table directly, and when we select them back they are correct, so it appears the problem is not in the database but in the way sqlldr inserts them. We have tried several settings on the Linux server for the NLS_LANG environment variable (AMERICAN_AMERICA.AL32UTF8, AMERICAN_AMERICA.UTF8, etc.) without success.
Can someone provide us with any guidance on this? Would really appreciate any advice as to what we are not getting here.
Thanks!!
The characterset of the database does not change the character set used in your input data file. The character set of the datafile can be set up by using the NLS_LANG parameter or by specifying the SQL*Loader CHARACTERSET parameter. I suggest moving this question to the Export/Import/SQL Loader & External Tables forum for closer topic alignment.
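The "wrong characters after loading" symptom is classic mojibake from a characterset mismatch, and it is reversible as long as no bytes were lost. A purely illustrative Python sketch (latin-1 stands in for a single-byte session characterset such as WE8ISO8859P1):

```python
original = "Привет"                      # Russian text stored as UTF-8 in the file
utf8_bytes = original.encode("utf-8")

# Loaded under a session that assumes a single-byte characterset,
# every byte becomes a bogus accented character:
mojibake = utf8_bytes.decode("latin-1")
assert mojibake != original

# Declaring the correct characterset (NLS_LANG or CHARACTERSET UTF8)
# recovers the text losslessly from the same bytes:
assert mojibake.encode("latin-1").decode("utf-8") == original
```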
-
How to Import data via SQL Loader with characterset UTF16 little endian?
Hello,
I'm importing data from text file into one of my table which contains blob column.
I've specified following in my control file.
-----Control file-------
LOAD DATA
CHARACTERSET UTF16
BYTEORDER LITTLE
INFILE './DataFiles/Data.txt'
BADFILE './Logs/Data.bad'
INTO TABLE temp_blob truncate
FIELDS TERMINATED BY " "
TRAILING NULLCOLS
(GROUP_BLOB,CODE)
Problem:
SQL*Loader is always importing the data as big endian. Is there any method available by which we can convert these data to little endian?
Thanks
A new preference has been added to customize the import delimiter in the main code line. This should be available as part of a future release.
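For reference, the difference between the two byte orders, and what the BYTEORDER/BOM machinery is deciding, can be sketched in Python (illustrative only):

```python
text = "AB"
# UTF-16LE stores the low byte of each code unit first; UTF-16BE the high byte:
assert text.encode("utf-16-le") == b"A\x00B\x00"
assert text.encode("utf-16-be") == b"\x00A\x00B"

# A byte-order mark of 0xFF 0xFE at the start of the stream marks
# little-endian data; a BOM-prefixed stream decodes correctly without
# the order being declared explicitly:
assert b"\xff\xfeA\x00B\x00".decode("utf-16") == "AB"
```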
-
Loading "fixed length" text files in UTF8 with SQL*Loader
Hi!
We have a lot of files, we load with SQL*Loader into our database. All Datafiles have fixed length columns, so we use POSITION(pos1, pos2) in the ctl-file. Till now the files were in WE8ISO8859P1 and everything was fine.
Now the source-system generating the files changes to unicode and the files are in UTF8!
The SQL*Loader documentation says: "The start and end arguments to the POSITION parameter are interpreted in bytes, even if character-length semantics are in use in a datafile....."
As I see it now, there is no way to say "column A starts at character position pos1 and ends at character position pos2".
I tested with
load data
CHARACTERSET AL32UTF8
LENGTH SEMANTICS CHARACTER
replace ...
in the .ctl file, but as soon as the first character with a multi-byte encoding (for example ü) appears in the file, all positions of that record are shifted.
Is there a way to load these files in UTF8 without changing the file definition to use a column separator?
Thanks for any hints - charly
I have not tested this but you should be able to achieve what you want by using LENGTH SEMANTICS CHARACTER and by specifying field lengths (e.g. CHAR(5)) instead of only their positions. You could still use the POSITION(*+n) syntax to skip any separator columns that contain only spaces or tabs.
If the above does not work, an alternative would be to convert all UTF8 files to UTF16 before loading so that they become fixed-width.
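The byte-vs-character position problem described above can be sketched in Python (illustrative only):

```python
record = "müa" + "bcd"        # two fixed-width 3-character columns
data = record.encode("utf-8")

# 'ü' occupies two bytes in UTF-8, so the first column is 4 bytes, not 3 ...
assert len(record[:3].encode("utf-8")) == 4

# ... and slicing the first 3 *bytes* (what POSITION does) cuts it short,
# shifting every following column of the record:
assert data[:3].decode("utf-8") == "mü"

# Character-based slicing (what LENGTH SEMANTICS CHARACTER with CHAR(n)
# field lengths achieves) keeps both columns aligned:
assert record[:3] == "müa" and record[3:6] == "bcd"
```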
-- Sergiusz -
How to Load Arabic Data from flat file using SQL Loader ?
Hi All,
We need to load Arabic data from an xls file to Oracle database, Request you to provide a very good note/step to achieve the same.
Below are the database parameters used
NLS_CHARACTERSET AR8ISO8859P6
nls_language american
DB version:-10g release 2
OS: rhel 5
Thanks in advance,
Satish
Try to save your XLS file in CSV format and either set NLS_LANG to the right value or use the SQL*Loader control file parameter CHARACTERSET.
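A quick illustrative check (Python, not Oracle code) of why AR8ISO8859P6 works here: Arabic letters fit in that single-byte set, whereas in a UTF-8 database each letter expands to two bytes, which matters for BYTE-semantics column lengths.

```python
text = "مرحبا"                                       # 5 Arabic characters
assert len(text.encode("iso-8859-6")) == len(text)   # one byte each in AR8ISO8859P6
assert len(text.encode("utf-8")) == 2 * len(text)    # two bytes each in UTF-8
```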
See http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_control_file.htm#i1005287 -
SQL Loader Multibyte character error
Hello,
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
Database Characterset: WE8MSWIN1252
To load a UTF-8 file (UTF-8 Unicode text) I use the option CHARACTERSET UTF8 in the control file.
All went fine until a column in the text file was filled up with 40 characters
and German umlauts included (ÜÖÄüöä...).
Loader stops (errors=0) and gives:
Record 146466: Rejected - Error on table 'TableName', column 'ColumnName'.
Multibyte character error
* use CHAR(40) and POSITION(start-end) for field description - no help
* modify column, from VC2(40 char) to VC2(50 char) - no help
* without characterset UTF8 option i got wrong characters for german umlaute - no help
* manually inserting the data from this row works with NO problems!
any hint or workaround?
Regards
Michael
Hi Werner,
on my linux desktop:
$ file test.dat
test.dat: UTF-8 Unicode text, with very long lines
my colleague is working on a windows system.
On both systems exactly the same error from SQL*Loader.
Btw, I tried with different numbers of special characters (German umlauts and euro), and there is no way to load without the error
when there are too many (?) special characters or the data is as long as the column length with special characters included.
Regards
Michael -
SQL Loader Multibyte character error, LENGTH SEMANTICS CHARACTER
Hi,
I started the thread SQL Loader Multibyte character error
{thread:id=2340726}
some mod locked the thread, why?
the solution for others:
add LENGTH SEMANTICS CHARACTER to the controlfile
LOAD DATA characterset UTF8 LENGTH SEMANTICS CHARACTER
TRUNCATE
INTO TABLE utf8file_to_we8mswin1252
(
ID CHAR(1)
, TEXT CHAR(40)
)
Regards
Michael
-
SQL Loader and Error ORA-01847/ORA-01839
Hi,
While using the direct loading in SQL-LOADER when we get the ORA-01847/ORA-01839 all the other records are getting errorred out. It goes fine with the conventional loading.
Should I use some parameters or anything to make sure that all the other records are not rejected when we get the ORA-01847/ORA-01839 error while going with the DIRECT loading.
Thanks
Jibin
On the internet I found this short message:
"AL32UTF8 is a multi-byte characterset; that means some characters are stored in more than 1 byte, which is true for these special characters.
If you have the same table definitions in both databases you will likely face error ORA-12899.
This Metalink note discusses the problem; it is also applicable to sqlldr:
Import reports "ORA-12899: Value too large for column" when using BYTE semantics
Doc ID: Note:563893.1"
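The byte-expansion behind ORA-12899 can be sketched in Python (illustrative only): a value that fits a VARCHAR2(n BYTE) column in a single-byte database can overflow the same definition after conversion to AL32UTF8.

```python
value = "äöüäö"                              # 5 characters
assert len(value) == 5
assert len(value.encode("latin-1")) == 5     # fits VARCHAR2(5 BYTE) in WE8ISO8859P1
assert len(value.encode("utf-8")) == 10      # overflows the same column in AL32UTF8
```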
On Metalink, I can see the note is linked to an Oracle internal bug for Oracle 11g.....
I'm waiting for your suggestion... thanks very much in advance.
Regards.
Giovanni -
SQL Loader and foreign characters in the data file problem
Hello,
I have run into an issue which I can't find an answer for. When I run SQL Loader, one of my control files is used to get file content (LOBFILE) and one of the fields in the data file has a path to that file. The control file looks like:
LOAD DATA
INFILE 'PLACE_HOLDER.dat'
INTO TABLE iceberg.rpt_document_core APPEND
FIELDS TERMINATED BY ','
(
doc_core_id "iceberg.seq_rpt_document_core.nextval",
-- created_date POSITION(1) date "yyyy-mm-dd:hh24:mi:ss",
created_date date "yyyy-mm-dd:hh24:mi:ss",
document_size,
hash,
body_format,
is_generic_doc,
is_legacy_doc,
external_filename FILLER char(275) ENCLOSED by '"',
body LOBFILE(external_filename) terminated by EOF
)
A sample data file looks like:
0,2012-10-22:10:09:35,21,BB51344DD2127002118E286A197ECD4A,text,N,N,"E:\tmp\misc_files\index_testers\foreign\شیمیایی.txt"
0,2012-10-22:10:09:35,17,CF85BE76B1E20704180534E19D363CF8,text,N,N,"E:\tmp\misc_files\index_testers\foreign\ลอบวางระเบิด.txt"
0,2012-10-22:10:09:35,23552,47DB382558D69F170227AA18179FD0F0,binary,N,N,"E:\tmp\misc_files\index_testers\foreign\leesburgis_á_ñ_é_í_ó_ú_¿_¡_ü_99.doc"
0,2012-10-22:10:09:35,17,83FCA0377445B60CE422DE8994900A79,binary,N,N,"E:\tmp\misc_files\index_testers\foreign\làm thế nào bạn làm ngày hôm nay"
The problem is that when I run this, SQL*Loader throws an error that it can't find the file. It appears that it can't interpret the foreign characters in a way that allows it to find that path. I have tried adding a CHARACTERSET (using AL32UTF8 or UTF8) value in the control file, but that only has some success with Western languages, not the ones listed above. Also, there is no fixed set of languages that could appear in the data file; it could essentially be any language.
Does anyone know if there is a way to somehow get SQL*Loader to "understand" the file system paths when a folder and/or file name could be in some other language?
Thanks for any thoughts - Peter
Thanks for the reply Harry. If I try to open the file in various text editors like Wordpad, Notepad, GVIM, and Textpad, they all display the foreign characters differently. Only Notepad comes close to displaying the characters properly. I have a C# app that will read the file and display the contents, and it renders it fine. If you look at the directory of files in Windows Explorer, they are all displayed properly. So it seems things like .NET and Windows have some mechanism to understand the characters in order to render them properly. Other applications, again like Wordpad, do not.
It would seem that whatever SQL*Loader is using to "read" the data files is also not interpreting the characters properly, which prevents it from finding the directory path to the file. If I add "CHARACTERSET AL32UTF8" in the control file, all is fine when dealing with Western languages (e.g. German, Spanish) but not for the Eastern languages (e.g. Thai, Chinese). So telling SQL*Loader to use a characterset seems to work, but not in all cases. AL32UTF8 is the characterset that the Oracle database was created with. I have not had any luck trying to set the CHARACTERSET to, for example, whatever the Thai character set is. The problem there, though, is that even if that did work, I can't target specific languages because the data could come from anywhere. It's like I need some sort of global "superset" characterset to use. It seems like CHARACTERSET is the right track to follow, but I am not sure, and even if it is, is there a way to handle all languages?
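A byte-level sketch of the symptom (Python, purely illustrative; latin-1 stands in for a single-byte session characterset): if the loader decodes the path bytes with the wrong characterset, the resulting string, re-encoded for the filesystem lookup, names a byte sequence that matches no directory entry.

```python
path = "E:\\tmp\\misc_files\\ลอบวางระเบิด.txt"
stored = path.encode("utf-8")            # bytes as written into the data file

# A session assuming a single-byte characterset reads those bytes as
# one bogus character per byte:
misread = stored.decode("latin-1")
assert misread != path

# Re-encoded for the filename lookup, the misread string produces
# different bytes, so the file is never found:
assert misread.encode("utf-8") != stored
```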
Thanks - Peter -
SQL Loader is creating a log file of 0 (zero) bytes.
Hello!!
I am using SQL Loader to load data from a .txt file to a Oracle table.
Following is the control file:
LOAD DATA
CHARACTERSET UTF8
CONTINUEIF LAST != "|"
INTO TABLE product_review_dtl
FIELDS TERMINATED BY '||' TRAILING NULLCOLS
(
indiv_review_id INTEGER EXTERNAL,
pid INTEGER EXTERNAL,
merchant_review_id INTEGER EXTERNAL,
merchant_user_id CHAR "SUBSTR(:merchant_user_id,1,20)",
review_status_txt CHAR "SUBSTR(:review_status_txt,1,20)",
review_create_date DATE "YYYY-MM-DD",
helpful_votes_cnt INTEGER EXTERNAL,
not_helpful_votes_cnt INTEGER EXTERNAL,
review_source_txt CHAR "SUBSTR(:review_source_txt,1,30)",
overall_rating_num INTEGER EXTERNAL,
comment_txt CHAR(4000) "SUBSTR(:comment_txt,1,4000)",
nickname CHAR "SUBSTR(:nickname,1,30)",
headline_txt CHAR "SUBSTR(:headline_txt,1,100)",
confirmed_status_grp INTEGER EXTERNAL "TO_NUMBER(SUBSTR(TO_CHAR(:confirmed_status_grp),1,5))",
location_txt CHAR "SUBSTR(:location_txt,1,100)"
)
Some records are loaded. A log file is also created, but it is empty. Can you help me find out why the log file is empty?
user525235 wrote:
Hello Folks!!
I have 2 input files with different encoding (apparent in case of special characters).
File 1 loads successfully. For File 2 loader gives a memory fault while loading. Hence the log file is of 0 bytes. I still have no clue as to why is the loader giving a memory fault. It is not an OS level memory fault as analysed by the OS team. Please help!
Thanks in advance :)
Unknown OS
Unknown database version
No details about what import command was used or the options specified
No samples / details of input files or their encoding
No details about exact error message of "memory fault"
No help is possible ;-)
Srini -
SQL*Loader problem - not efficient, parsing error for big xml files
Hi Experts,
First of all, I would like to store XML files in an object-relational way, therefore I created a schema and a table for it (see below).
I want to populate it (by using generated XML files), hence I created a control file for SQL*Loader (see below).
I have two problems for it.
1, It takes a lot of time. It means I can upload a ~80MB file in 2 hours and a half.
2, At bigger files, I got the following error messages (OCI-31011: XML parsing failed OCI-19202: Error occurred in XML processing LPX-00243: element attribute value must be enclosed in quotes). It is quite interesting because my xml file is generated and I could generated and uploaded the first and second half of the file.
Can you help me to solve these problems?
Thanks,
Adam
Control file
UNRECOVERABLE
LOAD DATA
CHARACTERSET UTF8
INFILE *
APPEND
INTO TABLE coll_xml_objrel
XMLTYPE(xml)
FIELDS
(
ident constant 2
,file_name filler char(100)
,xml LOBFILE (file_name) TERMINATED BY EOF
)
BEGINDATA
generated1000x10000.xml
Sql Loader command
sqlldr.exe username/password@//localhost:1521/SID control='loader.ctl' log='loadr.log' direct=true
Schema
<?xml version="1.0" encoding="UTF-8"?>
<schema targetNamespace="http://www.something.com/shema/simple_searches" elementFormDefault="qualified" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://www.something.com/shema/simple_searches">
<element name="searches" type="tns:searches_type"></element>
<element name="search" type="tns:search_type"></element>
<element name="results" type="tns:results_type"></element>
<element name="result" type="tns:result_type"></element>
<complexType name="searches_type">
<sequence>
<element ref="tns:search" maxOccurs="unbounded"></element>
</sequence>
</complexType>
<complexType name="search_type">
<sequence>
<element ref="tns:results"></element>
</sequence>
<attribute ref="tns:id" use="required"></attribute>
<attribute ref="tns:type" use="required"></attribute>
</complexType>
<complexType name="results_type">
<sequence maxOccurs="unbounded">
<element ref="tns:result"></element>
</sequence>
</complexType>
<complexType name="result_type">
<attribute ref="tns:id" use="required"></attribute>
</complexType>
<simpleType name="type_type">
<restriction base="string">
<enumeration value="value1"></enumeration>
<enumeration value="value2"></enumeration>
</restriction>
</simpleType>
<attribute name="type" type="tns:type_type"></attribute>
<attribute name="id" type="string"></attribute>
</schema>
Create table
create table coll_xml_objrel (
ident Number(20) primary key,
xml xmltype)
Xmltype column xml
store as object relational
xmlschema "http://www.something.com/schema/simple_searches.xsd"
Element "searches";
Hi Odie_63,
Thanks for your answer.
I will post this question in the XML DB forum too (edit: I realized that you have done it. Thanks for it).
1, Version: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
2, see above
3, I have registered my schema with using dbms_xmlschema.registerSchema function.
Cheers,
Adam
XML generator:
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;
public class mainGenerator {
    public static void main(String[] args) throws FileNotFoundException, XMLStreamException {
        final long numberOfSearches = 500;
        final long numberOfResults = 10000;
        XMLOutputFactory xof = XMLOutputFactory.newFactory();
        XMLStreamWriter writer = xof.createXMLStreamWriter(new FileOutputStream("C:\\Working\\generated500x10000.xml"));
        writer.writeStartDocument();
        writer.writeStartElement("tns", "searches", "http://www.something.com/schema/simple_searches");
        writer.writeNamespace("tns", "http://www.something.com/schema/simple_searches");
        for (long i = 0; i < numberOfSearches; i++) {
            Long help = new Long(i);
            writer.writeStartElement("tns", "search", "http://www.something.com/schema/simple_searches");
            writer.writeAttribute("tns", "http://www.something.com/schema/simple_searches", "type", "value1");
            writer.writeAttribute("tns", "http://www.something.com/schema/simple_searches", "id", help.toString());
            writer.writeStartElement("tns", "results", "http://www.something.com/schema/simple_searches");
            for (long j = 0; j < numberOfResults; j++) {
                writer.writeStartElement("tns", "result", "http://www.something.com/schema/simple_searches");
                Long helper = new Long(i * numberOfResults + j);
                writer.writeAttribute("tns", "http://www.something.com/schema/simple_searches", "id", helper.toString());
                writer.writeEndElement(); // result
            }
            writer.writeEndElement(); // results
            writer.writeEndElement(); // search
        }
        writer.writeEndElement(); // searches
        writer.writeEndDocument();
        writer.close();
    }
}
registerSchema:
begin
dbms_xmlschema.registerSchema(
'http://www.something.com/schema/simple_searches',
'<?xml version="1.0" encoding="UTF-8"?>
<schema targetNamespace="http://www.something.com/schema/simple_searches" elementFormDefault="qualified" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://www.something.com/schema/simple_searches">
<element name="searches" type="tns:searches_type"></element>
<element name="search" type="tns:search_type"></element>
<element name="results" type="tns:results_type"></element>
<element name="result" type="tns:result_type"></element>
<complexType name="searches_type">
<sequence>
<element ref="tns:search" maxOccurs="unbounded"></element>
</sequence>
</complexType>
<complexType name="search_type">
<sequence>
<element ref="tns:results"></element>
</sequence>
<attribute ref="tns:id" use="required"></attribute>
<attribute ref="tns:type" use="required"></attribute>
</complexType>
<complexType name="results_type">
<sequence maxOccurs="unbounded">
<element ref="tns:result"></element>
</sequence>
</complexType>
<complexType name="result_type">
<attribute ref="tns:id" use="required"></attribute>
</complexType>
<simpleType name="type_type">
<restriction base="string">
<enumeration value="value1"></enumeration>
<enumeration value="value2"></enumeration>
</restriction>
</simpleType>
<attribute name="type" type="tns:type_type"></attribute>
<attribute name="id" type="string"></attribute>
</schema>',
TRUE, TRUE, FALSE, FALSE);
end; -
How to load the international characters by using the SQL*Loader(UNIX)?
Hi Everyone,
I am not able to load international characters through SQL*Loader called from Unix. Whenever I load these characters, they appear in the DB as square boxes. Please help me resolve the issue.
Using version is:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
NLSRTL Version 10.2.0.4.0 - Production
Thanks in advance.
Regards,
Vissu.....
This may help:
SQL> CREATE TABLE test_sqlldr_unicode (id INTEGER, name VARCHAR2(100 BYTE));
Table created.
Now my data file:
1,"ABóCD"
2,"öXYZó"
3,"EFGÚHIJK"
4,"øøøøøøøøøøøøøøø"
My control file:
LOAD DATA
CHARACTERSET WE8ISO8859P1
INFILE 'C:\test_sqlldr_unicode.txt'
REPLACE
INTO TABLE test_sqlldr_unicode
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id INTEGER EXTERNAL , name )
Running the sqlldr:
C:\>sqlldr USERID=hr/hr CONTROL=test_sqlldr_unicode.ctl LOG=test_sqlldr_unicode.
log
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Dec 30 19:38:22 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 5
C:\>
The table:
SQL> SELECT * FROM test_sqlldr_unicode;
ID NAME
1 ABóCD
2 öXYZó
3 EFGÚHIJK
4 øøøøøøøøøøøøøøø
SQL>
-
The output of sql loader displays '?' for the chinese character
I'm running SQL*Loader to import the data. The following is the ctl script:
LOAD DATA
CHARACTERSET ZHS16CGB231280
APPEND
INTO TABLE xxabc_gti_feedback
when (5:6) ='~~' and (8:9)='~~'
FIELDS TERMINATED BY "~~"
TRAILING NULLCOLS
(
absoluted
,exist_lists
,invoice_type
,invoice_category_code
,invoice_number
,line_count
,invoice_date DATE "YYYYMMDD"
,invoice_month
,trx_number
,amount_without_tax
,tax_rate
,tax_amount
,gti_feedback_id "xxban_gti_feedback_s.nextval"
,creation_date sysdate
,last_update_date sysdate
)
EBS version: R12(12.0.4)
The characterset of database is UTF-8
The file characterset is GB2312
The following is the example of data file.
SJJK0201~~已开发票传出
39~~20110413~~20110413
//发票1
0~~0~~0~~325676740~~11085979~~3~~20110413~~04~~~~3336.71~~0.17~~567.24~~珠海XX机电设备有限公司~~440407897878~~珠海市香洲区XXX 0756-3666666~~建行前山支行777777~~XX电子(苏州)有限公司~~32170078678890~~苏州工业园区 6267565~~中国银行园区支行25456672~~1653\n31263\n67126~~XXX~~XXX~~XXX
0~~aaa~~P83782~~个~~2~~6854.70~~0.17~~1165.30~~4010~~1~~1601
1~~bbb~~~~~~~~-4065.00~~0.17~~-691.05~~~~1~~1601
0~~ccc~~P80792~~个~~4~~547.01~~0.17~~92.99~~160~~1~~1601
I created a SQL*Loader concurrent program to load the data. The data can be loaded into the table successfully (the Chinese customer name is correct). Only the aborted lines are listed in the output of the concurrent request, but contrary to my expectation, the Chinese characters in the output are displayed as '?'.
How to solve the issue? Thanks.
Edited by: 852938 on 2011-4-17 10:41 PM
It looks like the following:
SJJK0201~~??????????
39~~20110413~~20110413
//???1
0~~aaa~~P83782~~??~~2~~6854.70~~0.17~~1165.30~~4010~~1~~1601
1~~bbb~~~~~~~~-4065.00~~0.17~~-691.05~~~~1~~1601
0~~ccc~~P80792~~??~~4~~547.01~~0.17~~92.99~~160~~1~~1601
You can find that all Chinese characters became '?'. The loaded line (line 4) is not listed in the output.
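The '?' substitution is the standard replacement character for a conversion into a characterset that cannot represent the source text (e.g. a report rendered in a 7-bit set). An illustrative Python sketch:

```python
# Chinese characters have no representation in ASCII, so a lossy
# conversion replaces each one with '?', while ASCII digits survive:
text = "发票1"
converted = text.encode("ascii", errors="replace").decode("ascii")
assert converted == "??1"
```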
Thanks for your quick answer! :) -
Issue while loading a csv file using sql*loader...
Hi,
I am loading a csv file using sql*loader.
On the number columns that have data populated in them (decimal numbers/integers), the row errors out with:
ORA-01722: invalid number
I tried checking the value coming from the Excel file,
and found chr(13), chr(32), chr(10) characters in the value.
ex: select length('0.21') from dual is giving a value of 7.
When I checked each character, e.g.
select ascii(substr('0.21',5,1)) from dual, it is returning a value of 9... etc.
I tried the following command....
"to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
to remove all the non-number special characters. But still facing the error.
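The arithmetic in the post checks out, and the stripping approach can be sketched in Python (illustrative; the exact trailing bytes are an assumption consistent with the reported LENGTH of 7 and ASCII 9 at position 5):

```python
raw = "0.21\t\r\n"       # 4 visible chars + tab, CR, LF from the Excel export
assert len(raw) == 7     # matches select length('0.21') = 7
assert ord(raw[4]) == 9  # matches ascii(substr(...,5,1)) = 9 (tab)

# Strip chr(9), chr(32), chr(13), chr(10) as the control file does:
for ch in ("\t", chr(32), "\r", "\n"):
    raw = raw.replace(ch, "")
assert float(raw) == 0.21   # now a clean number
```

If the error persists after this, the field likely contains yet another invisible character (e.g. a non-breaking space, chr(160)), which the four replacements above do not cover.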
Please let me know, any solution for this error.
Thanks in advance.
Kirancontrol file:
OPTIONS (ROWS=1, ERRORS=10000)
LOAD DATA
CHARACTERSET WE8ISO8859P1
INFILE '$Xx_TOP/bin/ITEMS.csv'
APPEND INTO TABLE XXINF.ITEMS_STAGE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
ItemNum "trim(replace(replace(:ItemNum,chr(9),''),chr(13),''))",
cross_ref_old_item_num "trim(replace(replace(:cross_ref_old_item_num,chr(9),''),chr(13),''))",
Mas_description "trim(replace(replace(:Mas_description,chr(9),''),chr(13),''))",
Mas_long_description "trim(replace(replace(:Mas_long_description,chr(9),''),chr(13),''))",
Org_description "trim(replace(replace(:Org_description,chr(9),''),chr(13),''))",
Org_long_description "trim(replace(replace(:Org_long_description,chr(9),''),chr(13),''))",
user_item_type "trim(replace(replace(:user_item_type,chr(9),''),chr(13),''))",
organization_code "trim(replace(replace(:organization_code,chr(9),''),chr(13),''))",
primary_uom_code "trim(replace(replace(:primary_uom_code,chr(9),''),chr(13),''))",
inv_default_item_status "trim(replace(replace(:inv_default_item_status,chr(9),''),chr(13),''))",
inventory_item_flag "trim(replace(replace(:inventory_item_flag,chr(9),''),chr(13),''))",
stock_enabled_flag "trim(replace(replace(:stock_enabled_flag,chr(9),''),chr(13),''))",
mtl_transactions_enabled_flag "trim(replace(replace(:mtl_transactions_enabled_flag,chr(9),''),chr(13),''))",
revision_qty_control_code "trim(replace(replace(:revision_qty_control_code,chr(9),''),chr(13),''))",
reservable_type "trim(replace(replace(:reservable_type,chr(9),''),chr(13),''))",
check_shortages_flag "trim(replace(replace(:check_shortages_flag,chr(9),''),chr(13),''))",
shelf_life_code "trim(replace(replace(replace(replace(:shelf_life_code,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
shelf_life_days "trim(replace(replace(replace(replace(:shelf_life_days,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
lot_control_code "trim(replace(replace(:lot_control_code,chr(9),''),chr(13),''))",
auto_lot_alpha_prefix "trim(replace(replace(:auto_lot_alpha_prefix,chr(9),''),chr(13),''))",
start_auto_lot_number "trim(replace(replace(:start_auto_lot_number,chr(9),''),chr(13),''))",
negative_measurement_error "trim(replace(replace(replace(replace(:negative_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
positive_measurement_error "trim(replace(replace(replace(replace(:positive_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
serial_number_control_code "trim(replace(replace(:serial_number_control_code,chr(9),''),chr(13),''))",
auto_serial_alpha_prefix "trim(replace(replace(:auto_serial_alpha_prefix,chr(9),''),chr(13),''))",
start_auto_serial_number "trim(replace(replace(:start_auto_serial_number,chr(9),''),chr(13),''))",
location_control_code "trim(replace(replace(:location_control_code,chr(9),''),chr(13),''))",
restrict_subinventories_code "trim(replace(replace(:restrict_subinventories_code,chr(9),''),chr(13),''))",
restrict_locators_code "trim(replace(replace(:restrict_locators_code,chr(9),''),chr(13),''))",
bom_enabled_flag "trim(replace(replace(:bom_enabled_flag,chr(9),''),chr(13),''))",
costing_enabled_flag "trim(replace(replace(:costing_enabled_flag,chr(9),''),chr(13),''))",
inventory_asset_flag "trim(replace(replace(:inventory_asset_flag,chr(9),''),chr(13),''))",
default_include_in_rollup_flag "trim(replace(replace(:default_include_in_rollup_flag,chr(9),''),chr(13),''))",
cost_of_goods_sold_account "trim(replace(replace(:cost_of_goods_sold_account,chr(9),''),chr(13),''))",
std_lot_size "trim(replace(replace(replace(replace(:std_lot_size,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
sales_account "trim(replace(replace(:sales_account,chr(9),''),chr(13),''))",
purchasing_item_flag "trim(replace(replace(:purchasing_item_flag,chr(9),''),chr(13),''))",
purchasing_enabled_flag "trim(replace(replace(:purchasing_enabled_flag,chr(9),''),chr(13),''))",
must_use_approved_vendor_flag "trim(replace(replace(:must_use_approved_vendor_flag,chr(9),''),chr(13),''))",
allow_item_desc_update_flag "trim(replace(replace(:allow_item_desc_update_flag,chr(9),''),chr(13),''))",
rfq_required_flag "trim(replace(replace(:rfq_required_flag,chr(9),''),chr(13),''))",
buyer_name "trim(replace(replace(:buyer_name,chr(9),''),chr(13),''))",
list_price_per_unit "trim(replace(replace(replace(replace(:list_price_per_unit,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
taxable_flag "trim(replace(replace(:taxable_flag,chr(9),''),chr(13),''))",
purchasing_tax_code "trim(replace(replace(:purchasing_tax_code,chr(9),''),chr(13),''))",
receipt_required_flag "trim(replace(replace(:receipt_required_flag,chr(9),''),chr(13),''))",
inspection_required_flag "trim(replace(replace(:inspection_required_flag,chr(9),''),chr(13),''))",
price_tolerance_percent "trim(replace(replace(replace(replace(:price_tolerance_percent,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
expense_account "trim(replace(replace(:expense_account,chr(9),''),chr(13),''))",
allow_substitute_receipts_flag "trim(replace(replace(:allow_substitute_receipts_flag,chr(9),''),chr(13),''))",
allow_unordered_receipts_flag "trim(replace(replace(:allow_unordered_receipts_flag,chr(9),''),chr(13),''))",
receiving_routing_code "trim(replace(replace(:receiving_routing_code,chr(9),''),chr(13),''))",
inventory_planning_code "trim(replace(replace(:inventory_planning_code,chr(9),''),chr(13),''))",
min_minmax_quantity "trim(replace(replace(replace(replace(:min_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
max_minmax_quantity "trim(replace(replace(replace(replace(:max_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
planning_make_buy_code "trim(replace(replace(:planning_make_buy_code,chr(9),''),chr(13),''))",
source_type "trim(replace(replace(:source_type,chr(9),''),chr(13),''))",
mrp_safety_stock_code "trim(replace(replace(:mrp_safety_stock_code,chr(9),''),chr(13),''))",
material_cost "trim(replace(replace(:material_cost,chr(9),''),chr(13),''))",
mrp_planning_code "trim(replace(replace(:mrp_planning_code,chr(9),''),chr(13),''))",
customer_order_enabled_flag "trim(replace(replace(:customer_order_enabled_flag,chr(9),''),chr(13),''))",
customer_order_flag "trim(replace(replace(:customer_order_flag,chr(9),''),chr(13),''))",
shippable_item_flag "trim(replace(replace(:shippable_item_flag,chr(9),''),chr(13),''))",
internal_order_flag "trim(replace(replace(:internal_order_flag,chr(9),''),chr(13),''))",
internal_order_enabled_flag "trim(replace(replace(:internal_order_enabled_flag,chr(9),''),chr(13),''))",
invoice_enabled_flag "trim(replace(replace(:invoice_enabled_flag,chr(9),''),chr(13),''))",
invoiceable_item_flag "trim(replace(replace(:invoiceable_item_flag,chr(9),''),chr(13),''))",
cross_ref_ean_code "trim(replace(replace(:cross_ref_ean_code,chr(9),''),chr(13),''))",
category_set_intrastat "trim(replace(replace(:category_set_intrastat,chr(9),''),chr(13),''))",
CustomCode "trim(replace(replace(:CustomCode,chr(9),''),chr(13),''))",
net_weight "trim(replace(replace(replace(replace(:net_weight,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
production_speed "trim(replace(replace(:production_speed,chr(9),''),chr(13),''))",
LABEL "trim(replace(replace(:LABEL,chr(9),''),chr(13),''))",
comment1_org_level "trim(replace(replace(:comment1_org_level,chr(9),''),chr(13),''))",
comment2_org_level "trim(replace(replace(:comment2_org_level,chr(9),''),chr(13),''))",
std_cost_price_scala "to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
supply_type "trim(replace(replace(:supply_type,chr(9),''),chr(13),''))",
subinventory_code "trim(replace(replace(:subinventory_code,chr(9),''),chr(13),''))",
preprocessing_lead_time "trim(replace(replace(replace(replace(:preprocessing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
processing_lead_time "trim(replace(replace(replace(replace(:processing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
wip_supply_locator "trim(replace(replace(:wip_supply_locator,chr(9),''),chr(13),''))"
Sample data from the csv file:
"9901-0001-35","390000","JMKL16 Pipe bend 16 mm","","JMKL16 Putkikaari 16 mm","","AI","FJE","Ea","","","","","","","","","","","","","","","","","","","","","","","","","21-21100-22200-00000-00000-00-00000-00000","0","21-11100-22110-00000-00000-00-00000-00000","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.1","Pull","AFTER PROD","","","Locator for Production"
The load errors out on two columns in particular:
1) std_cost_price_scala
2) list_price_per_unit
Both are NUMBER columns. When data is provided for them, the load errors out; when they hold null values, the records go through fine.
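One common cause: the REPLACE chain leaves a character TO_NUMBER cannot parse (a stray CR/LF at the end of the record, or a decimal separator that disagrees with the session's NLS_TERRITORY / NLS_NUMERIC_CHARACTERS). Note also that list_price_per_unit, unlike std_cost_price_scala, is not wrapped in to_number at all. A sketch of a more defensive field clause (the format mask and the TRANSLATE-based strip are illustrative, not the poster's actual definition):

```sql
-- Sketch only: strip tab/LF/CR/space in one TRANSLATE call, then pin
-- the decimal and group separators explicitly so the session's
-- NLS_TERRITORY cannot break the conversion. Widen the format mask
-- to the largest value expected in the file.
list_price_per_unit "to_number(
  translate(:list_price_per_unit,
            'x' || chr(9) || chr(10) || chr(13) || chr(32), 'x'),
  '999999990D9999',
  'nls_numeric_characters='''',.''''')"
```

With this, a value such as "0.1" converts regardless of what NLS_TERRITORY the loading session inherited.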
SQL Loader unicode (umlaut) problem
Hi,
I want to load some data with SQL*Loader. The data contains German umlauts like ä, ö, ü.
The loading process works, but the umlauts end up as something like 'ü' in the DB. How can I load them correctly?
My environment:
- DB 10g Rel.2
- Windows XP
- Registry key in Ora_Home: NLS_LANG=GERMAN_GERMANY.WE8MSWIN1252
I tried setting the character set in the CTL file:
characterset 'WE8MSWIN1252'
but that didn't help either.
Does anyone have an idea? I searched the forum but didn't find a solution.
Thanks for your help,
Roger

Maybe a codepage issue? See this example:
C:\tmp>type umlaut.ctl
load data
infile umlaut.dat
replace
into table umlaut_tab
(a)
C:\tmp>sqlldr test/test control=umlaut.ctl
SQL*Loader: Release 10.2.0.1.0 - Production on Tue Jun 10 13:19:50 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 2
Commit point reached - logical record count 3
C:\tmp>sqlplus test/test
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jun 10 13:19:56 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> select * from umlaut_tab;
A
õ
÷
³
SQL> exit
Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
C:\tmp>chcp 1252
Active code page: 1252
C:\tmp>sqlplus test/test
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jun 10 13:20:19 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> select * from umlaut_tab;
A
ä
ö
ü
SQL>
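What the chcp run above shows can be reproduced outside Oracle. A minimal Python sketch (assuming the console was on the common default OEM code page cp850 before `chcp 1252`): the Windows-1252 bytes for ä, ö, ü are stored correctly, but drawn with different glyphs under cp850, which is exactly the garbage seen in the first sqlplus session.

```python
# The single Windows-1252 byte for each umlaut, rendered through
# cp850 (the OEM code page many Western consoles default to),
# produces the characters shown before `chcp 1252` was run.
for ch in "äöü":
    raw = ch.encode("cp1252")          # e.g. ä -> b'\xe4'
    print(ch, "->", raw.decode("cp850"))
# ä -> õ
# ö -> ÷
# ü -> ³
```

So the data in the table was fine all along; only the console's rendering of it was wrong, which is why switching the code page (not reloading the data) fixed the display.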