Error with SQL Loader 10g
Hi,
I use SQL Loader to load files into my database running on Suse Linux. The loader is invoked by a Java application.
In Oracle 9i my string would be something like
sqlldr DATA='opt/test information/sample.dat' PARFILE=/opt/test.par
It worked.
When I ran the same string in 10g I got an error like
" multiple values not allowed for parameter 'data' "
I tried escaping the ' with a \, like
sqlldr DATA=\'opt/test information/sample.dat\' PARFILE=/opt/test.par
Now it gives me a different error
SQL*Loader-500: Unable to open file (opt/test information/302.DAT)
SQL*Loader-553: file not found
I cannot remove the space from the directory. It has to be "test information".
Can anybody help me out?
Thanks,
Karthik
Or better yet: you are on Linux, so you have the luxury of creating a symbolic link to that directory. Make the symlink without spaces and then use that in your sqlldr command line.
You can keep the directory with spaces and have sqlldr run through the symlink.
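The symlink approach can be sketched as follows (the /tmp paths here are hypothetical stand-ins for the real '/opt/test information' directory; adjust to your environment):

```shell
# Create a directory whose name contains a space, plus a sample data file
# (stand-ins for the real directory with spaces).
mkdir -p '/tmp/test information'
echo 'dummy' > '/tmp/test information/sample.dat'

# Make a space-free symbolic link that points at the directory with spaces.
ln -sfn '/tmp/test information' /tmp/testinfo

# The file is reachable through the link, so sqlldr can be given a
# space-free path, e.g.:
#   sqlldr DATA=/tmp/testinfo/sample.dat PARFILE=/opt/test.par
cat /tmp/testinfo/sample.dat    # prints: dummy
```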
Similar Messages
-
SQL*Loader-704 and ORA-12154 error messages when trying to load data with SQL*Loader
I have a database with two tables that is used by Apex 4.2. One table has 800,000 records; the other has 7 million records.
The client recently upgraded from Apex 3.2 to Apex 4.2. We exported/imported the data to the new location with no problems.
The source of the data is an old mainframe system; I needed to make changes to the source data and then load the tables.
The first time I loaded the data I did it from the command line with SQL*Loader.
Now when I try to load the data I get this message:
SQL*Loader-704: Internal error: ulconnect OCISERVERATTACH
ORA-12154: tns:could not resolve the connect identifier specified
I've searched for postings on these error messages and they all seem to say that SQL*Loader can't find my TNSNAMES file.
I am able to connect and load data with SQL Developer, so SQL Developer is able to find the TNSNAMES file.
However, SQL Developer will not let me load a file this big.
I have also tried to load the file within Apex (SQL Workshop / Utilities) but again, the file is too big.
So it seems like SQL*Loader is the only option.
I did find one post online that said to set an environment variable with the path to the TNSNAMES file, but that didn't work.
Not sure what else to try or where to look
Thanks
Hi,
You must have more than one tnsnames file, or multiple installations of Oracle. What I suggest you do (as I'm sure will be mentioned in ed's link that you were already pointed at) is the following (I assume you are on Windows?):
open a command prompt
set TNS_ADMIN=PATH_TO_DIRECTORY_THAT_CONTAINS_CORRECT_TNSNAMES_FILE (i.e. something like set TNS_ADMIN=c:\oracle\network\admin)
This tells Oracle to use the config files it finds there and no others.
Then try sqlldr user/pass@db (in the same command prompt window).
see if that connects and let us know.
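On Linux the equivalent of the Windows set command is export; a minimal sketch (the directory path is an assumption, point it at whichever directory actually holds the correct tnsnames.ora):

```shell
# Tell the Oracle client libraries (including sqlldr) exactly which
# directory holds the tnsnames.ora to use.
export TNS_ADMIN=/u01/app/oracle/network/admin

# Then, in the same shell session, try the connection, e.g.:
#   sqlldr user/pass@db
echo "$TNS_ADMIN"
```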
Cheers,
Harry
http://dbaharrison.blogspot.com -
Need help with SQL*Loader not working
Hi all,
I am trying to run SQL*Loader on Oracle 10g UNIX platform (Red Hat Linux) with below command:
sqlldr userid='ldm/password' control=issue.ctl bad=issue.bad discard=issue.txt direct=true log=issue.log
And get below errors:
SQL*Loader-128: unable to begin a session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Can anyone help me out with this problem that I am having with SQL*Loader? Thanks!
Ben Prusinski
Hi Frank,
More progress, I exported the ORACLE_SID and tried again but now have new errors! We are trying to load an Excel CSV file into a new table on our Oracle 10g database. I created the new table in Oracle and loaded with SQL*Loader with below problems.
$ export ORACLE_SID=PROD
$ sqlldr 'ldm/password@PROD' control=prod.ctl log=issue.log bad=bad.log discard=discard.log
SQL*Loader: Release 10.2.0.1.0 - Production on Tue May 23 11:04:28 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: prod.ctl
Data File: prod.csv
Bad File: bad.log
Discard File: discard.log
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table TESTLD, loaded from every logical record.
Insert option in effect for this table: REPLACE
Column Name Position Len Term Encl Datatype
ISSUE_KEY FIRST * , CHARACTER
TIME_DIM_KEY NEXT * , CHARACTER
PRODUCT_CATEGORY_KEY NEXT * , CHARACTER
PRODUCT_KEY NEXT * , CHARACTER
SALES_CHANNEL_DIM_KEY NEXT * , CHARACTER
TIME_OF_DAY_DIM_KEY NEXT * , CHARACTER
ACCOUNT_DIM_KEY NEXT * , CHARACTER
ESN_KEY NEXT * , CHARACTER
DISCOUNT_DIM_KEY NEXT * , CHARACTER
INVOICE_NUMBER NEXT * , CHARACTER
ISSUE_QTY NEXT * , CHARACTER
GROSS_PRICE NEXT * , CHARACTER
DISCOUNT_AMT NEXT * , CHARACTER
NET_PRICE NEXT * , CHARACTER
COST NEXT * , CHARACTER
SALES_GEOGRAPHY_DIM_KEY NEXT * , CHARACTER
value used for ROWS parameter changed from 64 to 62
Record 1: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 2: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 3: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 4: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 5: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 6: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 7: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 8: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 9: Rejected - Error on table ISSUE_FACT_TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 10: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 11: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 12: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 13: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 14: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 15: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 16: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 17: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 18: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 19: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 20: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 21: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 22: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 23: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 24: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 39: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run.
Table TESTLD:
0 Rows successfully loaded.
51 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 255936 bytes(62 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 51
Total logical records rejected: 51
Total logical records discarded: 0
Run began on Tue May 23 11:04:28 2006
Run ended on Tue May 23 11:04:28 2006
Elapsed time was: 00:00:00.14
CPU time was: 00:00:00.01
[oracle@casanbdb11 sql_loader]$
Here is the control file:
LOAD DATA
INFILE issue_fact.csv
REPLACE
INTO TABLE TESTLD
FIELDS TERMINATED BY ','
(
ISSUE_KEY,
TIME_DIM_KEY,
PRODUCT_CATEGORY_KEY,
PRODUCT_KEY,
SALES_CHANNEL_DIM_KEY,
TIME_OF_DAY_DIM_KEY,
ACCOUNT_DIM_KEY,
ESN_KEY,
DISCOUNT_DIM_KEY,
INVOICE_NUMBER,
ISSUE_QTY,
GROSS_PRICE,
DISCOUNT_AMT,
NET_PRICE,
COST,
SALES_GEOGRAPHY_DIM_KEY
)
-
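The repeated "Column not found before end of logical record (use TRAILING NULLCOLS)" rejections suggest the data rows stop short of the last columns. A hedged sketch of the same control file with TRAILING NULLCOLS added, as the log message itself recommends (untested against your data):

```text
LOAD DATA
INFILE issue_fact.csv
REPLACE
INTO TABLE TESTLD
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
ISSUE_KEY,
TIME_DIM_KEY,
PRODUCT_CATEGORY_KEY,
PRODUCT_KEY,
SALES_CHANNEL_DIM_KEY,
TIME_OF_DAY_DIM_KEY,
ACCOUNT_DIM_KEY,
ESN_KEY,
DISCOUNT_DIM_KEY,
INVOICE_NUMBER,
ISSUE_QTY,
GROSS_PRICE,
DISCOUNT_AMT,
NET_PRICE,
COST,
SALES_GEOGRAPHY_DIM_KEY
)
```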
URGENT: Problems Loading files with SQL Loader into a BLOB column
Hi friends,
I read a lot about how to load files into blob columns, but I found errors that I can't solve.
I've read several notes in these forums, ine of them:
sql loader: loading external file into blob
and tried the solutions but without good results.
Here are some of my tests:
With this .ctl:
LOAD DATA
INFILE *
INTO TABLE mytable
REPLACE
FIELDS TERMINATED BY ','
(
number1 INTEGER EXTERNAL,
cad1 CHAR(250),
image1 LOBFILE(cad1) TERMINATED BY EOF
)
BEGINDATA
1153,/opt/oracle/appl/myapp/1.0.0/img/1153.JPG,
the error when I execute sqlldr is:
SQL*Loader-350: Syntax error at line 9.
Expecting "," or ")", found "LOBFILE".
image1 LOBFILE(cad1) TERMINATED BY EOF
^
What problem exists with LOBFILE ??
(mytable of course has number1 as a NUMBER, cad1 as VARCHAR2(250) and image1 as BLOB)
I tried too with :
LOAD DATA
INFILE sample.dat
INTO TABLE mytable
FIELDS TERMINATED BY ','
(cad1 CHAR(3),
cad2 FILLER CHAR(30),
image1 BFILE(CONSTANT "/opt/oracle/appl/myapp/1.0.0/img/", cad2))
sample.dat is:
1153,1153.JPEG,
and error is:
SQL*Loader-350: Syntax error at line 6.
Expecting "," or ")", found "FILLER".
cad2 FILLER CHAR(30),
^
I tried too with a procedure, but without results...
Any idea about this error messages?
Thanks a lot.
Jose L.
> So you think that if one person puts "urgent" in the subject it screws up the problems of other people?
Absolutely. As you are telling them: "My posting is more important than yours and deserves faster attention and resolution than yours!"
So what could a typical response be? Someone telling you that his posting is more important by using the phrase "VERY URGENT!". And the next poster may decide that, no, his problem is even more important, and use "EXTREMELY URGENT!!" as the subject. And the next one then raises the stakes by claiming his problem is "CODE RED! CRITICAL. DEFCON 4. URGENT!!!!".
Stupid, isn't it? As stupid as your insistence that there is nothing wrong with your pitiful clamoring for attention to your problem by saying it is urgent.
What do the RFCs say about a meaningful title/subject in a public forum? I trust that you know what an RFC is? After all, you claim to have used public forums on the Internet for some years now.
The RFC on "public forums" is called The Usenet Article Format. This is what it has to say about the SUBJECT of a public posting:
=
The "Subject" line (formerly "Title") tells what the message is about. It should be suggestive enough of the contents of the message to enable a reader to make a decision whether to read the message based on the subject alone. If the message is submitted in response to another message (e.g., is a follow-up) the default subject should begin with the four characters "Re: ", and the "References" line is required. For follow-ups, the use of the "Summary" line is encouraged.
=
(RFC 1036, the Usenet article format: http://www.cs.tut.fi/~jkorpela/rfc/1036.html)
Or how about The seven don'ts of Usenet (http://www.cs.tut.fi/~jkorpela/usenet/dont.html)?
Point 7 of the Don'ts:
Don't try to catch attention by typing something foolish like "PLEASE HELP ME!!!! URGENT!!! I NEED YOUR HELP!!!" into the Subject line. Instead, type something informative (using normal mixed case!) that describes the subject matter.
Please tell me that you are not too thick to understand the basic principles of netiquette, or to argue with the RFCs that govern the very fabric of the Internet.
As for when I have an "urgent" problem in my "real" work? I take it up with Oracle Support on Metalink by filing an iTAR/SR, as any non-idiot should do with a real-life Oracle crisis.
I do not barge into a public forum like you do, jump up and down, and demand quick attention by claiming that my problem is more important, more urgent, and more deserving of attention than other people's problems in the very same forum. -
Problem with loading file with SQL loader
I am getting a problem loading a file with SQL*Loader. The load is terminating after around 2,000 rows, whereas there are around 2,700,000 rows in the file.
The file is like
919879086475,11/17/2004,11/20/2004
919879698625,11/17/2004,11/17/2004
919879698628,11/17/2004,11/17/2004
the control file, i am using is like:-
load data
infile 'c:\ran\temp\pps_fc.txt'
into table bm_05oct06
fields terminated by ","
(mobile_no, fcal, frdate )
I hope, my question is clear. Please help, in solving the doubt.
regards.So which thread is telling the truth?
Doubt with SQL loader file wih spaces
Are the fields delimited with spaces or with commas?
Perhaps they are a mixture of delimiters and that is where the error is coming in? -
Loading two tables at same time with SQL Loader
I have two tables I would like to populate from a file C:\my_data_file.txt.
Many of the columns I am loading into both tables, but there are a handful of columns I do not want. The first column I do not want for either table. My problem is how to direct SQL*Loader to go back to the first column and skip over it. I tried using POSITION(1) and FILLER for the first column while loading the second table, but I got the following error message:
SQL*Loader-350: Syntax error at line 65
Expecting "," or ")" found keyword Filler
col_a Poistion(1) FILLER INTEGER EXTERNAL
My control file looks like the following:
LOAD DATA
INFILE 'C:\my_data_file.txt'
BADFILE 'C:\my_data_file.txt'
DISCARDFILE 'C:\my_data_file.txt'
TRUNCATE INTO TABLE table_one
WHEN (specific conditions)
FIELDS TERMINATED BY ' '
TRAILING NULLCOLS
(
col_a FILLER INTEGER EXTERNAL,
col_b INTEGER EXTERNAL,
col_g FILLER CHAR,
col_h CHAR,
col_date DATE "yyyy-mm-dd"
)
INTO TABLE table_two
WHEN (specific conditions)
FIELDS TERMINATED BY ' '
TRAILING NULLCOLS
(
col_a POSITION(1) FILLER INTEGER EXTERNAL,
col_b INTEGER EXTERNAL,
col_g FILLER CHAR,
col_h CHAR,
col_date DATE "yyyy-mm-dd"
)
Try adapting this for your scenario.
tables for the test
create table test1 ( fld1 varchar2(20), fld2 integer, fld3 varchar2(20) );
create table test2 ( fld1 varchar2(20), fld2 integer, fld3 varchar2(20) );
control file
LOAD DATA
INFILE "test.txt"
INTO TABLE user.test1 TRUNCATE
WHEN RECID = '1'
FIELDS TERMINATED BY ' '
(
recid filler integer external,
fld1 char,
fld2 integer external,
fld3 char
)
INTO TABLE user.test2 TRUNCATE
WHEN RECID <> '1'
FIELDS TERMINATED BY ' '
(
recid filler position(1) integer external,
fld1 char,
fld2 integer external,
fld3 char
)
data for loading [test.txt]
1 AAAAA 11111 IIIII
2 BBBBB 22222 JJJJJ
1 CCCCC 33333 KKKKK
2 DDDDD 44444 LLLLL
1 EEEEE 55555 MMMMM
2 FFFFF 66666 NNNNN
1 GGGGG 77777 OOOOO
2 HHHHH 88888 PPPPP
HTH
RK -
How can I load data into table with SQL*LOADER
How can I load data into a table with SQL*Loader
when the column data length is more than 255 bytes?
When a column exceeds 255 bytes, the data cannot be inserted into the table by SQL*Loader.
CREATE TABLE A (
A VARCHAR2 ( 10 ) ,
B VARCHAR2 ( 10 ) ,
C VARCHAR2 ( 10 ) ,
E VARCHAR2 ( 2000 ) );
control file:
load data
append into table A
fields terminated by X'09'
(A , B , C , E )
SQL*LOADER command:
sqlldr test/test control=A_ctl.txt data=A.xls log=b.log
datafile:
column E is more than 255bytes
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
1 1 1 1234567------(more than 255bytes)
Check this out.
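SQL*Loader defaults character fields to a maximum of 255 bytes; declaring an explicit length on the long column raises that limit. A hedged sketch of the amended control file (untested):

```text
load data
append into table A
fields terminated by X'09'
(A, B, C,
 E CHAR(2000))
```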
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/ch06.htm#1006961 -
Loading Objects with SQL*Loader
When loading a column object with SQL*Loader, is it possible to specify a column specific 'TERMINATED BY' clause in the control file?
I've successfully defined column-level termination characters for regular columns and nested tables, but can't seem to find any way of achieving the same result with column objects. -
Loading "fixed length" text files in UTF8 with SQL*Loader
Hi!
We have a lot of files that we load with SQL*Loader into our database. All datafiles have fixed-length columns, so we use POSITION(pos1, pos2) in the ctl file. Until now the files were in WE8ISO8859P1 and everything was fine.
Now the source system generating the files is changing to Unicode, and the files are in UTF8!
The SQL*Loader documentation says "The start and end arguments to the POSITION parameter are interpreted in bytes, even if character-length semantics are in use in a datafile....."
As I see this now, there is no way to say "column A starts at "CHARACTER Position pos1" and ends at "Character Position pos2".
I tested with
load data
CHARACTERSET AL32UTF8
LENGTH SEMANTICS CHARACTER
replace ...
in the .ctl file, but when the first character with more than one byte encoding (for example ü ) is in the file, all positions of that record are mixed up.
Is there a way to load these files in UTF8 without changing the file definition to a column separator?
Thanks for any hints - charly
I have not tested this, but you should be able to achieve what you want by using LENGTH SEMANTICS CHARACTER and by specifying field lengths (e.g. CHAR(5)) instead of only their positions. You could still use the POSITION(*+n) syntax to skip any separator columns that contain only spaces or tabs.
If the above does not work, an alternative would be to convert all UTF8 files to UTF16 before loading so that they become fixed-width.
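A hedged sketch of what that suggestion could look like in a control file (table and column names are hypothetical, and this is untested):

```text
LOAD DATA
CHARACTERSET AL32UTF8
LENGTH SEMANTICS CHARACTER
REPLACE
INTO TABLE mytable
(
  col_a POSITION(1) CHAR(5),    -- length given in characters, not an end position
  col_b POSITION(*+2) CHAR(10)  -- skip a separator, then read 10 characters
)
```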
-- Sergiusz -
Ignoring constraints with SQL*Loader & triggers
When I load a table with SQL*Loader that has foreign key constraints and a before insert trigger, my foreign key constraints are ignored. The end result is a detail table that contains invalid rows (where some column values do not exist in the master table). If I drop the trigger and just load with SQL*Loader, the foreign key constraints are recognized. Any ideas why this is happening? The trigger simply populates the last updated date and user columns.
I found this AskTom thread very helpful. It should help you a lot.
http://asktom.oracle.com/pls/ask/f?p=4950:9:1988009758486146475::NO:9:F4950_P9_DISPLAYID:8806498660292 -
Problem with SQL*Loader and different date formats in the same file
DB: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
System: AIX 5.3.0.0
Hello,
I'm using SQL*Loader to import semi-colon separated values into a table. The files are delivered to us by a data provider who concatenates data from different sources and this results in us having different date formats within the same file. For example:
...;2010-12-31;22/11/1932;...
I load this data using the following lines in the control file:
EXECUTIONDATE1 TIMESTAMP NULLIF EXECUTIONDATE1=BLANKS "TO_DATE(:EXECUTIONDATE1, 'YYYY-MM-DD')",
DELDOB TIMESTAMP NULLIF DELDOB=BLANKS "TO_DATE(:DELDOB, 'DD/MM/YYYY')",
The relevant NLS parameters:
NLS_LANGUAGE=FRENCH
NLS_DATE_FORMAT=DD/MM/RR
NLS_DATE_LANGUAGE=FRENCH
If I load this file as is, the values loaded into the table are 31 Dec 2010 and 22 Nov *2032*, even though the years are on 4 digits. If I change NLS_DATE_FORMAT to DD/MM/YYYY then the second date value will be loaded correctly, but the first value will be loaded as 31 Dec *2020*!!
How can I get both date values to load correctly?
Thanks!
Sylvain
This is very strange. After running a few tests I realized that if the year is 19XX then it gets loaded as 2019, and if it is 20XX then it becomes 2020. I'm guessing it may have something to do with certain env variables that aren't set up properly, because I'm fairly sure my SQL*Loader control file is correct... I'll run more tests :-(
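Not from the thread, but one commonly used way to sidestep NLS_DATE_FORMAT entirely is to give each field its own date mask in the control file, instead of routing through TO_DATE on a TIMESTAMP field; a hedged sketch:

```text
EXECUTIONDATE1 DATE "YYYY-MM-DD" NULLIF EXECUTIONDATE1=BLANKS,
DELDOB         DATE "DD/MM/YYYY" NULLIF DELDOB=BLANKS,
```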
-
How can I load this XML with sql loader ?
Hi,
Im try to load a XML file into a XML table:
Create table TEST of XMLTYPE;
XML file is something like this:
<!DOCTYPE list SYSTEM "myfile.dtd">
<list>
<item id = 1> lot of tags an info here </item>
<item id = 9000> lot of tags an info here </item>
</list>
I don't know how exactly to create an appropriate control file for my XML. All the examples I saw in the documentation are similar to this one:
this is an example control file
LOAD DATA
INFILE *
INTO TABLE test
APPEND
XMLTYPE (xmldata)
FIELDS
I don't know how to complete this control file. Can anyone help me?
thank you!
Message was edited by:
pollopolea
Well, I found this working code:
LOAD DATA
INFILE *
INTO TABLE test APPEND
xmltype(xmldata)
FIELDS
(
ext_fname filler char(30),
xmldata LOBFILE (ext_fname) TERMINATED BY EOF
)
BEGINDATA
mifile.xml
mifile2.xml
mifile3.xml
The main difference I found is that when I use:
Insert into test values (XMLType(bfilename('XMLDIR', 'swd_uC000025127.xml'),nls_charset_id('AL32UTF8')));
the tag <!DOCTYPE list SYSTEM "myfile.dtd"> is expanded
(I explain at DTD insertion and errors with long DTD entities
Now using SQL*Loader the tag is not modified.
I don't know if there is any difference between the two cases. Did I lose performance?
Message was edited by:
pollopolea -
Loading XML file using sql*loader (10g)
Hi Everyone....
I have a major problem. I have a 10g database and I need to use sql loader to load a large XML file into the database. This file contains records for many, many customers. This will be done several times and the number of records will vary. I want to use sql loader and load to a staging table, BUT SPLIT THE RECORDS OUT and add a sequence. I am having 2 problems.
In the 10g documentation, when you want to split the records you use the BEGINDATA clause and indicate something (like a 0) for each instance of a record. Well my first file has 3,722 records in it. How do I know the number of records in each XML file?
The second problem is that because this is XML, I thought that I could use ENCLOSED BY. But the start tag has an attribute/counter in it, so sql*loader doesn't recognize the starting tag. (i.e.: start tag is: </CustomerRec number=1>
end tag is: </CustomerRec> )
So, I used TERMINATED BY '</CustomerRec>'. This will split out the records, but it will NOT include the ending tag of </CustomerRec> and when I use extract, I receive an XML parsing error.
I have tried to get the ending tag using CONTINUEIF and SKIP. But those options are for "records" and not lines.
Does anyone have any ideas for the 2 problems above. I am at my wits end and I need to finish this ASAP. Any help would be appreciated.
Thank you!
Sorry... here is an example of what my control file looks like. At the end, I have 92 "0"s, but remember, I have 3722 records in this first file.
LOAD DATA (SKIP 1)
INFILE *
INTO TABLE RETURN_DATA_STG
TRUNCATE
XMLType(xmldata)
FIELDS
(fill FILLER CHAR(1),
loadseq SEQUENCE(MAX,1),
xmldata LOBFILE (CONSTANT F001AB.XML)
TERMINATED BY '</ExtractObject>'
------ ":xmldata||'</ExtractObject>'"
)
BEGINDATA
0
0
0
0
0
0
-
Problem with SQL*Loader loading long description with carriage return
I'm trying to load new items into mtl_system_items_interface via a concurrent program running SQL*Loader. The load is erroring due to not finding a delimiter; I'm guessing it's having problems with the long_description.
Here's my ctl file:
LOAD
INFILE 'create_prober_items.csv'
INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
SEGMENT1 "TRIM(:SEGMENT1)",
SEGMENT2 "TRIM(:SEGMENT2)",
DESCRIPTION "TRIM(:DESCRIPTION)",
LONG_DESCRIPTION "TRIM(:LONG_DESCRIPTION)")
Here's a sample record from the csv file:
1,1,CREATE,0,546,03,B00-100289,PROBEHEAD PH100 COMPLETE/ VACUUM/COAX ,"- Linear
X axis, Y,Z pivots
- Movement range: X: 8mm, Y: 6mm, Z: 25mm
- Probe tip pressure adjustable contact
- Vacuum adapter
- With shielded arm
- Incl. separate miniature female HF plug
The long_description has to appear as:
- something
- something
It can't appear as:
-something-something
Here's the errors:
Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
LONG_DESCRIPTION.
Logical record ended - second enclosure character not present
Record 2: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
ORGANIZATION_ID.
Column not found before end of logical record (use TRAILING NULLCOLS)
I've asked for help on the Metalink forum and was advised to add trailing nullcols to the ctl so the ctl line now looks like:
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
I don't think this was right because now I'm getting:
Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column LONG_DESCRIPTION.
Logical record ended - second enclosure character not present
Thanks for any help that may be offered.
-Tracy
LOAD
INFILE 'create_prober_items.csv'
CONTINUEIF LAST <> '"'
INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
(PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
SEGMENT1 "TRIM(:SEGMENT1)",
SEGMENT2 "TRIM(:SEGMENT2)",
DESCRIPTION "TRIM(:DESCRIPTION)",
LONG_DESCRIPTION "REPLACE (TRIM(:LONG_DESCRIPTION), '-', CHR(10) || '-')")
-
Hi,
When I am trying to load data using SQL*Loader I come across the following errors, and I was unable to load data into my staging tables.
Please explain what they are and how to rectify them.
SQL*Loader-524: partial record found at end of datafile (D:\oracle\prodappl\fnd\11.5.0\bin\sri3.ctl)
Program exited with status 3
SQL*Loader-500: Unable to open file (D:\oracle\prodappl\fnd\11.5.0\bin\sr2.ctl)
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Please tell me where I can get complete information on errors that occur during loading of data.
Please tell me whether there is any list of common errors encountered in SQL*Loader.
thanks in advance
siddam
Message was edited by:
siddam
Both the datafile and the control file should end with a line-feed character.
Check this site :
http://radio.weblogs.com/0108008/stories/2003/04/11/experienceWithSqlLoader.html
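A quick way to check for (and, if missing, append) the trailing line feed, using a throwaway file as a stand-in for your datafile or control file:

```shell
# Stand-in file written WITHOUT a trailing newline.
f=/tmp/sample.ctl
printf 'load data' > "$f"

# If the last byte is not a newline, $(tail -c1 ...) is non-empty;
# append a newline in that case.
[ -n "$(tail -c1 "$f")" ] && echo >> "$f"

# Now the file ends with a line feed, so $(tail -c1 ...) is empty.
[ -z "$(tail -c1 "$f")" ] && echo "file ends with a newline"
```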