Urgent: SQL*Loader-562: record too long
With the 7.3 sqlldr there is no problem, but with the 8.1.7 sqlldr I get this error. Any help please...
Mourad from Paris
Hi Sandeep,
Oracle guru Dave Moore has many sample SQL*Loader control files published:
http://www.google.com/search?&q=oracle+moore+sql%2aloader
Here is a simple sample control file to load many tables:
http://www.dba-oracle.com/t_sql_loader_multiple_tables_sqlldr.htm
Hope this helps. . .
Donald K. Burleson
Oracle Press author
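For reference, a minimal control file routing one data file into two tables with WHEN clauses might look like this (a sketch only; the table names, the record-type byte in column 1 and the field layout are made up for illustration):

```
LOAD DATA
INFILE 'multi.dat'
APPEND
INTO TABLE dept_a
  WHEN (1:1) = 'A'
  FIELDS TERMINATED BY ','
  (rec_type FILLER POSITION(1), deptno, dname)
INTO TABLE dept_b
  WHEN (1:1) = 'B'
  FIELDS TERMINATED BY ','
  (rec_type FILLER POSITION(1), deptno, dname)
```

Note POSITION(1) on the first field of each INTO TABLE: without it, SQL*Loader continues scanning each record from where the previous table's field list stopped.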
Similar Messages
-
Data record too long to be imported (0 or 5000)
Dear Expert
In LSMW, while displaying the read records, this error comes up:
"Data record too long to be imported (0 or >5000)"
how to rectify this?
but the system still allows further steps and uploads the master record
Regards
Karan
Hi,
Hope the same question is already answered in the thread: Error in uploading master through LSMW
Please check and revert back if its not solved.
Regards,
AKPT -
(urgent) SQL*Loader Large file support in O734
hi there,
I get the following SQL*Loader error when trying to load data files, each 10 GB - 20 GB in size, into an Oracle 7.3.4 DB on SunOS 5.6:
>>
SQL*Loader-500: Unable to open file (..... /tstt.dat)
SVR4 Error: 79: Value too large for defined data type
<<
i know there's bug fix for large file support in Oracle 8 -
>>
Oracle supports files over 2GB for the oracle executable.
Contact Worldwide Support for information about fixes for bug 508304,
which will add large file support for imp, exp, and sqlldr
<<
However, I really want to know if there is any fix for Oracle 7.3.4?
Thanks.
Example
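While waiting for a patch, one workaround (a sketch, assuming the pieces can be loaded independently; the control file name and credentials are placeholders) is to split the file into sub-2 GB chunks that a non-largefile sqlldr can open, then load each piece with the same control file. Demonstrated here with a tiny stand-in file and 1 KB pieces:

```shell
# Split a data file into pieces small enough for a non-largefile sqlldr,
# demonstrated with a tiny stand-in file and a 1 KB piece size.
rm -f /tmp/tstt_piece_*
printf 'row\n%.0s' 1 2 3 > /tmp/tstt_demo.dat    # stand-in for tstt.dat
split -b 1k /tmp/tstt_demo.dat /tmp/tstt_piece_  # real case: e.g. split -b 1900m
ls /tmp/tstt_piece_*                             # pieces to load one by one
# for f in /tmp/tstt_piece_*; do sqlldr user/pass control=tstt.ctl data="$f"; done
```

Caveat: splitting by bytes can cut a record in half; in practice you would split on line boundaries instead (split -l).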
Control file
C:\DOCUME~1\MAMOHI~1>type dept.ctl
load data
infile dept.dat
into table dept
append
fields terminated by ',' optionally enclosed by '"'
trailing nullcols
(deptno integer external,
dname char,
loc char)
Data file
C:\DOCUME~1\MAMOHI~1>type dept.dat
50,IT,VIKARABAD
60,INVENTORY,NIZAMABAD
C:\DOCUME~1\MAMOHI~1>
C:\DOCUME~1\MAMOHI~1>dir dept.*
Volume in drive C has no label.
Volume Serial Number is 9CCC-A1AF
Directory of C:\DOCUME~1\MAMOHI~1
09/21/2006 08:33 AM 177 dept.ctl
04/05/2007 12:17 PM 41 dept.dat
2 File(s) 8,043 bytes
0 Dir(s) 1,165 bytes free
Intelligent sqlldr command
C:\DOCUME~1\MAMOHI~1>sqlldr userid=hary/hary control=dept.ctl
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Apr 5 12:18:26 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 2
C:\DOCUME~1\MAMOHI~1>sqlplus hary/hary
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Apr 5 12:18:37 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
As I am appending I got two extra rows. One department in your district and another in my district :)
SQL> select * from dept;
DEPTNO DNAME LOC
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
50 IT VIKARABAD
60 INVENTORY NIZAMABAD
6 rows selected.
SQL> -
SQL Update statement taking too long..
Hi All,
I have a simple update statement that goes through a table of 95000 rows that is taking too long to update; here are the details:
Oracle Version: 11.2.0.1 64bit
OS: Windows 2008 64bit
desc temp_person;
Name Null? Type
PERSON_ID NOT NULL NUMBER(10)
DISTRICT_ID NOT NULL NUMBER(10)
FIRST_NAME VARCHAR2(60)
MIDDLE_NAME VARCHAR2(60)
LAST_NAME VARCHAR2(60)
BIRTH_DATE DATE
SIN VARCHAR2(11)
PARTY_ID NUMBER(10)
ACTIVE_STATUS NOT NULL VARCHAR2(1)
TAXABLE_FLAG VARCHAR2(1)
CPP_EXEMPT VARCHAR2(1)
EVENT_ID NOT NULL NUMBER(10)
USER_INFO_ID NUMBER(10)
TIMESTAMP NOT NULL DATE
CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
Index created.
ANALYZE INDEX tmp_rs_PERSON_ED COMPUTE STATISTICS;
Index analyzed.
explain plan for update temp_person
2 set first_name = (select trim(f_name)
3 from ext_names_csv
4 where temp_person.PERSON_ID=ext_names_csv.p_id
5 and temp_person.DISTRICT_ID=ext_names_csv.ed_id);
Explained.
@?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 3786226716
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 82095 | 4649K| 2052K (4)| 06:50:31 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 82095 | 4649K| 191 (1)| 00:00:03 |
|* 3 | EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV | 1 | 178 | 24 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
19 rows selected.
By the looks of it the update is going to take 6 hrs!!!
ext_names_csv is an external table that has the same number of rows as the PERSON table.
ROHO@rohof> desc ext_names_csv
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
F_NAME VARCHAR2(300)
L_NAME VARCHAR2(300)
Can anyone help diagnose this, please?
Thanks
Edited by: rsar001 on Feb 11, 2011 9:10 PM
Thank you all for the great ideas; you have been extremely helpful. Here is what we did to resolve the query.
We started with Etbin's idea to create a table from the ext table so that we can index and reference easier than an external table, so we did the following:
SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
Table created.
SQL> desc ext_person
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
FST_NAME VARCHAR2(300)
LST_NAME VARCHAR2(300)
SQL> select count(*) from ext_person;
COUNT(*)
93383
SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
Index created.
SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
PL/SQL procedure successfully completed.
We had a look at the plan with the original SQL query that we had:
SQL> explain plan for update temp_person
2 set first_name = (select fst_name
3 from ext_person
4 where temp_person.PERSON_ID=ext_person.p_id
5 and temp_person.DISTRICT_ID=ext_person.ed_id);
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 1236196514
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 93383 | 1550K| 186K (50)| 00:37:24 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 93383 | 1550K| 191 (1)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID| EXTT_PERSON | 9 | 1602 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EXT_PERSON_ED | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("EXT_PERSON"."P_ID"=:B1 AND "RS_PERSON"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
20 rows selected.
As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query and here are the results:
SQL> explain plan for MERGE INTO temp_person t
2 USING (SELECT fst_name ,p_id,ed_id
3 FROM ext_person) ext
4 ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
5 WHEN MATCHED THEN
6 UPDATE set t.first_name=ext.fst_name;
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 2192307910
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | MERGE STATEMENT | | 92307 | 14M| | 1417 (1)| 00:00:17 |
| 1 | MERGE | TEMP_PERSON | | | | | |
| 2 | VIEW | | | | | | |
|* 3 | HASH JOIN | | 92307 | 20M| 6384K| 1417 (1)| 00:00:17 |
| 4 | TABLE ACCESS FULL| TEMP_PERSON | 93383 | 5289K| | 192 (2)| 00:00:03 |
| 5 | TABLE ACCESS FULL| EXT_PERSON | 92307 | 15M| | 85 (2)| 00:00:02 |
Predicate Information (identified by operation id):
3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
Note
- dynamic sampling used for this statement (level=2)
21 rows selected.
As you can see, the update now takes 00:00:17 to run (need I say more?) :)
Thank you all for your ideas that helped us get to the solution.
Much appreciated.
Thanks -
Urgent :SQL Loader Arabic Character Set Issue
HI all,
I am loading Arabic characters into my database using SQL*Loader with a fixed-length data file. I have set my character set and NLS_LANG to UTF8. When I try to load the Arabic character 'B', i.e. ' لا ', it gets loaded as junk in the table. All other characters load correctly. Please help me with this issue; it is very urgent.
Thanks,
Karthik
Hi,
Thanks for the responses.
Even after setting the character set to Arabic, the problem persists. It occurs only with the character 'b'.
Please find my sample control file,input file and nls_parameters below:
My control file
LOAD DATA
characterset UTF8
LENGTH SEMANTICS CHAR
BYTEORDER little endian
INFILE 'C:\sample tape files\ARAB.txt'
replace INTO TABLE user1
TRAILING NULLCOLS
name POSITION(1:2) CHAR(1),
id POSITION (3:3) CHAR(1) ,
salary POSITION (4:5) CHAR(2)
My Input file - Fixed Format
?a01
??b02
?c03
The ? indicates Arabic characters. Arabic fonts must be installed to view them.
NLS_PARAMETERS
PARAMETER VALUE
NLS_LANGUAGE ARABIC
NLS_TERRITORY UNITED ARAB EMIRATES
NLS_CURRENCY ?.?.
NLS_ISO_CURRENCY UNITED ARAB EMIRATES
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD/MM/RR
NLS_DATE_LANGUAGE ARABIC
NLS_SORT ARABIC
NLS_TIME_FORMAT HH12:MI:SSXFF PM
NLS_TIMESTAMP_FORMAT DD/MM/RR HH12:MI:SSXFF PM
NLS_TIME_TZ_FORMAT HH12:MI:SSXFF PM TZR
NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH12:MI:SSXFF PM TZR
NLS_DUAL_CURRENCY ?.?.
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS CHAR
NLS_NCHAR_CONV_EXCP FALSE -
Sql loader - skip record question
I am running Oracle 9 and using SQL*Loader to import a text file into a table. Can SQL*Loader skip records that contain a blank line or carriage return? Do I need to set this up with options? Please advise me how. Thanks.
http://docs.oracle.com/cd/B10500_01/server.920/a96652/ch05.htm
http://www.orafaq.com/wiki/SQL*Loader_FAQ -
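SQL*Loader itself has no switch for skipping blank lines in 9i; the usual approach (a sketch, file names made up) is to pre-filter the data file before handing it to sqlldr:

```shell
# Strip empty lines and lines holding only whitespace or a bare
# carriage return, then feed the cleaned file to sqlldr.
printf 'a,1\n\n\r\nb,2\n' > /tmp/in.dat            # 2 real records, 2 blanks
grep -v '^[[:space:]]*$' /tmp/in.dat > /tmp/clean.dat
cat /tmp/clean.dat                                 # only a,1 and b,2 remain
```

Alternatively, a WHEN clause on a mandatory field position may send such records to the discard file instead of rejecting the load.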
Does sql loader erase records?
When I use the sql loader for an existing table, does it clear the table of records and copy everything over...or does it simply add records? I don't want to go into the table and change everything w/o first finding out. Thanks for the help.
Cary =)
It depends on what you've specified in the control file. You can specify commands like insert, replace, truncate and append. I suggest you read the Oracle Utilities manual for your particular version for more information. There are also some good examples in the manuals. You can find Oracle manuals online at http://tahiti.oracle.com.
I don't want to go into the table and change everything w/o first finding out. I hope this statement doesn't mean you try things in a production environment without testing them first. -
Hello All,
I want to load records into a table using SQL Loader. I want to do the following(using a column in the data file),
1. If the flag is I insert the record.
2. If the flag is U update the record.
3. If the flag is D delete the record.
What are the options available in SQL Loader to achieve this.
Thanks,
Kannan.
Hi Kannan,
You have 2 solutions to achieve the result.
1. If you are running SQL*Loader in a Unix environment, I suggest you use an AWK script to filter out the records you need (for insertion/updating) and discard the records with flag D in your data file.
For Example
If your data file is load.txt, with "|" (pipe) as the field delimiter and the flag column at the 4th position, you can use an awk script.
/home/bin/cat load.txt | nawk -F "|" '{ if ($4=="I" || $4=="U") print $0 }' | more
2. Just load all the data into the table and then filter based on the flag column in the table (insertion/updating/deletion).
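The AWK pre-filter described above can be sketched like this (load.txt, the pipe delimiter and the 4th-column flag are taken from the post; portable awk stands in for nawk):

```shell
# Keep only records whose 4th pipe-delimited field is I or U;
# D records are dropped before the file ever reaches sqlldr.
printf '1|x|y|I\n2|x|y|D\n3|x|y|U\n' > /tmp/load.txt
awk -F '|' '$4 == "I" || $4 == "U"' /tmp/load.txt
```

The D records would then have to be handled separately (e.g. turned into DELETE statements), since SQL*Loader itself only inserts.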
Hope this helps..
Regards,
Achyut -
Nawk message input record too long
I am running a shell script which uses nawk.
I am processing a huge input file.
I get a message that an input record is too long.
What am I supposed to do to indicate to nawk the max record line length ?
My e-mail is [email protected]
Any info will be appreciated.
The 6144 limit can be verified with:
% perl -e 'print "A"x6145, "\n"' | nawk '{ print length($0); }'
nawk: input record `AAAAAAAAAAAAAAAAAAAA...' too long
source line number 1
(it works when you change the perl print statement to "A"x6144).
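The same check can be reproduced without perl; a sketch using awk alone to build and measure a 6145-character record (an awk carrying the old 6144-byte limit errors out here, while other implementations simply print the length):

```shell
# Build one record of 6145 'A's and measure it; a nawk with the 6144
# limit reports "input record too long" instead of printing 6145.
awk 'BEGIN { s = ""; for (i = 0; i < 6145; i++) s = s "A"; print s }' \
  | awk '{ print length($0) }'
```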
Quick solution could be to use gawk instead of nawk
(/opt/sfw/bin/gawk, if you have the Solaris 8 companion
CD installed). -
Windows loading time is too long!
After I installed Desktop Manager ver. 4.6 on my PC, the PC's boot time became very long.
It takes 3 minutes to load the desktop screen when turning on the PC (without Desktop Manager it takes one minute and 30 seconds).
Does anyone have the same issue? OS: XP Pro
After it finishes loading, I do not notice any slowness.
Please help me out.
Do you have the Desktop Manager set to load when the computer boots up? If so, you can disable that by going into Add/Remove Programs, selecting the Desktop Manager and hitting the Change/Remove button. Hit Modify, select the BlackBerry Desktop Manager and hit Next. Keep going through the wizard, picking the same options you did when installing it. On the last screen, before you hit 'Install', uncheck the box that says to start the Desktop Manager when the computer starts.
I am using the sqlldr utility to load data from a CSV file into a table.
My .ctl file looks as below
------- 8< -------
options (errors=5,SILENT=(HEADER, FEEDBACK),direct=true)
load data
infile "mytest.csv"
discardmax 0
into table owneruser.MY_TABLE
fields terminated by "," optionally enclosed by "##"
(ID, ID1, VAL, VAL2)
------- 8< -------
The sqlldr tool is run with this ctl file by another database user who has sufficient privileges to insert this data. mytest.csv has about 400,000 entries; each entry maps to one row in MY_TABLE. Before loading the data with sqlldr, MY_TABLE is truncated.
In mytest.csv, the value for ID field is a number starting at 1 which keeps incrementing by 1 for the next entry. The records are ordered by ID in the csv file.
After loading the data using sqlldr, when we query MY_TABLE (select * from MY_TABLE), so far the records have been returned in the same order in which they were inserted (i.e. ordered by ID). But of late they are being returned in random order. This happens only on one database instance; on the other test instances the result set is ordered. I agree that the only way the order can be guaranteed is by using the ORDER BY clause.
But, I was wondering why this has worked even when ORDER BY is not used.
This is the only way in which MY_TABLE is manipulated. Rest all use it only for querying.
ID is the primary key column in MY_TABLE and there is an index on (ID, ID1).
Thanks in advance.
S
There are any number of reasons the data could come back in a different order since you're not using an ORDER BY. My guess is that the most likely reason is that you have one or more extents in your table that are physically before another extent they are logically after, in which case a full scan reads that extent first. You may also be seeing differences in how ASSM happens to choose which block to insert into, in the use of parallelism, etc.
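For completeness, the only way to guarantee the order (as the poster already suspected) is an explicit ORDER BY; since ID is the primary key here, it costs little:

```
SELECT * FROM MY_TABLE ORDER BY ID;
```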
Justin -
Sql loader (catching record length error)
Guys, is there any option in sqlldr that can catch record-length errors (records shorter or longer than a certain length)? I am using Java to execute sqlldr and would like to know if it is possible to catch those errors.
thanks
Manohar.
Use CHAR instead of VARCHAR:
LOAD DATA
INFILE *
APPEND INTO TABLE test
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
(
first_id,
second_id,
third_id,
language_code,
display_text CHAR(2000)
)
From the docs:
A VARCHAR field is a length-value datatype.
It consists of a binary length subfield followed by a character string of the specified length.
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76955/ch05.htm#20324 -
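If the goal is to detect over-length records before sqlldr runs (so the Java caller can react to the exit status or output), a pre-scan sketch; the 2000-byte limit here just mirrors the CHAR(2000) above:

```shell
# Report any record longer than 2000 bytes, with its line number.
long=$(awk 'BEGIN { for (i = 0; i < 2100; i++) printf "x" }')  # 2100-byte test record
printf 'short\n%s\n' "$long" > /tmp/data.dat
awk 'length($0) > 2000 { print NR ": record too long (" length($0) ")" }' /tmp/data.dat
```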
I have files which have input records greater than 12000 characters ... what is the workaround?
Try gnu awk. It has no record length limits.
-
I can't get into a development site that colleagues in the office have no difficulty with.
What is your web page?
What is your Firefox version?
What is your operating system?
Hello All,
Is there a way I can add line-number/record-number into my tables. For example if my input file has 10 records, I want 10 rows with record_number=1-10?
Need your help!
Thanks
Maybe (s)he wasn't referring to "sequence" the database object ... rather, the "sequence" parameter for SQL*Loader.
Anyway, I think you need RECNUM ... search the doco for "RECNUM" ... close by you'll find an explanation of what "SEQUENCE", the SQL*Loader parameter, does too.
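A minimal control-file sketch using RECNUM (the table and column names are made up; RECNUM numbers every logical input record, including skipped and discarded ones):

```
LOAD DATA
INFILE 'data.dat'
APPEND INTO TABLE target_table
FIELDS TERMINATED BY ','
( line_no RECNUM,   -- 1, 2, 3, ... per input record
  col_a,
  col_b )
```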