Issue with SQL*Loader and the last field
Hi
I have a data file to load. The load runs fine, but the last field comes in with a junk character when it is loaded into the table.
When we open the file, nothing unusual is visible.
I tried replacing ^M and chr(10), but to no avail.
Can you please assist me with this? We have a go-live soon.
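A quick way to pin down what the junk character actually is: dump the raw bytes of the last record instead of viewing the rendered text. A minimal Python sketch (the filename data.dat and the sample content are placeholders, not the real datafile):

```python
# Simulate a datafile whose last record carries an invisible trailing
# character (here a carriage return, the classic DOS-to-Unix transfer artifact).
sample = b"row1,75\nrow2,67\r\n"
with open("data.dat", "wb") as f:   # "data.dat" is a stand-in name
    f.write(sample)

# Dump the raw bytes of the last record so the junk becomes visible;
# on Unix, `od -c datafile | tail` gives the same information.
with open("data.dat", "rb") as f:
    last_record = f.read().rstrip(b"\n").split(b"\n")[-1]
print(repr(last_record))  # b'row2,67\r' -- the \r is the "junk" character
```

If the repr shows something other than \r or \n (for example \x1a, a DOS end-of-file marker), that is the character to strip before the load.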
You need to provide us with more details...
SQL and PL/SQL FAQ
By the way:
Have you considered using external tables?
They have more benefits than SQL*Loader...
http://www.oracle-developer.net/display.php?id=204
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6611962171229
http://www.oracle-base.com/articles/9i/ExternalTables9i.php
Similar Messages
-
Import and process larger data with SQL*Loader and Java resource
Hello,
I have a project to import data from a text file on a schedule, a large volume of data: nearly 20,000 records per hour.
After that, we have to analyze the data and export the results into another database.
I have researched SQL*Loader and Java resources for these tasks, but I have no experience with either.
I'm afraid that with this much data, Oracle could slow down or the session in the Java resource application could time out.
Please give me some advice on the solution.
Thank you very much.
With the '?' mark I mean: "How can I link this COL1 with a column in the csv file?"
Attilio -
Problem importing a csv file with SQL*Loader and a control file
I have a *.csv file looking like this:
E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
I want to import this csv file to this table:
create table artikel (artnr varchar2(10), namn varchar2(25), fp_storlek number, datum date, mtrlid varchar2(5), pris number);
My controlfile looks like this:
LOAD DATA
INFILE 'e:\test.csv'
INSERT
INTO TABLE ARTIKEL
FIELDS TERMINATED BY ';'
TRAILING NULLCOLS
(ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
I can't get SQL*Loader to import the last column (pris) the way I want. It ignores my decimal separator, which in this case is "," and not "."; maybe this is the problem. If the decimal separator is the problem, how can I get Oracle to recognize "," as a decimal separator?
The result of the import now is that a decimal number (37,2) becomes 372 in the table.
Set the NLS_NUMERIC_CHARACTERS environment variable at OS level before running SQL*Loader:
$ cat test.csv
E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
$ cat artikel.ctl
LOAD DATA
INFILE 'test.csv'
replace
INTO TABLE ARTIKEL
FIELDS TERMINATED BY ';'
TRAILING NULLCOLS
(ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
$ sqlldr scott/tiger control=artikel
SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:01 2005
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Commit point reached - logical record count 6
$ sqlplus scott/tiger
SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:11 2005
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> select * from artikel;
ARTNR NAMN FP_STORLEK DATUM MTRLI PRIS
E0100070 EKKJ 1X10/10 1 KV 1 16/06/2003 01C 75
E0100075 EKKJ 1X10/10 1 KV 500 16/06/2003 01C 67
E0100440 EKKJ 2X2,5/2,5 1 KV 1 16/06/2003 01C 372
E0100445 EKKJ 2X2,5/2,5 1 KV 500 16/06/2003 01C 332
E0100450 EKKJ 2X4/4 1 KV 1 16/06/2003 01C 53
E0100455 EKKJ 2X4/4 1 KV 500 16/06/2003 01C 471
6 rows selected.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
$ export NLS_NUMERIC_CHARACTERS=',.'
$ sqlldr scott/tiger control=artikel
SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:41 2005
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Commit point reached - logical record count 6
$ sqlplus scott/tiger
SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:45 2005
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> select * from artikel;
ARTNR NAMN FP_STORLEK DATUM MTRLI PRIS
E0100070 EKKJ 1X10/10 1 KV 1 16/06/2003 01C 75
E0100075 EKKJ 1X10/10 1 KV 500 16/06/2003 01C 67
E0100440 EKKJ 2X2,5/2,5 1 KV 1 16/06/2003 01C 37,2
E0100445 EKKJ 2X2,5/2,5 1 KV 500 16/06/2003 01C 33,2
E0100450 EKKJ 2X4/4 1 KV 1 16/06/2003 01C 53
E0100455 EKKJ 2X4/4 1 KV 500 16/06/2003 01C 47,1
6 rows selected.
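What the two runs demonstrate can be reproduced outside Oracle. A rough sketch of how a decimal separator and a group separator interact (an illustration only, not Oracle's actual parsing code):

```python
def to_number(text: str, decimal_sep: str = ".", group_sep: str = ",") -> float:
    """Loosely mimic NLS_NUMERIC_CHARACTERS: the group separator is
    stripped, the decimal separator is normalized to '.'."""
    return float(text.replace(group_sep, "").replace(decimal_sep, "."))

# Default '.,' (decimal '.', group ','): the comma in '37,2' is read as a
# group separator and silently dropped, producing the bad value.
print(to_number("37,2"))                                   # 372.0
# With ',.' (decimal ',', group '.'), as set via NLS_NUMERIC_CHARACTERS:
print(to_number("37,2", decimal_sep=",", group_sep="."))   # 37.2
```

That silent drop of the comma is exactly the 372 seen in the first run above.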
SQL>
The control file is exactly like yours; I just put REPLACE instead of INSERT. -
SQL*Loader and "Variable length field was truncated"
Hi,
I'm experiencing this problem using SQL*Loader: Release 8.1.7.0.0
Here is my control file (it's actually split into separate control and data files, but the result is the same)
LOAD DATA
INFILE *
APPEND INTO TABLE test
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
(
first_id,
second_id,
third_id,
language_code,
display_text VARCHAR(2000)
)
begindata
2,1,1,"eng","Type of Investment Account"
The TEST table is defined as:
Name Null? Type
FIRST_ID NOT NULL NUMBER(4)
SECOND_ID NOT NULL NUMBER(4)
THIRD_ID NOT NULL NUMBER(4)
LANGUAGE_CODE NOT NULL CHAR(3)
DISPLAY_TEXT VARCHAR2(2000)
QUESTION_BLOB BLOB
The log file displays:
Record 1: Warning on table "USER"."TEST", column DISPLAY_TEXT
Variable length field was truncated.
And the results of the insert are:
FIRST_ID SECOND_ID THIRD_ID LANGUAGE_CODE DISPLAY_TEXT
2 1 1 eng ype of Investment Account"
The language_code field is imported correctly, but display_text keeps the closing delimiter, and loses the first character of the string. In other words, it is interpreting the enclosing double quote and/or the delimiter, and truncating the first two characters.
I've also tried the following:
LOAD DATA
INFILE *
APPEND INTO TABLE test
FIELDS TERMINATED BY '|'
(
first_id,
second_id,
third_id,
language_code,
display_text VARCHAR(2000)
)
begindata
2|1|1|eng|Type of Investment Account
In this case, display_text is imported as:
pe of Investment Account
In the log file, I get this table which seems odd as well - why is the display_text column shown as having length 2002 when I explicitly set it to 2000?
Column Name Position Len Term Encl Datatype
FIRST_ID FIRST * | O(") CHARACTER
SECOND_ID NEXT * | O(") CHARACTER
THIRD_ID NEXT * | O(") CHARACTER
LANGUAGE_CODE NEXT 3 | O(") CHARACTER
DISPLAY_TEXT NEXT 2002 VARCHAR
Am I missing something totally obvious in my control and data files? I've played with various combinations of delimiters (commas vs '|'), trailing nullcols, optionally enclosed, etc.
Any help would be greatly appreciated!
Use CHAR instead of VARCHAR:
LOAD DATA
INFILE *
APPEND INTO TABLE test
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
(
first_id,
second_id,
third_id,
language_code,
display_text CHAR(2000)
)
From the docs:
A VARCHAR field is a length-value datatype.
It consists of a binary length subfield followed by a character string of the specified length.
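The quoted definition explains the symptom exactly. A sketch of a length-value read (the two-byte little-endian subfield here is an assumption for illustration; the real subfield size is platform-dependent): when plain text is supplied where a VARCHAR field is expected, the leading characters get consumed as a bogus length.

```python
import struct

def read_varchar(raw: bytes, prefix_bytes: int = 2) -> bytes:
    """Read a length-prefixed (VARCHAR-style) field: an integer length
    subfield followed by that many bytes of data."""
    (length,) = struct.unpack("<H", raw[:prefix_bytes])
    return raw[prefix_bytes:prefix_bytes + length]

plain = b"Type of Investment Account"
# Feeding plain text where a length subfield is expected: the first two
# bytes ("Ty") are consumed as a bogus length, so the "string" begins at
# the third character -- which is why the leading characters vanish.
print(read_varchar(plain))  # b'pe of Investment Account'
```

That is the same "pe of Investment Account" seen in the pipe-delimited test above.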
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76955/ch05.htm#20324 -
Special character loading issue with SQL Loader
Hi,
I am getting special characters in my input file to SQL*Loader. The problem is that because the input string (containing special characters) is more than 1000 characters long, its converted byte length exceeds 4000; the SUBSTRB() call defined in the control file does not handle this, and the target ADDR_LINE column in table TEST_TAB is defined as VARCHAR2(1024 CHAR).
Below are a sample control file and the data I am using:
LOAD DATA
CHARACTERSET UTF8
INFILE 'updated.txt'
APPEND INTO TABLE TEST_TAB
FIELDS TERMINATED BY "|~"
TRAILING NULLCOLS
(
INDX_WORD,
ADDR_LINE "SUBSTRB(:ADDR_LINE, 1, 1000)",
CITY
)
Below is the actual data I receive in the input file for the ADDR_LINE column:
'RUA PEDROSO ALVARENGA, 1284 AƒAƒA‚AƒAƒA‚A‚AƒAƒAƒA‚A‚AƒA‚A‚AƒAƒAƒA‚AƒAƒA‚A‚A‚AƒAƒA‚A‚AƒA‚A‚AƒAƒAƒA‚AƒAƒA‚A‚AƒAƒAƒA‚A‚AƒA‚A‚A‚AƒAƒA‚AƒAƒA‚A‚A‚AƒAƒA‚A‚AƒA‚A‚A‚AƒAƒA‚AƒAƒA‚A‚AƒAƒAƒA‚A‚AƒA‚A‚AƒAƒAƒA‚AƒAƒA‚A‚A‚AƒAƒA‚A‚AƒA‚A‚A‚AƒAƒA‚AƒAƒA‚A‚AƒAƒAƒA‚A‚AƒA‚A‚A‚AƒAƒA‚AƒAƒA‚A‚A‚AƒAƒA‚A‚AƒA‚A‚A– 10 ANDAR'
My database is having following settings.
NLS_CALENDAR GREGORIAN
NLS_CHARACTERSET AL32UTF8
NLS_COMP BINARY
NLS_CURRENCY $
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_DUAL_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_LANGUAGE AMERICAN
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_NCHAR_CONV_EXCP FALSE
NLS_NUMERIC_CHARACTERS .,
NLS_SORT BINARY
NLS_TERRITORY AMERICA
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
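For reference, a byte-safe truncation never splits a multi-byte character, which is the hazard when UTF8 data is cut at a fixed byte offset. A Python sketch of the idea (illustration only, not the control-file fix; the sample string is made up):

```python
def truncate_utf8(text: str, max_bytes: int) -> str:
    """Truncate a string so its UTF-8 encoding fits in max_bytes
    without splitting a multi-byte character."""
    raw = text.encode("utf-8")[:max_bytes]
    # errors="ignore" drops a trailing partial character instead of raising
    return raw.decode("utf-8", errors="ignore")

s = "RUA PEDROSO ALVARENGA, 1284 " + "\u00c3" * 600  # each 'Ã' is 2 bytes in UTF-8
t = truncate_utf8(s, 1000)
print(len(t.encode("utf-8")))  # at most 1000, and still valid UTF-8
```

A blind byte cut, by contrast, can leave half a character at the end, and downstream character-semantics checks then count characters while the 4000-byte limit counts bytes, so the two limits disagree.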
Any help in this regard will be much appreciated.
Thanks in advance.
Is the data file created directly on the Unix server? If not, how does it get to the Unix server, and where does the file come from? Is the UTF8 locale installed on the Unix server (check with the Unix sysadmin)?
HTH
Srini -
Problem with SQL*Loader and different date formats in the same file
DB: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
System: AIX 5.3.0.0
Hello,
I'm using SQL*Loader to import semi-colon separated values into a table. The files are delivered to us by a data provider who concatenates data from different sources and this results in us having different date formats within the same file. For example:
...;2010-12-31;22/11/1932;...
I load this data using the following lines in the control file:
EXECUTIONDATE1 TIMESTAMP NULLIF EXECUTIONDATE1=BLANKS "TO_DATE(:EXECUTIONDATE1, 'YYYY-MM-DD')",
DELDOB TIMESTAMP NULLIF DELDOB=BLANKS "TO_DATE(:DELDOB, 'DD/MM/YYYY')",
The relevant NLS parameters:
NLS_LANGUAGE=FRENCH
NLS_DATE_FORMAT=DD/MM/RR
NLS_DATE_LANGUAGE=FRENCH
If I load this file as is, the values loaded into the table are 31 dec 2010 and 22 nov *2032*, even though the years are written with 4 digits. If I change NLS_DATE_FORMAT to DD/MM/YYYY, then the second date value is loaded correctly, but the first value is loaded as 31 dec *2020*!!
How can I get both date values to load correctly?
Thanks!
Sylvain
This is very strange. After running a few tests I realized that if the year is 19XX it gets loaded as 2019, and if it is 20XX it becomes 2020. I'm guessing it may have something to do with certain env variables that aren't set up properly, because I'm fairly sure my SQL*Loader control file is correct... I'll run more tests :-(
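For comparison, the per-field-mask approach works as expected outside Oracle: each field gets its own explicit mask, so no session-level format (and no RR-year windowing) is ever consulted. One thing worth testing in the control file, offered as an assumption rather than a verified fix, is declaring the two fields as plain CHAR so that only the TO_DATE masks in the SQL expressions do the conversion.

```python
from datetime import datetime

# Hypothetical helper mirroring the control-file intent: one explicit
# mask per field, no ambient session format involved.
def parse_date(value: str, fmt: str) -> datetime:
    return datetime.strptime(value, fmt)

print(parse_date("2010-12-31", "%Y-%m-%d").year)   # 2010 (EXECUTIONDATE1 style)
print(parse_date("22/11/1932", "%d/%m/%Y").year)   # 1932 (DELDOB style)
```

The fact that both dates survive with 4-digit years here suggests the problem is not the masks themselves but that something else (a datatype-level conversion before the SQL expression runs) is intercepting the values.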
-
Problems with SQL*Loader and java Runtime
Hi. I'm trying to start SQL*Loader on Oracle 8 by using the Runtime class in this way:
try{
Process p = Runtime.getRuntime().exec( "c:\\oracle\\ora81\\bin\\sqlldr.exe parfile=c:\\parfile\\carica.par" );
/*If i insert this line my application never stops*/
p.waitFor();
}catch( Exception e ){
I have seen that if there are fewer than 400 lines to insert, everything works fine; but if the number of lines is greater than 400, all the data reaches my tables, yet my log file stays open for writing. Can anyone tell me why?
Thanks
Just a note: if the executable "sqlldr.exe" does not stop (quit running) by itself, p.waitFor() will wait forever.
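There is a second classic reason for exec() to hang once the load grows: if the parent never drains the child's stdout/stderr, the OS pipe buffer fills and both processes block. The same pitfall exists in Python's subprocess module, where communicate() is the standard cure; a sketch:

```python
import subprocess
import sys

# If the child writes more output than the OS pipe buffer holds and the
# parent only calls wait()/waitFor() without draining stdout, both sides
# block forever. communicate() drains the pipe while waiting, so it
# cannot deadlock.
child = subprocess.Popen(
    [sys.executable, "-c", "print('x' * 200000)"],  # ~200 KB, more than a pipe buffer
    stdout=subprocess.PIPE,
)
out, _ = child.communicate()
print(len(out))
```

In the Java code above, the equivalent is to read p.getInputStream() and p.getErrorStream() (or redirect them) before calling p.waitFor(); that would also explain why small loads under ~400 lines never hit the problem.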
-
There is an issue with SQL*Loader
I have a "|"-separated file whose first column signifies the record type. The record types are of variable length, i.e. record type 1 has around 100 chars while type 2 has around 50, etc.
We plan to load this data into multiple tables using a loader script. The loader script works fine but loads data into only the first table; the other record types get rejected.
Some sample records from the file is as follows
1|1001725|N|2003-10-01|CON|2003-10-07|0020OTOFF|2222222|D|1229.29|983.53|.00|2|0|0|0
2|1001175|1|N|D|XT|4444*1111|549.60
Please note 1| and 2| indicate the record types mentioned above .
our control file looks like this
LOAD DATA
into table summ_order
when (1) = '1'
fields terminated by "|"
(
Record_Identifier,
OPS_Order_ID,
Record_Status,
Transaction_Date,
Booking_Status,
Departure_Date,
Branch_Code,
Consultant_Ref,
DC_Indicator,
Total_GrossCost,
Total_NetCost,
Total_Discounts,
No_of_Adults,
No_of_Senior_Citizens,
No_of_Children,
No_of_Infants
)
into table pay_order1
when (1) = '2'
fields terminated by "|"
trailing nullcols
(
Record_Identifier,
OPS_Order_ID,
Payment_Record_ID,
Record_Status,
DC_Indicator,
Payment_Method,
Payment_Ref,
Payment_Amount
)
The log shows that the first table is populated successfully, while record type 2 shows these errors:
Record 1: Discarded - all columns null.
Record 26: Discarded - all columns null.
Record 27: Discarded - all columns null.
Record 28: Discarded - all columns null.
Record 29: Discarded - all columns null.
Record 30: Discarded - all columns null.
Record 31: Discarded - all columns null.
Record 32: Discarded - all columns null.
Record 33: Discarded - all columns null.
If I remove the "trailing nullcols" clause from the ctl file, the error says something like "end of logical record / column not found".
Currently I have prepared multiple loader scripts as a workaround, but this will become an issue once more record types are introduced in the future.
Please note the record types are of variable length.
Could somebody please help me?
Thank you
See [Benefits of Using Multiple INTO TABLE Clauses|http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_control_file.htm#i1005798] and [Loading Data into Multiple Tables|http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_control_file.htm#i1005894]. You need to add a position parameter:...
into table pay_order1
when (1) = '2'
fields terminated by "|"
trailing nullcols
Record_Identifier position(1),
OPS_Order_ID, -
HELP: SQL*LOADER AND Ref Column
Hello,
I have already posted about this; I really need help and am not getting any further.
I have the following problem. I have 2 tables which I created the following way:
CREATE TYPE gemark_schluessel_t AS OBJECT(
gemark_id NUMBER(8),
gemark_schl NUMBER(4),
gemark_name VARCHAR2(45)
);
CREATE TABLE gemark_schluessel_tab OF gemark_schluessel_t(
constraint pk_gemark PRIMARY KEY(gemark_id)
);
CREATE TYPE flurstueck_t AS OBJECT(
flst_id NUMBER(8),
flst_nr_zaehler NUMBER(4),
flst_nr_nenner NUMBER(4),
zusatz VARCHAR2(2),
flur_nr NUMBER(2),
gemark_schluessel REF gemark_schluessel_t,
flaeche SDO_GEOMETRY
);
CREATE TABLE flurstuecke_tab OF flurstueck_t(
constraint pk_flst PRIMARY KEY(flst_id),
constraint uq_flst UNIQUE(flst_nr_zaehler,flst_nr_nenner,zusatz,flur_nr),
flst_nr_zaehler NOT NULL,
flur_nr NOT NULL,
gemark_schluessel REFERENCES gemark_schluessel_tab
);
Now I have data in the gemark_schluessel_tab which looks like this (a sample):
1 101 Borna
2 102 Draisdorf
Now I want to load data into my flurstuecke_tab with SQL*Loader, and there I have problems with my ref column gemark_schluessel.
One data record looks like this in my file (it is without geometry)
1|97|7||1|1|
When I try to load my data record, it does not work. The reference (the system-generated OID) should be taken from gemark_schluessel_tab.
LOAD DATA
INFILE *
TRUNCATE
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE FLURSTUECKE_TAB
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS (
flst_id,
flst_nr_zaehler,
flst_nr_nenner,
zusatz,
flur_nr,
gemark_schluessel REF(CONSTANT 'GEMARK_SCHLUESSEL_TAB',GEMARK_ID),
gemark_id FILLER
)
BEGINDATA
1|97|7||1|1|
Is there an error I made?
Thanks in advance
Tig
Multiple duplicate threads:
to call an oracle procedure and sql loader in an unix script
Re: Can some one help he sql loader issue. -
SQL*Loader-704 and ORA-12154 error messages when trying to load data with SQL*Loader
I have a data base with two tables that is used by Apex 4.2. One table has 800,000 records . The other has 7 million records
The client recently upgraded from Apex 3.2 to Apex 4.2 . We exported/imported the data to the new location with no problems
The source of the data is an old mainframe system; I needed to make changes to the source data and then load the tables.
The first time I loaded the data i did it from a command line with SQL loader
Now when I try to load the data I get this message:
SQL*Loader-704: Internal error: ulconnect OCISERVERATTACH
ORA-12154: TNS:could not resolve the connect identifier specified
I've searched for postings on these error messages, and they all seem to say that SQL*Loader can't find my TNSNAMES file.
I am able to connect and load data with SQL Developer, so SQL Developer can find the TNSNAMES file.
However, SQL Developer will not let me load a file this big.
I have also tried to load the file within Apex (SQL Workshop / Utilities), but again, the file is too big.
So it seems like SQL*Loader is the only option.
I did find one post online that said to set an environment variable with the path to the TNSNAMES file, but that didn't work.
Not sure what else to try or where to look.
thanks
Hi,
You must have more than one tnsnames file or multiple installations of Oracle. What I suggest you do (as I'm sure is mentioned in the link you were already pointed at) is the following (I assume you are on Windows?):
open a command prompt
set TNS_ADMIN=PATH_TO_DIRECTORY_THAT_CONTAINS_CORRECT_TNSNAMES_FILE (i.e. something like set TNS_ADMIN=c:\oracle\network\admin)
This tells Oracle to use the config files found there and no others.
then try sqlldr user/pass@db (in the same dos window)
see if that connects and let us know.
Cheers,
Harry
http://dbaharrison.blogspot.com -
SQL*Loader and unzipping with uploaded file
Hi, everybody.
I was trying to integrate some functionalities that are separate today. I made a simple Apex page that uploads a file, and now I need to process that file.
Is there a way to process that uploaded file and load it into a table, just as I would with SQL*Loader?
In the same way, can I unzip an uploaded zip file, just as Oracle Portal 3.0.9 used to do in its content areas with zip items?
thanks
Hi, Ivo
About utl_compress: OK, I'll check it out.
Talking about uploaded files, the only things I've found were:
a) download an uploaded file
b) use an uploaded image in a web application.
Unfortunately, I didn't find anything that could help me process an uploaded text file the way I do with SQL*Loader.
I'll keep on trying.
thanx
SQL*Loader and SDO_GEOMETRY field
Is there a way to load longitude/latitude coordinates from a flat file directly into an SDO_GEOMETRY column with SQL*Loader? If so, does anyone have an example? I just want to create point features.
Thanks,
David
David,
Section 4.1.2 of the Spatial User Guide has examples for this:
LOAD DATA
INFILE *
TRUNCATE
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE POINT
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS (
GID INTEGER EXTERNAL,
GEOMETRY COLUMN OBJECT
(SDO_GTYPE INTEGER EXTERNAL,
SDO_POINT COLUMN OBJECT
(X FLOAT EXTERNAL,
Y FLOAT EXTERNAL)))
BEGINDATA
1| 2001| -122.4215| 37.7862|
2| 2001| -122.4019| 37.8052|
3| 2001| -122.426| 37.803|
4| 2001| -122.4171| 37.8034|
5| 2001| -122.416151| 37.8027228|
You need to add the SRID field if you want to add the SRID to the geometry.
Or you can do that with SQL after the data is loaded into the DB.
siva -
Problem: load PDF or similar files (stored on the operating system) into an Oracle table using SQL*Loader,
and then unload the files from the Oracle tables back to their previous format.
I've used the SQL*Loader "sqlldr" command as:
" sqlldr scott/[email protected] control=c:\sqlldr\control.ctl log=c:\any.txt "
Control file is written as :
LOAD DATA
INFILE 'c:\sqlldr\r_sqlldr.txt'
REPLACE
INTO table r_sqlldr
Fields terminated by ','
(
id sequence (max,1),
fname char(20),
data LOBFILE(fname) terminated by EOF )
It loads the files (PDF, image, and more) that are listed in the file r_sqlldr.txt into the Oracle table r_sqlldr.
The text file (used as the source) is written as:
c:\kalam.pdf,
c:\CTSlogo1.bmp
c:\any1.txt
After this load, I used UTL_FILE to unload the data, with a procedure like this:
CREATE OR REPLACE PROCEDURE R_UTL AS
l_file UTL_FILE.FILE_TYPE;
l_buffer RAW(32767);
l_amount BINARY_INTEGER ;
l_pos INTEGER := 1;
l_blob BLOB;
l_blob_len INTEGER;
BEGIN
SELECT data
INTO l_blob
FROM r_sqlldr
where id= 1;
l_blob_len := DBMS_LOB.GETLENGTH(l_blob);
DBMS_OUTPUT.PUT_LINE('blob length : ' || l_blob_len);
IF (l_blob_len < 32767) THEN
l_amount :=l_blob_len;
ELSE
l_amount := 32767;
END IF;
DBMS_LOB.OPEN(l_blob, DBMS_LOB.LOB_READONLY);
l_file := UTL_FILE.FOPEN('DBDIR1','Kalam_out.pdf','w', 32767);
DBMS_OUTPUT.PUT_LINE('File opened');
WHILE l_pos < l_blob_len LOOP
DBMS_LOB.READ (l_blob, l_amount, l_pos, l_buffer);
DBMS_OUTPUT.PUT_LINE('Blob read');
l_pos := l_pos + l_amount;
UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
DBMS_OUTPUT.PUT_LINE('writing to file');
UTL_FILE.FFLUSH(l_file);
UTL_FILE.NEW_LINE(l_file);
END LOOP;
UTL_FILE.FFLUSH(l_file);
UTL_FILE.FCLOSE(l_file);
DBMS_OUTPUT.PUT_LINE('File closed');
DBMS_LOB.CLOSE(l_blob);
EXCEPTION
WHEN OTHERS THEN
IF UTL_FILE.IS_OPEN(l_file) THEN
UTL_FILE.FCLOSE(l_file);
END IF;
DBMS_OUTPUT.PUT_LINE('Its working at last');
END R_UTL;
This loads data from r_sqlldr table (BOLBS) to files on operating system ,,,
-> Same procedure with minor changes is used to unload other similar files like Images and text files.
In above example : Loading : 3 files 1) Kalam.pdf 2) CTSlogo1.bmp 3) any1.txt are loaded into oracle table r_sqlldr 's 3 rows respectively.
file names into fname column and corresponding data into data ( BLOB) column.
Unload : And than these files are loaded back into their previous format to operating system using UTL_FILE feature of oracle.
so PROBLEM IS : Actual capacity (size ) of these files is getting unloaded back but with quality decreased. And PDF file doesnt even view its data. means size is almot equal to source file but data are lost when i open it.....
and for images .... imgaes are getting loaded an unloaded but with colors changed ....
Also features ( like FFLUSH ) of Oracle 've been used but it never worked
ANY SUGGESTIONS OR aLTERNATE SOLUTION TO LOAD AND UNLOAD PDFs through Oracle ARE REQUESTED.
------------------------------------------------------------------------------------------------------------------------Thanks Justin ...for a quick response ...
well ... i am loading data into BLOB only and using SQL*Loader ...
I've never used dbms_lob.loadFromFile to do the loads ...
i 've opend a file on network and than used dbms_lob.read and
UTL_FILE.PUT_RAW to read and write data into target file.
actually ...my process is working fine with text files but not with PDF and IMAGES ...
and your doubt of ..."Is the data the proper length after reading it in?" ..m not getting wat r you asking ...but ... i think regarding data length ..there is no problem... except ... source PDF length is 90.4 kb ..and Target is 90.8 kb..
thats it...
So Request u to add some more help ......or should i provide some more details ?? -
Need help with SQL*Loader not working
Hi all,
I am trying to run SQL*Loader on Oracle 10g UNIX platform (Red Hat Linux) with below command:
sqlldr userid='ldm/password' control=issue.ctl bad=issue.bad discard=issue.txt direct=true log=issue.log
And get below errors:
SQL*Loader-128: unable to begin a session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Can anyone help me out with this problem that I am having with SQL*Loader? Thanks!
Ben PrusinskiHi Frank,
More progress, I exported the ORACLE_SID and tried again but now have new errors! We are trying to load an Excel CSV file into a new table on our Oracle 10g database. I created the new table in Oracle and loaded with SQL*Loader with below problems.
$ export ORACLE_SID=PROD
$ sqlldr 'ldm/password@PROD' control=prod.ctl log=issue.log bad=bad.log discard=discard.log
SQL*Loader: Release 10.2.0.1.0 - Production on Tue May 23 11:04:28 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
SQL*Loader: Release 10.2.0.1.0 - Production on Tue May 23 11:04:28 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: prod.ctl
Data File: prod.csv
Bad File: bad.log
Discard File: discard.log
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table TESTLD, loaded from every logical record.
Insert option in effect for this table: REPLACE
Column Name Position Len Term Encl Datatype
ISSUE_KEY FIRST * , CHARACTER
TIME_DIM_KEY NEXT * , CHARACTER
PRODUCT_CATEGORY_KEY NEXT * , CHARACTER
PRODUCT_KEY NEXT * , CHARACTER
SALES_CHANNEL_DIM_KEY NEXT * , CHARACTER
TIME_OF_DAY_DIM_KEY NEXT * , CHARACTER
ACCOUNT_DIM_KEY NEXT * , CHARACTER
ESN_KEY NEXT * , CHARACTER
DISCOUNT_DIM_KEY NEXT * , CHARACTER
INVOICE_NUMBER NEXT * , CHARACTER
ISSUE_QTY NEXT * , CHARACTER
GROSS_PRICE NEXT * , CHARACTER
DISCOUNT_AMT NEXT * , CHARACTER
NET_PRICE NEXT * , CHARACTER
COST NEXT * , CHARACTER
SALES_GEOGRAPHY_DIM_KEY NEXT * , CHARACTER
value used for ROWS parameter changed from 64 to 62
Record 1: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 2: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 3: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 4: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 5: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 6: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 7: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 8: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 9: Rejected - Error on table ISSUE_FACT_TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 10: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 11: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 12: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 13: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 14: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 15: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 16: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 17: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 18: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 19: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 20: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 21: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 22: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 23: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 24: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 39: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run.
Table TESTLD:
0 Rows successfully loaded.
51 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 255936 bytes(62 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 51
Total logical records rejected: 51
Total logical records discarded: 0
Run began on Tue May 23 11:04:28 2006
Run ended on Tue May 23 11:04:28 2006
Elapsed time was: 00:00:00.14
CPU time was: 00:00:00.01
[oracle@casanbdb11 sql_loader]$
Here is the control file:
LOAD DATA
INFILE issue_fact.csv
REPLACE
INTO TABLE TESTLD
FIELDS TERMINATED BY ','
ISSUE_KEY,
TIME_DIM_KEY,
PRODUCT_CATEGORY_KEY,
PRODUCT_KEY,
SALES_CHANNEL_DIM_KEY,
TIME_OF_DAY_DIM_KEY,
ACCOUNT_DIM_KEY,
ESN_KEY,
DISCOUNT_DIM_KEY,
INVOICE_NUMBER,
ISSUE_QTY,
GROSS_PRICE,
DISCOUNT_AMT,
NET_PRICE,
COST,
SALES_GEOGRAPHY_DIM_KEY
) -
Loading "fixed length" text files in UTF8 with SQL*Loader
Hi!
We have a lot of files, we load with SQL*Loader into our database. All Datafiles have fixed length columns, so we use POSITION(pos1, pos2) in the ctl-file. Till now the files were in WE8ISO8859P1 and everything was fine.
Now the source-system generating the files changes to unicode and the files are in UTF8!
The SQL-Loader docu says "The start and end arguments to the POSITION parameter are interpreted in bytes, even if character-length semantics are in use in a datafile....."
As I see this now, there is no way to say "column A starts at "CHARACTER Position pos1" and ends at "Character Position pos2".
I tested with
load data
CHARACTERSET AL32UTF8
LENGTH SEMANTICS CHARACTER
replace ...
in the .ctl file, but when the first character with more than one byte encoding (for example ü ) is in the file, all positions of that record are mixed up.
Is there a way to load these files in UTF8 without changing the file-definition to a column-seperator?
Thanks for any hints - charlyI have not tested this but you should be able to achieve what you want by using LENGTH SEMANTICS CHARACTER and by specifying field lengths (e.g. CHAR(5)) instead of only their positions. You could still use the POSITION(*+n) syntax to skip any separator columns that contain only spaces or tabs.
If the above does not work, an alternative would be to convert all UTF8 files to UTF16 before loading so that they become fixed-width.
-- Sergiusz
Maybe you are looking for
-
Can not run traceroute [SOLVED]
Hello arch world, I decided out of the blue to run a traceroute on my computer to see what I get but the only result I get when running one at any location is this (for yahoo.com: 1 launchmodem (192.168.1.254) 0.483 ms 0.413 ms 0.364 ms 2 * * *
-
Small office- trouble with different Acrobat versions
I work in a small office where we use Adobe Acrobat to create pdf docs on 5 computers. On these 5 desktops, we are using 4 different versions of Acrobat. When users convert excel documents to pdf, the results are different depending on which versio
-
Publish Pages Document using Apple's print service?
I've created a travelogue in pages. 60% text and 40% photos. Is there a way to use Apple's print service (a la iPhoto Books) to publish a few copies? Note: I posted a similar message in the iPhoto forum. Sorry if that breaks any rules of this discuss
-
Imageready CS2 had a fantastic export to swf option that as far as I can tell has completely dissapeared. Can we please get this back? It allowed you to take vector based layers in photoshop (with solid fills) right into flash and maintain full edita
-
Hi, We are having two links 4 mbps and 2 mbps, and we are running OSPF on these lines. How do we load balance on these lines ? As per bandwidth . That is 75 % of data should flow on 4 mb line and 25 5 should flow on 2 mb line. Is such balancing possi