SQL Loader CLOBs
I am moving data from one Oracle 10g table to another Oracle 10g table: transforming the data in Access (filtering records, merging fields, etc.), then via Access VBA creating CTL files and the necessary scripts, then loading the data via SQL*Loader. Getting the CLOBs to load is not a problem; getting them to load with their formatting intact is. The records in the control file look fine and the formatting is kept just fine in the CTL file, but when the data is loaded all the carriage returns, line feeds, etc. are removed.
Here is the header of my CTL file:
LOAD DATA
INFILE *
CONTINUEIF LAST != "|"
INTO TABLE table_name
APPEND
FIELDS TERMINATED BY '||' TRAILING NULLCOLS
(field1, field2, field3....etc)
begindata
field1||field2||field3||fieldx|
At first I thought taking out the TRAILING NULLCOLS would help, but I get the same result.
The end result I am looking for is keeping the data formatted exactly as it is in the source table.
Thanks
Mike
After doing some research it seems that the only way is creating a separate file with the CLOB contents; see http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_loading.htm#i1007627
Control File Contents:
LOAD DATA
INFILE 'sample.dat'
INTO TABLE person_table
FIELDS TERMINATED BY ','
(name CHAR(20),
ext_fname FILLER CHAR(40),
"RESUME" LOBFILE(ext_fname) TERMINATED BY EOF)
Datafile (sample.dat):
Johny Quest,jqresume.txt,
Speed Racer,'/private/sracer/srresume.txt',
Secondary Datafile (jqresume.txt):
Johny Quest,479 Johny Quest
500 Oracle Parkway
[email protected]
HTH
Enrique
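An alternative to per-record LOB files, when the CLOB data stays inline, is to drop CONTINUEIF (which joins the physical lines and discards their terminators - which is exactly why the carriage returns disappear) and declare an explicit record-terminator string on the INFILE clause instead, so newlines inside a field survive as data. A hedged sketch using the field names from the post; the datafile name, the '|\n' terminator, and the CHAR(1000000) maximum are assumptions to adapt:

```sql
LOAD DATA
-- "str" makes '|' followed by a linefeed the record terminator,
-- so CR/LF characters inside field values are kept as data
INFILE 'mydata.dat' "str '|\n'"
APPEND
INTO TABLE table_name
FIELDS TERMINATED BY '||' TRAILING NULLCOLS
(
field1,
field2,
-- give the CLOB-bound field a maximum well above the 255-byte default
fieldx CHAR(1000000)
)
```

With a record terminator expressed this way the data generally has to live in a separate datafile rather than after BEGINDATA; if the data may contain quote characters, the hex form (e.g. "str X'7C0A'") is the safer spelling.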
Similar Messages
-
SQL Loader, CLOB, delimited fields
Hello.
I have to load data from a csv file into a table using SQL*Loader; one of the fields is of CLOB type.
Here is how the ctl file initially looked:
UNRECOVERABLE
LOAD DATA
INFILE '.\csv_files\TSH_DGRA.csv'
BADFILE '.\bad_files\TSH_DGRA.bad'
DISCARDFILE '.\dsc_files\TSH_DGRA.dsc'
APPEND
INTO TABLE TSH_DGRA
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ID_OBJ_TSHD,
PR_ZOOM_TSHD,
PR_GRID_TSHD,
PR_ELMGR_TSHD CHAR(4000) OPTIONALLY ENCLOSED BY '<clob>' AND '</clob>',
PR_ALRMGR_TSHD CHAR(4000) OPTIONALLY ENCLOSED BY '<clob>' AND '</clob>'
)
The problems are the fields PR_ELMGR_TSHD and PR_ALRMGR_TSHD (CLOBs in table TSH_DGRA). As long as the data to be loaded into the CLOB fields is under 4000 characters, it works fine, but what should I do if I want to load data longer than 4000 characters?
I found on this link: [http://download.oracle.com/docs/cd/B14117_01/server.101/b10825/ldr_loading.htm#i1006803] one sentence which said that:
"SQL*Loader defaults to 255 bytes when moving CLOB data, but a value of up to 2 gigabytes can be specified. For a delimited field, if a length is specified, that length is used as a maximum. If no maximum is specified, it defaults to 255 bytes. For a CHAR field that is delimited and is also greater than 255 bytes, you must specify a maximum length. See CHAR for more information about the CHAR datatype."
So, my question is, how do I specify "up to 2 GB" as the text says? I cannot use the CHAR datatype because it is limited to 4000 characters, and I have to load about 60000 characters. I also cannot use the technique where the data for every CLOB field is in a separate file.
Just specify the maximum expected size:
PR_ELMGR_TSHD CHAR(100000) OPTIONALLY ENCLOSED BY '<clob>' AND '</clob>',
PR_ALRMGR_TSHD CHAR(1000000) OPTIONALLY ENCLOSED BY '<clob>' AND '</clob>'
The CHAR(1000000) will allow SQLLDR to handle up to 1000000 bytes of input text. -
Dear buddies,
Please guide me with this.
LOAD DATA
INFILE 'D:\load\dat\Enquiry_reply.dat'
BADFILE 'D:\load\bad\Enquiry_reply.bad'
DISCARDFILE 'D:\load\dat\discard\Enquiry_reply.dsc'
replace INTO TABLE OES_Enquiry FIELDS TERMINATED BY '['
(Reply CLOBFILE("D:\load\dat\oes_enquiry_reply.dat") TERMINATED BY '[')
What's wrong here?
I am getting this error:
SQL*Loader-350: Syntax error at line 6.
Expecting "," or ")", found "CLOBFILE".
(Reply CLOBFILE("D:\load\dat\enquiry_reply.dat") TERMINATED BY '['
Please advise me on how to go about it.
Thanks in advance.
Nith
This forum is meant for issues with database upgrades - pl re-post in the SQL*Loader forum (Export/Import/SQL Loader & External Tables).
HTH
Srini -
SQL LOADER: how to load CLOB column using stored function
Hi,
I am a newbie of sql loader. Everything seems to be fine until I hit a
road block - the CLOB column type. I want to load data into the clob
column using a stored function. I need to do some manipulation on the
data before it gets saved to that column. But I got this error when I
run the sql loader.
SQL*Loader-309: No SQL string allowed as part of "DATA" field
specification
DATA is my CLOB type column.
here is the content of the control file:
LOAD DATA
INFILE 'test.csv'
BADFILE 'test.bad'
DISCARDFILE 'test.dsc'
REPLACE
INTO TABLE test_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
codeid BOUNDFILLER,
reason BOUNDFILLER,
Checkstamp "to_date(:CHECKSTAMP, 'mm/dd/yyyy')",
"DATA" "GetContent(:codeid, :reason)"
)
All references suggest using a file to load data into a
CLOB column, but I want to use a function which generates
the content to be saved into the column.
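Since SQL*Loader-309 means SQL string expressions are simply not allowed on LOB fields, one common workaround is to load the raw pieces into a non-LOB staging table first and apply the function in a second pass. A hedged sketch; the staging table test_stage and its columns are hypothetical names, and GetContent is the function from the post:

```sql
-- Pass 1: load codeid, reason, checkstamp into TEST_STAGE with a control
-- file like the one posted, but with no SQL string on any LOB column.
-- Pass 2: build the CLOB set-wise, where the function IS allowed:
INSERT INTO test_table (codeid, reason, checkstamp, "DATA")
SELECT codeid,
       reason,
       checkstamp,
       GetContent(codeid, reason)  -- runs in SQL, not in the loader
FROM   test_stage;
```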
Any help is greatly appreciated.
Thanks,
Baldwin
MISICompany
*** Duplicate Post ... Please Ignore ***
-
Using clob in sql loader utility in oracle 9i
Hi,
I want to load data into a table with 2 CLOB columns using SQL*Loader, with the dat file and control file created programmatically.
The size of the CLOBs in the dat file can vary, and the CLOB columns are inline in the data file.
As per the 9i docs the maximum size of a CLOB is 4 GB.
How can I change the control file so that it can load up to 4 GB of data in the CLOB columns?
I am getting an error while calling sqlldr with the control file below:
SQL*Loader-350: Syntax error at line 13.
Expecting non-negative integer, found "-294967296".
,"NARRATIVE" char(4000000000)
^
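For what it's worth, the "-294967296" in that message is 4000000000 wrapping around a signed 32-bit integer (4000000000 - 2^32 = -294967296); SQL*Loader only accepts lengths up to 2 GB, so a value inside that range parses. A hedged sketch of just the affected lines - 2000000000 is an assumed ceiling, any value at least as large as the longest narrative and no more than 2 GB should do:

```sql
,"NARRATIVE" CHAR(2000000000)
,"NARRATIVE_UNEXPANDED" CHAR(2000000000)
```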
control file :
LOAD DATA
INFILE '' "str X'3C213E0A'"
APPEND INTO TABLE PSD_TERM
FIELDS TERMINATED BY '~^'
TRAILING NULLCOLS
(
"PSD_ID" CHAR(16) NULLIF ("PSD_ID"=BLANKS)
,"PSD_SERIAL_NUM" CHAR(4) NULLIF ("PSD_SERIAL_NUM"=BLANKS)
,"PSD_TERM_COD" CHAR(4) NULLIF ("PSD_TERM_COD"=BLANKS)
,"PSD_TERM_SER_NO" CHAR(4) NULLIF ("PSD_TERM_SER_NO"=BLANKS)
,"VERSION_DT" DATE "DD-MON-YYYY HH:MI:SS AM" NULLIF ("VERSION_DT"=BLANKS)
,"LATEST_VERSION" CHAR(1) NULLIF ("LATEST_VERSION"=BLANKS)
,"NARRATIVE" char(4000000000)
,"PARTITION_DT" DATE "DD-MON-YYYY HH:MI:SS AM" NULLIF ("PARTITION_DT"=BLANKS)
,"NARRATIVE_UNEXPANDED" char(4000000000)
)
Yes, you can do it. Create the sequence (suppose you call it "PK_SEQ_X") and then in your control file reference it as "PK_SEQ_X.NEXTVAL". For example, suppose you wanted to put it into a column named 'Y'; the entry in your control file will look like 'load data insert into table Z (Y "PK_SEQ_X.NEXTVAL", ....)'.
Note that the double quotes around the sequence name are required. -
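Spelled out as a minimal control file, the sequence reference described in that reply might look like the sketch below; table Z, column Y, and sequence PK_SEQ_X are the placeholder names from the reply, and other_col stands for the remaining columns:

```sql
load data
infile 'data.dat'
insert into table Z
fields terminated by ','
(
Y "PK_SEQ_X.NEXTVAL",  -- double quotes around the SQL string are required
other_col
)
```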
SQL loader Field in data file exceeds maximum length for CLOB column
Hi all
I'm loading data from a text file separated by TAB and I got the error below for some lines.
Even though the column is of CLOB data type - is there a limitation on the size of a CLOB data type?
The error is:
Record 74: Rejected - Error on table _TEMP, column DEST.
Field in data file exceeds maximum length
I'm using SQL Loader and the database is oracle 11g r2 on linux Red hat 5
Here are the lines causing the error from my data file, and my table description for the test:
create table TEMP
(
CODE VARCHAR2(100),
DESC VARCHAR2(500),
RATE FLOAT,
INCREASE VARCHAR2(20),
COUNTRY VARCHAR2(500),
DEST CLOB,
WEEK VARCHAR2(10),
IS_SAT VARCHAR2(50),
IS_SUN VARCHAR2(50)
)
CONTROL FILE:
LOAD DATA
INTO TABLE TEMP
APPEND
FIELDS TERMINATED BY X'9' TRAILING NULLCOLS
(
CODE,
DESC,
RATE,
INCREASE,
COUNTRY,
DEST,
WEEK,
IS_SAT,
IS_SUN
)
Data file:
BHS Mobile Bahamas - Mobile 0.1430 1 "242357, 242359, 242375, 242376, 242395, 242421, 242422, 242423, 242424, 242425, 242426, 242427, 242428, 242429, 242431, 242432, 242433, 242434, 242435, 242436, 242437, 242438, 242439, 242441, 242442, 242443, 242445, 242446, 242447, 242448, 242449, 242451, 242452, 242453, 242454, 242455, 242456, 242457, 242458, 242462, 242463, 242464, 242465, 242466, 242467, 242468, 24247, 242524, 242525, 242533, 242535, 242544, 242551, 242552, 242553, 242554, 242556, 242557, 242558, 242559, 242565, 242577, 242636, 242646, 242727"
BOL Mobile ENTEL Bolivia - Mobile Entel 0.0865 Increase 591 "67, 68, 71, 72, 73, 740, 7410, 7411, 7412, 7413, 7414, 7415, 7420, 7421, 7422, 7423, 7424, 7425, 7430, 7431, 7432, 7433, 7434, 7435, 7436, 7437, 7440, 7441, 7442, 7443, 7444, 7445, 7450, 7451, 7452, 7453, 7454, 7455, 746, 7470, 7471, 7472, 7475, 7476, 7477, 7480, 7481, 7482, 7483, 7484, 7485, 7486, 7490, 7491, 7492, 7493, 7494, 7495, 7496"
Thank you.
Hi,
Thank you for your help. I found the solution; here is what I did in my control file - I added
char(40000) OPTIONALLY ENCLOSED BY '"'.
LOAD DATA
INTO TABLE TEMP
APPEND
FIELDS TERMINATED BY X'9' TRAILING NULLCOLS
(
CODE,
DESC,
RATE,
INCREASE,
COUNTRY,
DEST char(40000) OPTIONALLY ENCLOSED BY '"',
WEEK,
IS_SAT,
IS_SUN
)
Thank you for your help. -
Using SQL*Loader to load a .csv file having multiple CLOBs
Oracle 8.1.5 on Solaris 2.6
I want to use SQL*Loader to load a .CSV file that has 4 inline CLOB columns. I shall attempt to give some information about the problem:
1. The CLOBs are not delimited at the field level and could themselves contain commas.
2. I cannot get the data file in any other format.
Can anybody help me out with this? While loading LOB in predetermined size fields, is there a limit on the size?
TIA.
-Murali
Thanks for the article link. The article states "...the loader can load only XMLType tables, not columns." Is this still the case with 10g R2? If so, what is the best way to work around this problem? I am migrating data from a Sybase table that contains a TEXT column (among others) to an Oracle table that contains an XMLType column. How do you recommend I accomplish this task?
- Ron -
Hi all,
I am trying to insert this data into the CLOB column. I am getting the error - Records rejected.
Please help me in this regard.
Control file:
load data
infile 'c:\test.csv'
truncate
into table test_clob
fields terminated by ","
(
a ,
b
)
Table Structure:
create table test_clob(a number, b clob);
Test.csv:
1, "Marsh Supermarkets here is the exclusive supermarket partner of Project 18 a program it developed with the Peyton Manning Childrens Hospital and Ball State University to fight childhood obesity.
Project 18 Approved shelf tags identify better-for-you kid-friendly products throughout Marsh stores. Canned fruit frozen entrees vegetables granola bars and salty snacks are among the categories analyzed.
In the granola bar category for instance products can have no more than 35% of calories from total fat and have no more than 10% of calories from saturated and trans fats combined.
Along with shelf-tags at Marsh the initiative includes: Project 18 MVPs local high school students honored for serving as role models for younger children in the community; community wellness events Project 18 walks; a school curriculum; and The Project 18 Mobile Van."
Warm Regards
Swami
Hi Swami,
You need to modify your control file like this:
load data
infile 'c:\test.csv'
TRUNCATE
into table test_clob
fields terminated by ","
(
a ,
b CHAR(4000)
)
Reason for specifying CHAR(4000) is as under (from the Oracle documentation):
To load internal LOBs (BLOBs, CLOBs, and NCLOBs) or XML columns from a primary datafile, you can use the following standard SQL*Loader formats:
* Predetermined size fields
* Delimited fields
* Length-value pair fields
For more go here: Link: [http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_loading.htm#i1008564]
Thanks,
Ankur -
Help to optimize file load (clobs involved) via pl/sql
Hello.
I have written a procedure that loads a table from a file; this table has five CLOB columns and the file has 7024 rows (which means 5*7024 CLOBs are loaded into the table). Every record in the file ends in "\r\n\r\n" (I mean chr(13)||chr(10)||chr(13)||chr(10)).
The complete load of the file has taken around 5 hours; I wonder if the procedure can be optimized, as I am new to the dbms_lob package.
The procedure is called carga_t_pets_respuestas_emisor and belongs to a package called carga_tablas.
I load the whole file into a temporary CLOB and then I get every line into another temporary CLOB (I look for the end-of-record separator (chr(13)||chr(10)||chr(13)||chr(10)) for each line).
Here I post the whole body of the package (the code is long because the table has 25 columns, but the treatment is the same for every column):
CREATE OR REPLACE PACKAGE BODY MAPV4_MIGRATION.carga_tablas
AS
/*
NAME: CARGA_TABLAS
PURPOSE:
REVISIONS:
Ver Date Author Description
1.0 07/12/2009 1. Created this package.
*/
PROCEDURE tratar_linea (
p_lin IN CLOB,
contlin IN INTEGER,
p_log IN UTL_FILE.file_type,
msg IN OUT VARCHAR2
)
IS
v_id_peticion t_peticiones_respuestas_emisor.id_peticion%TYPE;
v_id_transaccion t_peticiones_respuestas_emisor.id_transaccion%TYPE;
v_fecha_recepcion VARCHAR2 (50);
v_cod_certificado t_peticiones_respuestas_emisor.cod_certificado%TYPE;
v_cif_requirente t_peticiones_respuestas_emisor.cif_requirente%TYPE;
v_num_elementos t_peticiones_respuestas_emisor.num_elementos%TYPE;
v_fichero_peticion t_peticiones_respuestas_emisor.fichero_peticion%TYPE;
v_ter t_peticiones_respuestas_emisor.ter%TYPE;
v_fecha_ini_proceso VARCHAR2 (50);
v_fecha_fin_proceso VARCHAR2 (50);
v_fichero_respuesta t_peticiones_respuestas_emisor.fichero_respuesta%TYPE;
v_cn t_peticiones_respuestas_emisor.cn%TYPE;
v_contador_enviados t_peticiones_respuestas_emisor.contador_enviados%TYPE;
v_signature t_peticiones_respuestas_emisor.signature%TYPE;
v_caducidad VARCHAR2 (50);
v_descompuesta t_peticiones_respuestas_emisor.descompuesta%TYPE;
v_estado t_peticiones_respuestas_emisor.estado%TYPE;
v_error t_peticiones_respuestas_emisor.error%TYPE;
v_cod_municipio_volante t_peticiones_respuestas_emisor.cod_municipio_volante%TYPE;
v_peticion_solicitud_respuesta t_peticiones_respuestas_emisor.peticion_solicitud_respuesta%TYPE;
v_id_fabricante_ve t_peticiones_respuestas_emisor.id_fabricante_ve%TYPE;
v_fecha_respuesta VARCHAR2 (50);
v_codigo_error t_peticiones_respuestas_emisor.codigo_error%TYPE;
v_serial t_peticiones_respuestas_emisor.serial%TYPE;
v_inicio_col INTEGER;
v_fin_col INTEGER;
v_timestamp_format VARCHAR2 (50)
:= 'yyyy-mm-dd hh24:mi:ss';
BEGIN
UTL_FILE.put_line (p_log, 'INICIO tratar_linea');
-- Column ID_PETICION
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', 1, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', 1, 1) - 1;
UTL_FILE.put_line (p_log,
'v_inicio_col: '
|| v_inicio_col
|| '; v_fin_col: '
|| v_fin_col
);
UTL_FILE.fflush (p_log);
v_id_peticion :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_id_peticion: ' || v_id_peticion);
UTL_FILE.fflush (p_log);
-- Column ID_TRANSACCION
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_id_transaccion :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_id_transaccion: ' || v_id_transaccion);
UTL_FILE.fflush (p_log);
-- Column FECHA_RECEPCION
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_fecha_recepcion :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_fecha_recepcion: ' || v_fecha_recepcion);
UTL_FILE.fflush (p_log);
-- Column COD_CERTIFICADO
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_cod_certificado :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_cod_certificado: ' || v_cod_certificado);
UTL_FILE.fflush (p_log);
-- Column CIF_REQUIRENTE
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_cif_requirente :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_cif_requirente: ' || v_cif_requirente);
UTL_FILE.fflush (p_log);
-- Column NUM_ELEMENTOS
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_num_elementos :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_num_elementos: ' || v_num_elementos);
UTL_FILE.fflush (p_log);
-- Column FICHERO_PETICION
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_fichero_peticion :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'Longitud v_fichero_peticion: '
|| DBMS_LOB.getlength (v_fichero_peticion)
);
UTL_FILE.fflush (p_log);
--UTL_FILE.put_line (p_log, 'v_fichero_peticion: ' || v_fichero_peticion);
-- Column TER
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_ter :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_ter: ' || v_ter);
UTL_FILE.fflush (p_log);
-- Column FECHA_INI_PROCESO
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_fecha_ini_proceso :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'v_fecha_ini_proceso: ' || v_fecha_ini_proceso);
UTL_FILE.fflush (p_log);
-- Column FECHA_FIN_PROCESO
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_fecha_fin_proceso :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'v_fecha_fin_proceso: ' || v_fecha_fin_proceso);
UTL_FILE.fflush (p_log);
-- Column FICHERO_RESPUESTA
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_fichero_respuesta :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'Longitud v_fichero_respuesta: '
|| DBMS_LOB.getlength (v_fichero_respuesta)
);
UTL_FILE.fflush (p_log);
--UTL_FILE.put_line (p_log,
-- 'v_fichero_respuesta: ' || v_fichero_respuesta);
-- Column CN
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_cn :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_cn: ' || v_cn);
UTL_FILE.fflush (p_log);
-- Column CONTADOR_ENVIADOS
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_contador_enviados :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'v_CONTADOR_ENVIADOS: ' || v_contador_enviados);
UTL_FILE.fflush (p_log);
-- Column SIGNATURE
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_signature :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'Longitud v_signature: '
|| DBMS_LOB.getlength (v_signature)
);
UTL_FILE.fflush (p_log);
--UTL_FILE.put_line (p_log, 'v_SIGNATURE: ' || v_signature);
-- Column CADUCIDAD
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_caducidad :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_CADUCIDAD: ' || v_caducidad);
UTL_FILE.fflush (p_log);
-- Column DESCOMPUESTA
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_descompuesta :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_DESCOMPUESTA: ' || v_descompuesta);
UTL_FILE.fflush (p_log);
-- Column ESTADO
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_estado :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_ESTADO: ' || v_estado);
UTL_FILE.fflush (p_log);
-- Column ERROR
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_error :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'Longitud v_ERROR: ' || DBMS_LOB.getlength (v_error)
);
UTL_FILE.fflush (p_log);
--UTL_FILE.put_line (p_log, 'v_ERROR: ' || v_error);
-- Column COD_MUNICIPIO_VOLANTE
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_cod_municipio_volante :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'v_COD_MUNICIPIO_VOLANTE: '
|| v_cod_municipio_volante
);
UTL_FILE.fflush (p_log);
-- Column PETICION_SOLICITUD_RESPUESTA
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_peticion_solicitud_respuesta :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log,
'Longitud v_PETICION_SOLICITUD_RESPUESTA: '
|| DBMS_LOB.getlength (v_peticion_solicitud_respuesta)
);
UTL_FILE.fflush (p_log);
--UTL_FILE.put_line (p_log,
-- 'v_PETICION_SOLICITUD_RESPUESTA: '
-- || v_peticion_solicitud_respuesta
-- Column ID_FABRICANTE_VE
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_id_fabricante_ve :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_ID_FABRICANTE_VE: ' || v_id_fabricante_ve);
UTL_FILE.fflush (p_log);
-- Column FECHA_RESPUESTA
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_fecha_respuesta :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_FECHA_RESPUESTA: ' || v_fecha_respuesta);
UTL_FILE.fflush (p_log);
-- Column CODIGO_ERROR
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_codigo_error :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_CODIGO_ERROR: ' || v_codigo_error);
-- Column SERIAL
UTL_FILE.fflush (p_log);
v_inicio_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 1) + 1;
--The last column may end with '"' rather than '",'
--v_fin_col := DBMS_LOB.INSTR (p_lin, '",', v_fin_col + 2, 1) - 1;
v_fin_col := DBMS_LOB.INSTR (p_lin, '"', v_fin_col + 2, 2) - 1;
--DBMS_OUTPUT.put_line ( 'v_inicio_col: '
-- || v_inicio_col
-- || '; v_fin_col: '
-- || v_fin_col
v_serial :=
DBMS_LOB.SUBSTR (p_lin, v_fin_col - v_inicio_col + 1, v_inicio_col);
UTL_FILE.put_line (p_log, 'v_SERIAL: ' || v_serial);
UTL_FILE.fflush (p_log);
-- Insert into the table
INSERT INTO t_histo_ptcions_rspstas_emisor
(id_peticion, id_transaccion,
fecha_recepcion,
cod_certificado, cif_requirente, num_elementos,
fichero_peticion, ter,
fecha_ini_proceso,
fecha_fin_proceso,
fichero_respuesta, cn, contador_enviados,
signature,
caducidad,
descompuesta, estado, error,
cod_municipio_volante, peticion_solicitud_respuesta,
id_fabricante_ve,
fecha_respuesta,
codigo_error, serial
)
VALUES (v_id_peticion, v_id_transaccion,
TO_TIMESTAMP (v_fecha_recepcion, v_timestamp_format),
v_cod_certificado, v_cif_requirente, v_num_elementos,
v_fichero_peticion, v_ter,
TO_TIMESTAMP (v_fecha_ini_proceso, v_timestamp_format),
TO_TIMESTAMP (v_fecha_fin_proceso, v_timestamp_format),
v_fichero_respuesta, v_cn, v_contador_enviados,
v_signature,
TO_TIMESTAMP (v_caducidad, v_timestamp_format),
v_descompuesta, v_estado, v_error,
v_cod_municipio_volante, v_peticion_solicitud_respuesta,
v_id_fabricante_ve,
TO_TIMESTAMP (v_fecha_respuesta, v_timestamp_format),
v_codigo_error, v_serial
);
-- Commit the transaction
--COMMIT;
UTL_FILE.put_line (p_log, 'FIN tratar_linea');
EXCEPTION
WHEN OTHERS
THEN
msg := SQLERRM;
--ROLLBACK;
RAISE;
END;
PROCEDURE carga_t_pets_respuestas_emisor (
p_dir IN VARCHAR2,
p_filename IN VARCHAR2,
p_os IN VARCHAR2,
msg OUT VARCHAR2
)
IS
no_fin_linea EXCEPTION;
v_srcfile BFILE;
v_tmpclob CLOB;
v_warning INTEGER;
v_dest_offset INTEGER := 1;
v_src_offset INTEGER := 1;
v_lang INTEGER := 0;
posfinlinea NUMBER;
posiniciolinea NUMBER;
cad VARCHAR2 (30000);
v_lin CLOB;
v_tamfich NUMBER;
contlin INTEGER := 0;
v_log UTL_FILE.file_type;
BEGIN
msg := NULL;
-- open the log
v_log :=
UTL_FILE.fopen (p_dir, 'carga_t_pets_respuestas_emisor.log', 'w');
UTL_FILE.put_line (v_log, 'INICIO carga_t_pets_respuestas_emisor');
UTL_FILE.fflush (v_log);
v_srcfile := BFILENAME (p_dir, p_filename);
DBMS_LOB.createtemporary (v_tmpclob, TRUE, DBMS_LOB.CALL);
DBMS_LOB.OPEN (v_srcfile);
DBMS_LOB.loadclobfromfile (dest_lob => v_tmpclob,
src_bfile => v_srcfile,
amount => DBMS_LOB.lobmaxsize,
dest_offset => v_dest_offset,
src_offset => v_src_offset,
bfile_csid => 0,
lang_context => v_lang,
warning => v_warning
);
-- Once the CLOB is loaded, the BFILE can be closed
DBMS_LOB.CLOSE (v_srcfile);
v_tamfich := DBMS_LOB.getlength (v_tmpclob);
--DBMS_OUTPUT.put_line ('v_tamfich: ' || v_tamfich);
UTL_FILE.put_line (v_log, 'v_tamfich: ' || v_tamfich);
UTL_FILE.fflush (v_log);
posfinlinea := 0;
posiniciolinea := posfinlinea + 1;
-- Discard the final end-of-line (it must exist)
IF (UPPER (p_os) = 'WINDOWS')
THEN
v_tamfich := v_tamfich - 4;
ELSE
v_tamfich := v_tamfich - 2;
END IF;
contlin := 1;
WHILE (v_tamfich + 1 - posfinlinea > 0)
LOOP
--contlin := contlin + 1;
IF (UPPER (p_os) = 'WINDOWS')
THEN
posfinlinea :=
DBMS_LOB.INSTR (v_tmpclob,
CHR (13) || CHR (10) || CHR (13) || CHR (10),
posiniciolinea
);
ELSE
posfinlinea :=
DBMS_LOB.INSTR (v_tmpclob, CHR (13) || CHR (13),
posiniciolinea);
END IF;
IF (posfinlinea = 0)
THEN
RAISE no_fin_linea;
END IF;
--DBMS_OUTPUT.put_line ('posfinlinea: ' || posfinlinea);
UTL_FILE.put_line (v_log, 'posfinlinea: ' || posfinlinea);
UTL_FILE.fflush (v_log);
IF (DBMS_LOB.getlength (v_lin) != 0)
THEN
DBMS_LOB.freetemporary (v_lin);
UTL_FILE.put_line (v_log,
'Se ha reinicializado el temporary clob v_lin'
);
UTL_FILE.fflush (v_log);
END IF;
DBMS_LOB.createtemporary (v_lin, TRUE, DBMS_LOB.CALL);
DBMS_LOB.COPY (dest_lob => v_lin,
src_lob => v_tmpclob,
amount => posfinlinea,
dest_offset => 1,
src_offset => posiniciolinea
);
-- Process the line
--DBMS_OUTPUT.put_line
UTL_FILE.put_line
(v_log,
UTL_FILE.fflush (v_log);
tratar_linea (v_lin, contlin, v_log, msg);
posiniciolinea := posfinlinea + 1;
contlin := contlin + 1;
--DBMS_OUTPUT.put_line ('posiniciolinea: ' || posiniciolinea);
UTL_FILE.put_line (v_log, 'posiniciolinea: ' || posiniciolinea);
UTL_FILE.fflush (v_log);
END LOOP;
-- Commit the transaction
COMMIT;
--Close the file
--DBMS_LOB.CLOSE (v_srcfile);
DBMS_LOB.freetemporary (v_tmpclob);
UTL_FILE.put_line (v_log, 'FIN carga_t_pets_respuestas_emisor');
UTL_FILE.fclose (v_log);
EXCEPTION
WHEN no_fin_linea
THEN
ROLLBACK;
msg := 'Fichero mal formateado, no se encuentra el fin de linea';
IF (DBMS_LOB.ISOPEN (v_srcfile) = 1)
THEN
DBMS_LOB.fileclose (v_srcfile);
END IF;
IF UTL_FILE.is_open (v_log)
THEN
UTL_FILE.fclose (v_log);
END IF;
WHEN OTHERS
THEN
ROLLBACK;
msg := SQLERRM;
IF (DBMS_LOB.ISOPEN (v_srcfile) = 1)
THEN
DBMS_LOB.fileclose (v_srcfile);
END IF;
IF UTL_FILE.is_open (v_log)
THEN
UTL_FILE.fclose (v_log);
END IF;
END carga_t_pets_respuestas_emisor;
END carga_tablas;
/
Thanks in advance.
Thank you again for answering.
I am reading from the file in the procedure carga_t_pets_respuestas_emisor that I posted above (the procedure is able to load the whole file into the table, but takes very long).
What I am doing is loading the whole file into a BFILE and then loading it into a CLOB variable using loadclobfromfile (this part is not taking long; I can see that early in the log):
v_srcfile := BFILENAME (p_dir, p_filename);
DBMS_LOB.createtemporary (v_tmpclob, TRUE, DBMS_LOB.CALL);
DBMS_LOB.OPEN (v_srcfile);
DBMS_LOB.loadclobfromfile (dest_lob => v_tmpclob,
src_bfile => v_srcfile,
amount => DBMS_LOB.lobmaxsize,
dest_offset => v_dest_offset,
src_offset => v_src_offset,
bfile_csid => 0,
lang_context => v_lang,
warning => v_warning
);
Then I read in a loop all the records (I cannot call them lines because the CLOB columns can contain line feeds) from the file and load them into another CLOB variable:
WHILE (v_tamfich + 1 - posfinlinea > 0)
LOOP
--contlin := contlin + 1;
IF (UPPER (p_os) = 'WINDOWS')
THEN
posfinlinea :=
DBMS_LOB.INSTR (v_tmpclob,
CHR (13) || CHR (10) || CHR (13) || CHR (10),
posiniciolinea
);
ELSE
posfinlinea :=
DBMS_LOB.INSTR (v_tmpclob, CHR (13) || CHR (13),
posiniciolinea);
END IF;
IF (posfinlinea = 0)
THEN
RAISE no_fin_linea;
END IF;
--DBMS_OUTPUT.put_line ('posfinlinea: ' || posfinlinea);
UTL_FILE.put_line (v_log, 'posfinlinea: ' || posfinlinea);
UTL_FILE.fflush (v_log);
IF (DBMS_LOB.getlength (v_lin) != 0)
THEN
DBMS_LOB.freetemporary (v_lin);
UTL_FILE.put_line (v_log,
'Se ha reinicializado el temporary clob v_lin'
);
UTL_FILE.fflush (v_log);
END IF;
DBMS_LOB.createtemporary (v_lin, TRUE, DBMS_LOB.CALL);
DBMS_LOB.COPY (dest_lob => v_lin,
src_lob => v_tmpclob,
amount => posfinlinea,
dest_offset => 1,
src_offset => posiniciolinea
);
-- Process the line
--DBMS_OUTPUT.put_line
UTL_FILE.put_line
(v_log,
UTL_FILE.fflush (v_log);
tratar_linea (v_lin, contlin, v_log, msg);
posiniciolinea := posfinlinea + 1;
contlin := contlin + 1;
--DBMS_OUTPUT.put_line ('posiniciolinea: ' || posiniciolinea);
UTL_FILE.put_line (v_log, 'posiniciolinea: ' || posiniciolinea);
UTL_FILE.fflush (v_log);
END LOOP;
(I have also loaded the whole file using SQL*Loader, but I have been asked to try to do it with PL/SQL.)
Thanks again and regards. -
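The per-record DBMS_LOB.INSTR and DBMS_LOB.COPY calls in the loop above rescan the CLOB from an ever-growing offset and free/recreate a temporary LOB on every record, which is the usual reason such loops slow down as the file grows. A rough sketch of a buffered alternative, reusing v_tmpclob from the post; process_record is a stand-in for tratar_linea, and the sketch assumes each record fits comfortably in a 32K VARCHAR2:

```sql
DECLARE
  v_tmpclob CLOB;   -- filled with DBMS_LOB.loadclobfromfile as in the post
  c_sep     CONSTANT VARCHAR2(4) := CHR(13)||CHR(10)||CHR(13)||CHR(10);
  v_buf     VARCHAR2(32767);
  v_carry   VARCHAR2(32767);
  v_pos     PLS_INTEGER := 1;
  v_len     PLS_INTEGER;
  v_cut     PLS_INTEGER;
  PROCEDURE process_record(p_rec VARCHAR2) IS
  BEGIN
    NULL;  -- stand-in for tratar_linea; assumes a record is < ~16K
  END;
BEGIN
  v_len := DBMS_LOB.getlength(v_tmpclob);
  WHILE v_pos <= v_len LOOP
    -- one DBMS_LOB.substr per 16000 chars instead of one INSTR/COPY per record
    v_buf := v_carry || DBMS_LOB.substr(v_tmpclob, 16000, v_pos);
    v_pos := v_pos + 16000;
    LOOP
      v_cut := INSTR(v_buf, c_sep);
      EXIT WHEN v_cut = 0;
      process_record(SUBSTR(v_buf, 1, v_cut - 1));
      v_buf := SUBSTR(v_buf, v_cut + LENGTH(c_sep));
    END LOOP;
    v_carry := v_buf;  -- unfinished record carries into the next chunk
  END LOOP;
  IF v_carry IS NOT NULL THEN
    process_record(v_carry);  -- final record with no trailing separator
  END IF;
END;
/
```

This keeps the number of LOB API calls proportional to the file size rather than to the square of the record count.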
Create sql loader data file dynamically
Hi,
I want a sample program/approach which is used to create a sql loader data file.
The program will read a table name as input and will use
a select statement with the column list derived from user_tab_columns in the data dictionary,
assuming multiple CLOB columns in the column list.
Thanks
Manoj
I'm writing CLOB and other columns to a SQL*Loader dat file.
Below is sample code for writing the CLOB column; it is giving a file write error.
How can I write multiple CLOBs to the dat file so that the control file will handle it correctly?
offset NUMBER := 1;
chunk VARCHAR2(32000);
chunk_size NUMBER := 32000;
WHILE( offset < dbms_lob.getlength(l_rec_type.narrative) )
LOOP
chunk := dbms_lob.substr(l_rec_type.narrative, chunk_size, offset );
utl_file.put( l_file_handle, chunk );
utl_file.fflush(l_file_handle);
offset := offset + chunk_size;
END LOOP;
utl_file.new_line(l_file_handle); -
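A likely cause of the file write error above: in text mode, UTL_FILE caps each line at the handle's max_linesize (at most 32767, default 1024), so a CLOB longer than that written with put() on a single record line overflows the limit no matter how small the chunks are. One common workaround, sketched here with an assumed directory name and a local CLOB standing in for l_rec_type.narrative, is to open the file in binary mode and stream with put_raw, which has no line-length concept:

```sql
DECLARE
  l_file_handle UTL_FILE.file_type;
  l_narrative   CLOB;          -- stands in for l_rec_type.narrative
  offset        NUMBER := 1;
  chunk_size    NUMBER := 8000;
  chunk         VARCHAR2(32000);
BEGIN
  -- 'wb' (binary) mode: PUT_RAW is not subject to max_linesize,
  -- unlike text-mode PUT.
  l_file_handle := UTL_FILE.fopen('MY_DIR', 'table.dat', 'wb', 32767);
  WHILE offset <= DBMS_LOB.getlength(l_narrative) LOOP
    chunk := DBMS_LOB.substr(l_narrative, chunk_size, offset);
    UTL_FILE.put_raw(l_file_handle, UTL_RAW.cast_to_raw(chunk), TRUE);
    offset := offset + chunk_size;
  END LOOP;
  -- write the record terminator expected by the control file
  UTL_FILE.put_raw(l_file_handle, UTL_RAW.cast_to_raw(CHR(10)), TRUE);
  UTL_FILE.fclose(l_file_handle);
END;
/
```

Note that cast_to_raw works on bytes, so multi-byte character sets may need extra care when choosing the chunk size.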
How to call sql loader ctrl file with in the pl/sql procedure
Hi Friends,
I am doing a project related to transferring data using queues. In the queue, I will get tab-delimited data in the form of a CLOB variable/message. I want to store that data in an Oracle table.
When updating data into the table ,
1. Don't want to write that data into a file (want to access it directly after dequeueing from the specific queue).
2. As the data is in tab delimited form, I want to use sql loader concept.
How do I call the sql loader ctrl file with in my pl/sql procedure. When I searched , most of the forums recommending external procedure or Java program.
Please guide me on this issue. My preference is PL/SQL, but I don't know about external procedures. If there is no other way, I will try Java.
I am using oracle 9.2.0.8.0.
Thanks in advance,
Vimal..
Neither SQL*Loader nor external tables are designed to read data from a CLOB stored in the database. They both work on files stored on the file system. If you don't want the data to be written to a file, you're going to have to roll your own parsing code. This is certainly possible. But it is going to be far less efficient than either SQL*Loader or external tables. And it's likely to involve quite a bit more code.
The simplest possible thing that could work would be to use something like Tom Kyte's string tokenization package to read a line from the CLOB, break it into the component pieces, and then store the different tokens in a meaningful collection (i.e. an object type or a record type that corresponds to the table definition). Of course, you'll need to handle things like converting strings to numbers or dates, rejecting rows, writing log files, etc.
Justin -
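The hand-rolled parsing Justin describes could be sketched roughly like this; the CHR(10) record terminator, the helper names, and the omitted INSERT logic are all assumptions for illustration:

```sql
DECLARE
  v_clob   CLOB;              -- the dequeued tab-delimited message
  v_line   VARCHAR2(32767);
  v_pos    PLS_INTEGER := 1;
  v_nl     PLS_INTEGER;

  PROCEDURE handle_line(p_line VARCHAR2) IS
    v_rest  VARCHAR2(32767) := p_line;
    v_tab   PLS_INTEGER;
    v_field VARCHAR2(4000);
  BEGIN
    LOOP
      v_tab := INSTR(v_rest, CHR(9));
      v_field := CASE WHEN v_tab = 0 THEN v_rest
                      ELSE SUBSTR(v_rest, 1, v_tab - 1) END;
      -- here: convert v_field (TO_NUMBER/TO_DATE) and collect it
      EXIT WHEN v_tab = 0;
      v_rest := SUBSTR(v_rest, v_tab + 1);
    END LOOP;
    -- here: INSERT the collected fields into the target table
  END;
BEGIN
  LOOP
    v_nl := DBMS_LOB.INSTR(v_clob, CHR(10), v_pos);
    EXIT WHEN v_nl = 0;
    v_line := DBMS_LOB.substr(v_clob, v_nl - v_pos, v_pos);
    handle_line(v_line);
    v_pos := v_nl + 1;
  END LOOP;
END;
/
```

As Justin notes, error handling (rejected rows, logging) would still need to be added on top of a skeleton like this.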
Using sqlldr to load CLOB data from DB2
I am stuck trying to resolve this problem. I am migrating data from DB2 to Oracle. I used DB2 export to extract the data, specifying the lobsinfile clause. This created all the CLOB data in one file. So a typical record has a column with a reference to the CLOB data, e.g. "OUTFILE.001.lob.0.2880/", where OUTFILE.001.lob is the name specified in the export command, 0 is the starting position in the file, and 2880 is the length of the first CLOB.
When I try to load this data using sqlldr I'm getting a file not found.
The control file looks something like this:
clob_1 FILLER char(100),
"DETAILS" LOBFILE(clob_1) TERMINATED BY EOF,
I'm using Oracle 11gR2 and DB2 9.7.5
Your help is appreciated.
OK..here are additional details. Some names have changed but the idea is the same. Also, sqlldr is executing in the same directory as the data files and the control file.
Primary data file is VOIPCACHE.dat. Secondary data file (file with LOB data) is VOIPCACHE.001.lob.
Control file:
load data
infile 'VOIPCACHE.dat'
badfile 'VOIPCACHE.bad'
discardfile 'VOIPCACHE.dsc'
replace into table VOIPCACHE
fields terminated by ',' optionally enclosed by '"' TRAILING NULLCOLS
(KEY1 "rtrim(:KEY1)",
FIELD8,
clob_1 FILLER char (100),
"DATA" LOBFILE(clob_1) TERMINATED BY EOF)
Snippet from Log file
FIELD7 NEXT * , O(") CHARACTER
FIELD8 NEXT * , O(") CHARACTER
CLOB_1 NEXT 100 , O(") CHARACTER
(FILLER FIELD)
"DATA" DERIVED * EOF CHARACTER
Dynamic LOBFILE. Filename in field CLOB_1
SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.0.0/' for field "DATA" table VOIPCACHE
SQL*Loader-553: file not found
SQL*Loader-509: System error: No such file or directory
SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.0.47/' for field "DATA" table VOIPCACHE
SQL*Loader-553: file not found
SQL*Loader-509: System error: No such file or directory
SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.47.47/' for field "DATA" table VOIPCACHE
SQL*Loader-553: file not found
SQL*Loader-509: System error: No such file or directory
SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.94.58/' for field "DATA" table VOIPCACHE
SQL*Loader-553: file not found
SQL*Loader-509: System error: No such file or directory
SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.152.58/' for field "DATA" table VOIPCACHE
SQL*Loader-553: file not found
SQL*Loader-509: System error: No such file or directory
SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.210.206/' for field "DATA" table VOIPCACHE
This is repeated for each record
sqlldr command
sqlldr userid=${SCHEMA}/${PASSWD}@$ORACLE_SID control=${CTLDIR}/${tbl}.ctl log=${LOGDIR}/${tbl}.log direct=true errors=50
I don't think the variables are important here.
-EC -
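The trailing strings such as 'VOIPCACHE.001.lob.0.47/' are DB2 Lob Location Specifiers (file.offset.length); SQL*Loader's LOBFILE clause treats the whole token as a file name, hence the "file not found" errors. One workaround, a sketch only and not tested against a real DB2 export, is to load the primary file without the LOB column and then resolve each LLS in PL/SQL with DBMS_LOB.loadclobfromfile, which accepts a source offset and amount. The directory name and the literal LLS below are assumptions:

```sql
DECLARE
  p_lls   VARCHAR2(200) := 'VOIPCACHE.001.lob.0.47/';
  v_body  VARCHAR2(200) := RTRIM(p_lls, '/');
  v_len   NUMBER := TO_NUMBER(SUBSTR(v_body, INSTR(v_body, '.', -1) + 1));
  v_rest  VARCHAR2(200) := SUBSTR(v_body, 1, INSTR(v_body, '.', -1) - 1);
  v_off   NUMBER := TO_NUMBER(SUBSTR(v_rest, INSTR(v_rest, '.', -1) + 1));
  v_file  VARCHAR2(200) := SUBSTR(v_rest, 1, INSTR(v_rest, '.', -1) - 1);
  v_bfile BFILE;
  v_clob  CLOB;
  v_doff  INTEGER := 1;
  v_soff  INTEGER;
  v_lang  NUMBER := DBMS_LOB.default_lang_ctx;
  v_warn  INTEGER;
BEGIN
  v_soff := v_off + 1;             -- LLS offsets are zero-based
  v_bfile := BFILENAME('LOB_DIR', v_file);
  DBMS_LOB.createtemporary(v_clob, TRUE);
  DBMS_LOB.fileopen(v_bfile, DBMS_LOB.file_readonly);
  DBMS_LOB.loadclobfromfile(v_clob, v_bfile, v_len,
                            v_doff, v_soff, 0, v_lang, v_warn);
  DBMS_LOB.fileclose(v_bfile);
  -- here: UPDATE voipcache SET data = v_clob WHERE key1 = ...;
END;
/
```

In a real run the LLS would come from a staging column loaded by sqlldr rather than from a literal, and the loop would commit in batches.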
SQL*Loader or external table for load a MSG (email) file
Hi there!
I'm looking for a way to load an email into an Oracle DB.
I mean, not the whole email body in one column, but to "parse" it into a multi-column/table layout.
Is it possible to do with a sql*loader script or an external table?
I think it is not possible, and that I must switch to XML DB.
Any idea?
Thanks,
Antonio
Hello,
Why don't you just load the entire MSG (email) as a CLOB into one email_body column, or whatever column name you want to use?
To load data up to 32K you can use VARCHAR2(32656), but it is not a good idea to load a CLOB in that manner because it is very inconsistent: the length can vary, resulting in "string literal too long" errors. So you have two choices now: use either a procedure or an anonymous block to load the CLOB data.
First Method -- I loaded alert.log successfully and you can imagine how big this file can be (5MB in my test case)
CREATE OR REPLACE DIRECTORY DIR AS '/mydirectory/logs';
DECLARE
clob_data CLOB;
clob_file BFILE;
BEGIN
INSERT INTO t1clob
VALUES (EMPTY_CLOB ())
RETURNING clob_text INTO clob_data;
clob_file := BFILENAME ('DIR', 'wwalert_dss.log');
DBMS_LOB.fileopen (clob_file);
DBMS_LOB.loadfromfile (clob_data,
clob_file,
DBMS_LOB.getlength (clob_file));
DBMS_LOB.fileclose (clob_file);
COMMIT;
END;
Second Method: Use of sqlldr
Example of controlfile
LOAD DATA
INFILE alert.log "STR '|\n'"
REPLACE INTO table t1clob
(
clob_text char(30000000)
)
Hope this helps -
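The "STR '|\n'" clause in that control file defines the physical record terminator, so the whole multi-line log is treated as one record only if the data file ends each record with a literal | followed by a newline. A minimal illustration of what the prepared data file would have to look like (contents invented):

```
Fri Jan 01 00:00:01 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
|
```

Without that trailing terminator the last (or only) record is silently incomplete and the load fails or loses data.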
If I generate loader / Insert script from Raptor, it's not working for Clob columns.
I am getting error:
SQL*Loader-510: Physical record in data file (clob_table.ldr) is longer than the maximum(1048576)
What's the solution?
Regards,
Hi,
Has the file been somehow changed by copying it between Windows and Unix? Or was a file transfer done as binary vs. ASCII? That is the most common cause of your problem: the end-of-line carriage return characters have been changed so they are no longer \r\n. Could this have happened? Can you open the file in a good editor, or do an od command in Unix, to see what is actually present?
Regards,
Harry
http://dbaharrison.blogspot.co.uk/ -
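Following Harry's suggestion, line endings can also be checked from inside the database by dumping the raw byte values of the loaded column; an illustrative query against the t1clob table from the earlier post (byte 13 = CR, 10 = LF):

```sql
-- Shows the code of each of the first 40 characters of the CLOB,
-- so stripped or translated CR/LF characters are immediately visible.
SELECT DUMP(DBMS_LOB.substr(clob_text, 40, 1)) AS first_bytes
FROM   t1clob
WHERE  ROWNUM = 1;
```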
I have used the OMWB "generate SQL Loader script" option, and received the SQL*Loader error.
The previous attempt to use OMWB online loading has generated garbage data. The picture was not matching with the person id.
Table in Sql Server..................
CREATE TABLE [nilesh] (
[LargeObjectID] [int] NOT NULL ,
[LargeObject] [image] NULL ,
[ContentType] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LargeObjectName] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LargeObjectExtension] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LargeObjectDescription] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LargeObjectSize] [int] NULL ,
[VersionControl] [bit] NULL ,
[WhenLargeObjectLocked] [datetime] NULL ,
[WhoLargeObjectLocked] [char] (11) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LargeObjectTimeStamp] [timestamp] NOT NULL ,
[LargeObjectOID] [uniqueidentifier] NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Table in Oracle..............
CREATE TABLE LARGEOBJECT
(
LARGEOBJECTID NUMBER(10) NOT NULL,
LARGEOBJECT BLOB,
CONTENTTYPE VARCHAR2(40 BYTE),
LARGEOBJECTNAME VARCHAR2(255 BYTE),
LARGEOBJECTEXTENSION VARCHAR2(10 BYTE),
LARGEOBJECTDESCRIPTION VARCHAR2(255 BYTE),
LARGEOBJECTSIZE NUMBER(10),
VERSIONCONTROL NUMBER(1),
WHENLARGEOBJECTLOCKED DATE,
WHOLARGEOBJECTLOCKED CHAR(11 BYTE),
LARGEOBJECTTIMESTAMP NUMBER(8) NOT NULL,
LARGEOBJECTOID RAW(16) NOT NULL
)
TABLESPACE USERS
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
LOB (LARGEOBJECT) STORE AS
( TABLESPACE USERS
ENABLE STORAGE IN ROW
CHUNK 8192
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
)
NOCACHE
NOPARALLEL
MONITORING;
Sql Loader script....
SET NLS_DATE_FORMAT=Mon dd YYYY HH:mi:ssAM
REM SET NLS_TIMESTAMP_FORMAT=Mon dd YYYY HH:mi:ss:ffAM
REM SET NLS_LANGUAGE=AL32UTF8
sqlldr cecildata/@ceciltst control=LARGEOBJECT.ctl log=LARGEOBJECT.log
Sql loader control file......
load data
infile 'nilesh.dat' "str '<er>'"
into table LARGEOBJECT
fields terminated by '<ec>'
trailing nullcols
(LARGEOBJECTID,
LARGEOBJECT CHAR(2000000) "HEXTORAW (:LARGEOBJECT)",
CONTENTTYPE "DECODE(:CONTENTTYPE, CHR(00), ' ', :CONTENTTYPE)",
LARGEOBJECTNAME CHAR(255) "DECODE(:LARGEOBJECTNAME, CHR(00), ' ', :LARGEOBJECTNAME)",
LARGEOBJECTEXTENSION "DECODE(:LARGEOBJECTEXTENSION, CHR(00), ' ', :LARGEOBJECTEXTENSION)",
LARGEOBJECTDESCRIPTION CHAR(255) "DECODE(:LARGEOBJECTDESCRIPTION, CHR(00), ' ', :LARGEOBJECTDESCRIPTION)",
LARGEOBJECTSIZE,
VERSIONCONTROL,
WHENLARGEOBJECTLOCKED,
WHOLARGEOBJECTLOCKED,
LARGEOBJECTTIMESTAMP,
LARGEOBJECTOID "GUID_MOVER(:LARGEOBJECTOID)")
Error Received...
Column Name Position Len Term Encl Datatype
LARGEOBJECTID FIRST * CHARACTER
Terminator string : '<ec>'
LARGEOBJECT NEXT ***** CHARACTER
Maximum field length is 2000000
Terminator string : '<ec>'
SQL string for column : "HEXTORAW (:LARGEOBJECT)"
CONTENTTYPE NEXT * CHARACTER
Terminator string : '<ec>'
SQL string for column : "DECODE(:CONTENTTYPE, CHR(00), ' ', :CONTENTTYPE)"
LARGEOBJECTNAME NEXT 255 CHARACTER
Terminator string : '<ec>'
SQL string for column : "DECODE(:LARGEOBJECTNAME, CHR(00), ' ', :LARGEOBJECTNAME)"
LARGEOBJECTEXTENSION NEXT * CHARACTER
Terminator string : '<ec>'
SQL string for column : "DECODE(:LARGEOBJECTEXTENSION, CHR(00), ' ', :LARGEOBJECTEXTENSION)"
LARGEOBJECTDESCRIPTION NEXT 255 CHARACTER
Terminator string : '<ec>'
SQL string for column : "DECODE(:LARGEOBJECTDESCRIPTION, CHR(00), ' ', :LARGEOBJECTDESCRIPTION)"
LARGEOBJECTSIZE NEXT * CHARACTER
Terminator string : '<ec>'
VERSIONCONTROL NEXT * CHARACTER
Terminator string : '<ec>'
WHENLARGEOBJECTLOCKED NEXT * CHARACTER
Terminator string : '<ec>'
WHOLARGEOBJECTLOCKED NEXT * CHARACTER
Terminator string : '<ec>'
LARGEOBJECTTIMESTAMP NEXT * CHARACTER
Terminator string : '<ec>'
LARGEOBJECTOID NEXT * CHARACTER
Terminator string : '<ec>'
SQL string for column : "GUID_MOVER(:LARGEOBJECTOID)"
SQL*Loader-309: No SQL string allowed as part of LARGEOBJECT field specification
What's the cause?
"The previous attempt to use OMWB online loading has generated garbage data. The picture was not matching with the person id." This is being worked on (bug 4119713). If you have a reproducible testcase please send it in (small testcases seem to work ok).
I have the following email about BLOBS I could forward to you if I have your email address:
[The forum may cut the lines in the wrong places]
Regards,
Turloch
Oracle Migration Workbench Team
Hi,
This may provide the solution. Without having the customer files here I can only guess at the problem. But this should help.
This email outlines a BLOB data move.
There are quite a few steps to complete the task of moving a large BLOB into the Oracle database.
Normally this wouldn't be a problem, but as far as we can tell SQL Server's (and possibly Sybase) BCP does not reliably export binary data.
The only way to export binary data properly via BCP is to export it in a HEX format.
Once in a HEX format it is difficult to get it back to binary during a data load into Oracle.
We have come up with the idea of getting the HEX values into Oracle by saving them in a CLOB (holds text) column.
We then convert the HEX values to binary values and insert them into the BLOB column.
The problem here is that the HEXTORAW function in Oracle only converts a maximum of 2000 HEX pairs.
We over came this problem by writing our own procedure that will convert (bit by bit) your HEX data to binary.
NOTE: YOU MUST MODIFY THE START.SQL AND FINISH.SQL TO SUIT YOUR CUSTOMER
The task is split into 4 sub tasks
1) CREATE A TABLESPACE TO HOLD ALL THE LOB DATA
--log into your system schema and create a tablespace
--Create a new tablespace for the CLOB and BLOB column (this may take a while to create)
--You may resize this to fit your data ,
--but I believe you have in excess of 500MB of data in this table and we are going to save it twice (in a clob then a blob)
--Note: This script may take some time to execute as it has to create a tablespace of 1000Mb.
-- Change this to suit your customer.
-- You can change this if you want depending on the size of your data
-- Remember that we save the data once as CLOB and then as BLOB
create tablespace lob_tablespace datafile 'lob_tablespace' SIZE 1000M AUTOEXTEND ON NEXT 50M;
LOG INTO YOUR TABLE SCHEMA IN ORACLE
--Modify this script to fit your requirements
2) START.SQL (this script will do the following tasks)
a) Modify your current schema so that it can accept HEX data
b) Modify your current schema so that it can hold that huge amount of data.
The new tablespace is used; you may want to alter this to your requirements
c) Disable triggers, indexes & primary keys on tblfiles
3)DATA MOVE
The data move now involves moving the HEX data in the .dat files to a CLOB.
The START.SQL script adds a new column to <tablename> called <blob_column>_CLOB.
This is where the HEX values will be stored.
MODIFY YOUR CONTROL FILE TO LOOK LIKE THIS
load data
infile '<tablename>.dat' "str '<er>'"
into table <tablename>
fields terminated by '<ec>'
trailing nullcols
(
<blob_column>_CLOB CHAR(200000000),
...
)
The important part being "_CLOB" appended to your BLOB column name and the datatype set to CHAR(200000000)
RUN sql_loader_script.bat
Log into your schema to check if the data was loaded successfully -- now you can see that the hex values were sent to the CLOB column:
SQL> select dbms_lob.getlength(<blob_column>),dbms_lob.getlength(<blob_column>_clob) from <tablename>;
LOG INTO YOUR SCHEMA
4)FINISH.SQL (this script will do the following tasks)
a) Creates the procedure needed to perform the CLOB to BLOB transformation
b) Executes the procedure (this may take some time a 500Mb has to be converted to BLOB)
c) Alters the table back to its original form (removes the <blob_column>_clob)
b) Enables the triggers, indexes and primary keys
Regards,
(NAME)
-- START.SQL
-- Modify this for your particular customer
-- This should be executed in the user schema in Oracle that contains the table.
-- DESCRIPTION:
-- ALTERS THE OFFENDING TABLE SO THAT THE DATA MOVE CAN BE EXECUTED
-- DISABLES TRIGGERS, INDEXES AND SEQUENCES ON THE OFFENDING TABLE
-- 1) Add an extra column to hold the hex string
alter table <tablename> add (FILEBINARY_CLOB CLOB);
-- 2) Allow the BLOB column to accpet NULLS
alter table <tablename> MODIFY FILEBINARY NULL;
-- 3) Dissable triggers and sequences on tblfiles
alter trigger <triggername> disable;
alter table tblfiles drop primary key cascade;
drop index <indexname>;
-- 4) Allow the table to use the tablespace
alter table <tablename> move lob (<blob_column>) store as (tablespace lob_tablespace);
alter table tblfiles move lob (<blob_column>_clob) store as (tablespace lob_tablespace);
COMMIT;
-- END OF FILE
-- FINISH.SQL
-- Modify this for your particular customer
-- This should be executed in the table schema in Oracle.
-- DESCRIPTION:
-- MOVES THE DATA FROM CLOB TO BLOB
-- MODIFIES THE TABLE BACK TO ITS ORIGIONAL SPEC (without a clob)
-- THEN ENABLES THE SEQUENCES, TRIGGERS AND INDEXES AGAIN
-- Currently we have the hex values saved as text in the <columnname>_CLOB column
-- And we have NULL in all rows for the <columnname> column.
-- We have to get BLOB locators for each row in the BLOB column
-- put empty blobs in the blob column
UPDATE <tablename> SET filebinary=EMPTY_BLOB();
COMMIT;
-- create the following procedure in your table schema
CREATE OR REPLACE PROCEDURE CLOBTOBLOB
AS
inputLength NUMBER; -- size of input CLOB
offSet NUMBER := 1;
pieceMaxSize NUMBER := 50; -- the max size of each piece
piece VARCHAR2(50); -- these pieces will make up the entire CLOB
currentPlace NUMBER := 1; -- this is where we're up to in the CLOB
blobLoc BLOB; -- blob locator in the table
clobLoc CLOB; -- clob locator; this is the value from the dat file
-- THIS HAS TO BE CHANGED FOR SPECIFIC CUSTOMER TABLE AND COLUMN NAMES
CURSOR cur IS SELECT <blob_column>_clob clob_column, <blob_column> blob_column FROM /*table*/ <tablename> FOR UPDATE;
cur_rec cur%ROWTYPE;
BEGIN
OPEN cur;
FETCH cur INTO cur_rec;
WHILE cur%FOUND
LOOP
-- RETRIEVE THE clobLoc and blobLoc
clobLoc := cur_rec.clob_column;
blobLoc := cur_rec.blob_column;
currentPlace := 1; -- reset every time
-- find the length of the clob
inputLength := DBMS_LOB.getLength(clobLoc);
-- loop through each piece
LOOP
-- get the next piece of the clob
piece := DBMS_LOB.subStr(clobLoc,pieceMaxSize,currentPlace);
-- append this piece to the BLOB
DBMS_LOB.WRITEAPPEND(blobLoc, LENGTH(piece)/2, HEXTORAW(piece));
currentPlace := currentPlace + pieceMaxSize;
EXIT WHEN inputLength < currentPlace;
END LOOP;
FETCH cur INTO cur_rec;
END LOOP;
END CLOBtoBLOB;
-- now run the procedure
-- It will update the blob column with the correct binary representation of the clob column
EXEC CLOBtoBLOB;
-- drop the extra clob column
alter table <tablename> drop column <blob_column>_clob;
-- 2) apply the constraint we removed during the data load
alter table <tablename> MODIFY FILEBINARY NOT NULL;
-- Now re enable the triggers,indexs and primary keys
alter trigger <triggername> enable;
ALTER TABLE TBLFILES ADD ( CONSTRAINT <pkname> PRIMARY KEY ( <column>) ) ;
CREATE INDEX <index_name> ON TBLFILES ( <column> );
COMMIT;
-- END OF FILE
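Before FINISH.SQL drops the helper column, a quick sanity check is possible: each pair of hex characters becomes one byte, so every BLOB should end up exactly half the length of its CLOB source. An illustrative query using the same placeholder names as the scripts above:

```sql
-- Run after EXEC CLOBtoBLOB but before the DROP COLUMN step.
SELECT COUNT(*) AS bad_rows
FROM   <tablename>
WHERE  DBMS_LOB.getlength(<blob_column>) * 2
       <> DBMS_LOB.getlength(<blob_column>_clob);
```

A bad_rows count of zero means the conversion covered every row.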