Control file for SQL*Loader
Guys,
My data comes from a third party in a flat file in the format below:
1,Stewart,"Current Address: Street-A, Flat-507, London, UK
Permanent Address: Street-B, Flat-201, London, UK"
2,Patrick,"Current Address: Street-A, Flat-507, Bangalore, India
Permanent Address: Street-B, Flat-201, Delhi, India"
I want to load the data into a custom table using a SQL*Loader program driven by a control file. It needs to populate the table as follows:
SEQ EMPLOYEE ADDRESS INFORMATION
1 Stewart Current Address: Street-A, Flat-507, London, UK
Permanent Address: Street-B, Flat-201, London, UK
2 Patrick Current Address: Street-A, Flat-507, Bangalore, India
Permanent Address: Street-B, Flat-201, Delhi, India
Could you please help me write the control file to read the data from the file and populate the table as described above? Thanks.
-Sunil
It is a little hard to tell exactly what your data file looks like and what you want your results to be, due to this forum mangling things a bit and adding extra lines and such. In general you can either use CONTINUEIF or CONCATENATE. I have demonstrated both below. When using CONTINUEIF, it assumes each additional line begins with the word "Permanent". When using CONCATENATE, it assumes each two lines constitutes one record. I have also used REPLACE to add a line feed in front of "Permanent", after it is removed during continuation or concatenation.
SCOTT@orcl12c> host type test.dat
1,Stewart,"Current Address: Street-A, Flat-507, London, UK
Permanent Address: Street-B, Flat-201, London, UK"
2,Patrick,"Current Address: Street-A, Flat-507, Bangalore, India
Permanent Address: Street-B, Flat-201, Delhi, India"
SCOTT@orcl12c> host type test.ctl
load data
infile test.dat
continueif next preserve (1:9) = 'Permanent'
into table test_tab
fields terminated by ','
optionally enclosed by '"'
trailing nullcols
(seq, employee,
address_information "replace (:address_information, 'Permanent', CHR(10) || 'Permanent')")
SCOTT@orcl12c> host type test2.ctl
load data
infile test.dat
concatenate 2
into table test_tab
fields terminated by ','
optionally enclosed by '"'
trailing nullcols
(seq, employee,
address_information "replace (:address_information, 'Permanent', CHR(10) || 'Permanent')")
SCOTT@orcl12c> create table test_tab
2 (seq number,
3 employee varchar2(8),
4 address_information varchar2(200))
5 /
Table created.
SCOTT@orcl12c> host sqlldr scott/tiger control=test.ctl log=test.log
SQL*Loader: Release 12.1.0.1.0 - Production on Mon Dec 16 13:11:40 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Path used: Conventional
Commit point reached - logical record count 1
Commit point reached - logical record count 2
Table TEST_TAB:
2 Rows successfully loaded.
Check the log file:
test.log
for more information about the load.
SCOTT@orcl12c> column address_information format a60
SCOTT@orcl12c> select * from test_tab
2 /
SEQ EMPLOYEE ADDRESS_INFORMATION
1 Stewart Current Address: Street-A, Flat-507, London, UK
Permanent Address: Street-B, Flat-201, London, UK
2 Patrick Current Address: Street-A, Flat-507, Bangalore, India
Permanent Address: Street-B, Flat-201, Delhi, India
2 rows selected.
SCOTT@orcl12c> truncate table test_tab
2 /
Table truncated.
SCOTT@orcl12c> host sqlldr scott/tiger control=test2.ctl log=test2.log
SQL*Loader: Release 12.1.0.1.0 - Production on Mon Dec 16 13:11:40 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Path used: Conventional
Commit point reached - logical record count 2
Table TEST_TAB:
2 Rows successfully loaded.
Check the log file:
test2.log
for more information about the load.
SCOTT@orcl12c> select * from test_tab
2 /
SEQ EMPLOYEE ADDRESS_INFORMATION
1 Stewart Current Address: Street-A, Flat-507, London, UK
Permanent Address: Street-B, Flat-201, London, UK
2 Patrick Current Address: Street-A, Flat-507, Bangalore, India
Permanent Address: Street-B, Flat-201, Delhi, India
2 rows selected.
Similar Messages
Concatenate positions in control file for sql*loader utility.
Is there a way to concatenate several character positions in a control file for the SQL*Loader utility?
Example: field1 position(1:3) || position(5:7)?
I would rather not create any unnecessary temp tables for a straight load.

How about:
field1 position(1:7) char "substr(:field1, 1, 3) || substr(:field1, 5)"
Control file for SQL*Loader
Hello,
I have the following file (2 lines) which I want to insert into an Oracle database with SQL*Loader.
tommy050+3423
tom 070-0006
The file consists of two lines and three columns.
Column 1 is always 5 characters and is a string; unused positions are padded with blanks. Here: tommy and tom.
Column 2 has three digits and is a number. Here: 050 and 070.
Column 3 is also a number and consists of four digits, plus a fifth character for the leading sign ("+" or "-"). Here: +3423 and -0006.
I want to import this file into my table MyTab:
MyTab (
  thing VARCHAR2(5),
  number NUMBER(3),
  amount NUMBER
)
What would the control file for SQL*Loader look like for this example?
Regards
Homer

This is a start (untested):
LOAD DATA
INFILE 'sample.dat'
BADFILE 'sample.bad'
DISCARDFILE 'sample.dsc'
APPEND
INTO TABLE MyTab
( thing POSITION(1:5) CHAR,
  NUMBER POSITION(6:8) INTEGER,
  amount POSITION(9:12) INTEGER )

Can you have a column named NUMBER?
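To answer that last question: NUMBER is a reserved word in Oracle, so a column can only carry that name as a quoted identifier, and the control file has to quote it the same way. A hedged, untested sketch of the field list (note the sign makes the third field 5 characters wide, so DECIMAL EXTERNAL over positions 9:13 seems safer than a binary INTEGER over 9:12):

```
( thing    POSITION(1:5)  CHAR,
  "NUMBER" POSITION(6:8)  INTEGER EXTERNAL,
  amount   POSITION(9:13) DECIMAL EXTERNAL )
```

Renaming the column (e.g. to NUM) would avoid the quoting altogether.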
Utility to convert ASE fmt files to ctl files for sql*loader
Does anyone have a utility to convert bcp format files to oracle ctl files for use with sql*loader?
Hi,
The Oracle Migration Workbench will produce both BCP and the equivalent Oracle .ctl files for all tables in a database.
Regards
John
New Line in data file for sql loader
Hi all,
I have a requirement to load a data file in which some of the rows span multiple lines, as in the example below. If I put a '\n' character in the control file, it loads only those records that have a newline in them, ignoring the single-line ones. Please have a look at the control file and provide a solution:
32 grand street ~NY~NY
42 riverdrive,
apt 1B ~PL~TX
Richardson road
apt 32~ SF ~CA
As you can see, there are newline characters in records 2 and 3. Right now my control file looks like this:
LOAD DATA
INFILE 'example.txt' "STR '\r\n'"
INTO TABLE "temp_table"
INSERT
FIELDS TERMINATED BY '~'
TRAILING NULLCOLS
(ADDRESS1,
CITY,
STATE)
If I remove "STR '\r\n'", the data gets loaded into different rows; the third record, for example, looks like this:
ADDRESS CITY STATE
Richardson road null null
apt 32 SF CA
Please help.

Your requirement is very unclear.
Is this your INPUT ?
--record 1
32 grand street ~NY~NY
--record 2
42 riverdrive, --"NL"
apt 1B ~PL~TX
--record 3
Richardson road --"NL"
apt 32~ SF ~CA

Is this your OUTPUT?
ADDRESS CITY STATE
32 grand street NY NY
42 riverdrive, null null
apt 1B PL TX
Richardson road null null
apt 32 SF CA

Are you using a Linux or Windows platform?
You should have only one record terminator sequence.
\r\n is the Windows end-of-line sequence (carriage return + line feed).
"STR '\r\n'" = "str X'0D0A'"
If there is no data after \r\n, the columns will be null.
http://docs.oracle.com/cd/B28359_01/server.111/b28319/ldr_concepts.htm#i1005800
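For reference, here is a minimal control file using the hex form of STR mentioned above, assuming the file really does use \r\n only at true record ends, with bare \n for the embedded line breaks:

```
LOAD DATA
INFILE 'example.txt' "str X'0D0A'"
INTO TABLE "temp_table"
INSERT
FIELDS TERMINATED BY '~'
TRAILING NULLCOLS
(ADDRESS1, CITY, STATE)
```

If the file was produced on Windows, every line may end in \r\n, in which case STR cannot distinguish record ends from embedded breaks and the data itself needs a distinct record terminator.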
Format input for Sql*Loader
I have a problem formatting an input file for SQL*Loader.
The file has the following format:
1 Jeff
2 Kyle
3 Tom
1 Lisa
2 Bryan
3 Max
4 Rob
5 Steve
1 Richard
2 Mary
and so on
The file should look as follows to be processed with SQL*Loader:
1 Jeff 2 Kyle 3 Tom
1 Lisa 2 Bryan 3 Max 4 Rob 5 Steve
1 Richard 2 Mary
any suggestions?
Edited by: royce on Sep 18, 2008 2:08 AM
Actually, looking at your data again, you don't have a consistent format, which makes it really difficult for something like SQL*Loader to process. If there were, say, always 3 rows per record, that would be simple, but you are expecting SQL*Loader to look ahead to determine whether there is more data or it has reached the end of the record.
You would be better off loading the data as-is (external tables would probably make it quicker and easier, since the data arrives in the order it appears in the file, which helps given that you have no group identifier for the sets of records) and then using SQL to process it.
You haven't specified the resulting table that the data is loaded into. Is it just one single VARCHAR column with all the data concatenated, or several columns (one for each name)?
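To sketch the load-then-process idea (hypothetical names; assumes a load-order column such as rn generated with RECNUM in the control file, and an 11.2+ database for LISTAGG): a new group starts wherever the per-record counter resets to 1, so a running sum of those resets labels the groups:

```sql
SELECT grp,
       LISTAGG(seq || ' ' || name, ' ') WITHIN GROUP (ORDER BY rn) AS line
FROM  (SELECT rn, seq, name,
              SUM(CASE WHEN seq = 1 THEN 1 ELSE 0 END)
                OVER (ORDER BY rn) AS grp
       FROM   staging_names)
GROUP  BY grp
ORDER  BY grp;
```

On releases before 11.2, SYS_CONNECT_BY_PATH or a collection-based approach can stand in for LISTAGG.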
Substitution for logical OR usage in control file of sql loader
Hi,
can anyone suggest a substitution method for the functionality of logical OR in a control file developed for SQL*Loader?
Ex:
load data
append
into table ABC
when (1:2) = 'AD'
--AND ((27:28)= '01' OR (27:28)= '02')
AND (1222:1222) = '1'
trailing nullcols
Note: the condition commented out in the above example needs to be replaced.
One way of doing it is to split into blocks, one per condition.
Then it would look like:
load data
append
into table ABC
when (1:2) = 'AD'
AND (27:28)= '01'
AND (1222:1222) = '1'
trailing nullcols
into table ABC
when (1:2) = 'AD'
AND (27:28)= '02'
AND (1222:1222) = '1'
trailing nullcols
So I'm looking for a better way, since I cannot work with the above splitting logic given the large number of conditions I'm dealing with.
Thanks in advance
Kishore
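Since SQL*Loader's WHEN clause only ANDs simple comparisons, one commonly suggested alternative (a sketch with hypothetical names, not tested here) is to define an external table over the same flat file and do a single INSERT ... SELECT, where a normal WHERE clause with OR/IN is available:

```sql
INSERT INTO abc
SELECT ...
FROM   abc_ext                          -- external table over the flat file
WHERE  SUBSTR(rec, 1, 2)    = 'AD'
AND    SUBSTR(rec, 27, 2)   IN ('01', '02')
AND    SUBSTR(rec, 1222, 1) = '1';
```

With many conditions this keeps the filter in one place instead of one INTO TABLE block per combination.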
Changing the File path for SQL Loader Recognition
I am learning how to create a control file. The names.ctl file was placed in a "Names" folder under "C:\Windows".
I get the following error when trying to run the script for sqlldr:
Sql*Loader-500 Unable to open file.
Sql*Loader-553 file not found
Sql*Loader-System error: the system cannot find the specified file.
The path to the file is c:\Windows\names\names.ctl.
How do I make SQL*Loader recognize it?

Please post details of OS and database versions. Have you tried this?
sqlldr CONTROL=c:\Windows\names\names.ctl ...

HTH
Srini
Loading huge file with Sql-Loader from Java
Hi,
I have a CSV file with approx. 3.5 million records.
I load this data with sqlldr from within Java like this:
String command = "sqlldr userid=" + user + "/" + pass
+ "@" + service + " control='" + ctlFile + "'";
System.out.println(command);
if (System.getProperty("os.name").contains("Windows")) {
p = Runtime.getRuntime().exec("cmd /C " + command);
} else {
p = Runtime.getRuntime().exec("sh -c " + command);
}

It does what I want (loads the data into a certain table), BUT it takes too much time. Is there a faster way to load data into an Oracle DB from within Java?
Thanks, any advice is very welcome.

Have your DBA work on this issue - they can monitor and check the performance of SQL*Loader:
SQL*Loader performance tips [Document 28631.1]
SQL*LOADER SLOW PERFORMANCE [Document 1026145.6]
Master Note for SQL*Loader [Document 1264730.1]
HTH
Srini
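Beyond DBA monitoring, one option often worth testing for large loads (assuming the control file and table are compatible with its restrictions, which the Utilities guide describes) is direct-path loading, enabled by adding direct=true to the command the Java code builds:

```
sqlldr userid=scott/tiger@orcl control=items.ctl direct=true
```

Direct path bypasses conventional INSERT processing and is typically much faster for multi-million-row loads; the user, service, and control-file names above are placeholders.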
Issue while loading a csv file using sql*loader...
Hi,
I am loading a csv file using sql*loader.
On the number columns that contain data (decimal numbers/integers), the row errors out with the error:
ORA-01722: invalid number
I tried checking the value picked from the Excel file,
and found chr(13), chr(32), chr(10) characters embedded in it.
For example, length() on a field that displays as '0.21' gives a value of 7,
and when I checked each character with
ascii(substr(...)), it returned 9 (a tab), etc.
I tried the following expression,
"to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
to remove all the non-numeric special characters, but I am still facing the error.
Please let me know, any solution for this error.
Thanks in advance.
Kiran

Control file:
OPTIONS (ROWS=1, ERRORS=10000)
LOAD DATA
CHARACTERSET WE8ISO8859P1
INFILE '$Xx_TOP/bin/ITEMS.csv'
APPEND INTO TABLE XXINF.ITEMS_STAGE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
ItemNum "trim(replace(replace(:ItemNum,chr(9),''),chr(13),''))",
cross_ref_old_item_num "trim(replace(replace(:cross_ref_old_item_num,chr(9),''),chr(13),''))",
Mas_description "trim(replace(replace(:Mas_description,chr(9),''),chr(13),''))",
Mas_long_description "trim(replace(replace(:Mas_long_description,chr(9),''),chr(13),''))",
Org_description "trim(replace(replace(:Org_description,chr(9),''),chr(13),''))",
Org_long_description "trim(replace(replace(:Org_long_description,chr(9),''),chr(13),''))",
user_item_type "trim(replace(replace(:user_item_type,chr(9),''),chr(13),''))",
organization_code "trim(replace(replace(:organization_code,chr(9),''),chr(13),''))",
primary_uom_code "trim(replace(replace(:primary_uom_code,chr(9),''),chr(13),''))",
inv_default_item_status "trim(replace(replace(:inv_default_item_status,chr(9),''),chr(13),''))",
inventory_item_flag "trim(replace(replace(:inventory_item_flag,chr(9),''),chr(13),''))",
stock_enabled_flag "trim(replace(replace(:stock_enabled_flag,chr(9),''),chr(13),''))",
mtl_transactions_enabled_flag "trim(replace(replace(:mtl_transactions_enabled_flag,chr(9),''),chr(13),''))",
revision_qty_control_code "trim(replace(replace(:revision_qty_control_code,chr(9),''),chr(13),''))",
reservable_type "trim(replace(replace(:reservable_type,chr(9),''),chr(13),''))",
check_shortages_flag "trim(replace(replace(:check_shortages_flag,chr(9),''),chr(13),''))",
shelf_life_code "trim(replace(replace(replace(replace(:shelf_life_code,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
shelf_life_days "trim(replace(replace(replace(replace(:shelf_life_days,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
lot_control_code "trim(replace(replace(:lot_control_code,chr(9),''),chr(13),''))",
auto_lot_alpha_prefix "trim(replace(replace(:auto_lot_alpha_prefix,chr(9),''),chr(13),''))",
start_auto_lot_number "trim(replace(replace(:start_auto_lot_number,chr(9),''),chr(13),''))",
negative_measurement_error "trim(replace(replace(replace(replace(:negative_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
positive_measurement_error "trim(replace(replace(replace(replace(:positive_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
serial_number_control_code "trim(replace(replace(:serial_number_control_code,chr(9),''),chr(13),''))",
auto_serial_alpha_prefix "trim(replace(replace(:auto_serial_alpha_prefix,chr(9),''),chr(13),''))",
start_auto_serial_number "trim(replace(replace(:start_auto_serial_number,chr(9),''),chr(13),''))",
location_control_code "trim(replace(replace(:location_control_code,chr(9),''),chr(13),''))",
restrict_subinventories_code "trim(replace(replace(:restrict_subinventories_code,chr(9),''),chr(13),''))",
restrict_locators_code "trim(replace(replace(:restrict_locators_code,chr(9),''),chr(13),''))",
bom_enabled_flag "trim(replace(replace(:bom_enabled_flag,chr(9),''),chr(13),''))",
costing_enabled_flag "trim(replace(replace(:costing_enabled_flag,chr(9),''),chr(13),''))",
inventory_asset_flag "trim(replace(replace(:inventory_asset_flag,chr(9),''),chr(13),''))",
default_include_in_rollup_flag "trim(replace(replace(:default_include_in_rollup_flag,chr(9),''),chr(13),''))",
cost_of_goods_sold_account "trim(replace(replace(:cost_of_goods_sold_account,chr(9),''),chr(13),''))",
std_lot_size "trim(replace(replace(replace(replace(:std_lot_size,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
sales_account "trim(replace(replace(:sales_account,chr(9),''),chr(13),''))",
purchasing_item_flag "trim(replace(replace(:purchasing_item_flag,chr(9),''),chr(13),''))",
purchasing_enabled_flag "trim(replace(replace(:purchasing_enabled_flag,chr(9),''),chr(13),''))",
must_use_approved_vendor_flag "trim(replace(replace(:must_use_approved_vendor_flag,chr(9),''),chr(13),''))",
allow_item_desc_update_flag "trim(replace(replace(:allow_item_desc_update_flag,chr(9),''),chr(13),''))",
rfq_required_flag "trim(replace(replace(:rfq_required_flag,chr(9),''),chr(13),''))",
buyer_name "trim(replace(replace(:buyer_name,chr(9),''),chr(13),''))",
list_price_per_unit "trim(replace(replace(replace(replace(:list_price_per_unit,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
taxable_flag "trim(replace(replace(:taxable_flag,chr(9),''),chr(13),''))",
purchasing_tax_code "trim(replace(replace(:purchasing_tax_code,chr(9),''),chr(13),''))",
receipt_required_flag "trim(replace(replace(:receipt_required_flag,chr(9),''),chr(13),''))",
inspection_required_flag "trim(replace(replace(:inspection_required_flag,chr(9),''),chr(13),''))",
price_tolerance_percent "trim(replace(replace(replace(replace(:price_tolerance_percent,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
expense_account "trim(replace(replace(:expense_account,chr(9),''),chr(13),''))",
allow_substitute_receipts_flag "trim(replace(replace(:allow_substitute_receipts_flag,chr(9),''),chr(13),''))",
allow_unordered_receipts_flag "trim(replace(replace(:allow_unordered_receipts_flag,chr(9),''),chr(13),''))",
receiving_routing_code "trim(replace(replace(:receiving_routing_code,chr(9),''),chr(13),''))",
inventory_planning_code "trim(replace(replace(:inventory_planning_code,chr(9),''),chr(13),''))",
min_minmax_quantity "trim(replace(replace(replace(replace(:min_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
max_minmax_quantity "trim(replace(replace(replace(replace(:max_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
planning_make_buy_code "trim(replace(replace(:planning_make_buy_code,chr(9),''),chr(13),''))",
source_type "trim(replace(replace(:source_type,chr(9),''),chr(13),''))",
mrp_safety_stock_code "trim(replace(replace(:mrp_safety_stock_code,chr(9),''),chr(13),''))",
material_cost "trim(replace(replace(:material_cost,chr(9),''),chr(13),''))",
mrp_planning_code "trim(replace(replace(:mrp_planning_code,chr(9),''),chr(13),''))",
customer_order_enabled_flag "trim(replace(replace(:customer_order_enabled_flag,chr(9),''),chr(13),''))",
customer_order_flag "trim(replace(replace(:customer_order_flag,chr(9),''),chr(13),''))",
shippable_item_flag "trim(replace(replace(:shippable_item_flag,chr(9),''),chr(13),''))",
internal_order_flag "trim(replace(replace(:internal_order_flag,chr(9),''),chr(13),''))",
internal_order_enabled_flag "trim(replace(replace(:internal_order_enabled_flag,chr(9),''),chr(13),''))",
invoice_enabled_flag "trim(replace(replace(:invoice_enabled_flag,chr(9),''),chr(13),''))",
invoiceable_item_flag "trim(replace(replace(:invoiceable_item_flag,chr(9),''),chr(13),''))",
cross_ref_ean_code "trim(replace(replace(:cross_ref_ean_code,chr(9),''),chr(13),''))",
category_set_intrastat "trim(replace(replace(:category_set_intrastat,chr(9),''),chr(13),''))",
CustomCode "trim(replace(replace(:CustomCode,chr(9),''),chr(13),''))",
net_weight "trim(replace(replace(replace(replace(:net_weight,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
production_speed "trim(replace(replace(:production_speed,chr(9),''),chr(13),''))",
LABEL "trim(replace(replace(:LABEL,chr(9),''),chr(13),''))",
comment1_org_level "trim(replace(replace(:comment1_org_level,chr(9),''),chr(13),''))",
comment2_org_level "trim(replace(replace(:comment2_org_level,chr(9),''),chr(13),''))",
std_cost_price_scala "to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
supply_type "trim(replace(replace(:supply_type,chr(9),''),chr(13),''))",
subinventory_code "trim(replace(replace(:subinventory_code,chr(9),''),chr(13),''))",
preprocessing_lead_time "trim(replace(replace(replace(replace(:preprocessing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
processing_lead_time "trim(replace(replace(replace(replace(:processing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
wip_supply_locator "trim(replace(replace(:wip_supply_locator,chr(9),''),chr(13),''))"
Sample data from csv file.
"9901-0001-35","390000","JMKL16 Pipe bend 16 mm","","JMKL16 Putkikaari 16 mm","","AI","FJE","Ea","","","","","","","","","","","","","","","","","","","","","","","","","21-21100-22200-00000-00000-00-00000-00000","0","21-11100-22110-00000-00000-00-00000-00000","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.1","Pull","AFTER PROD","","","Locator for Production"
The load errors out especially on two columns:
1) std_cost_price_scala
2) list_price_per_unit
Both are number columns.
When data is provided in them, it errors out; but if they hold null values, the records go through fine.
Message was edited by:
KK28
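A shorter alternative to the nested REPLACE chains (a sketch, untested against this data) is a single TRANSLATE that drops tab, line feed, carriage return, and space in one pass; the leading 'x' is needed because TRANSLATE's third argument cannot be empty:

```
std_cost_price_scala "to_number(translate(:std_cost_price_scala, 'x' || chr(9) || chr(10) || chr(13) || chr(32), 'x'))"
```

If the error persists with visibly clean digits, it is also worth checking the session's NLS_NUMERIC_CHARACTERS setting, since a comma decimal separator will make to_number('0.21') fail with ORA-01722.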
How to set default directory for SQL LOADER
hi all,
I wanted to know whether (and how) we can set up a default directory for SQL*Loader. I cannot place my control and data files on the local system and use them at the command prompt; rather, I want to know if we can set a default directory that the loader can use. The requirement is basically to enable all clients to upload data placed on the server using the loader utility.
thanks in advance,
Basavraj

Ella,
You don't say which version of SQL Developer you are using via Citrix, but just setting the SQLDEVELOPER_USER_DIR hasn't worked for a long time (see Re: SQLDEVELOPER_USER_DIR does not function anymore). Also, since version 1.5, the default for the user directory (now set via ide.user.dir as shown below) is under the user profile area (relative to %APPDATA%), which you should be able to write to, even on Citrix.
Assuming that neither of those help, you will need to get whoever installed SQL Developer on the Citrix C: drive to modify the sqldeveloper.conf to have a line like, where the path exists for everyone who will be using the shared SQL Developer (assumes everyone has a H: drive):
AddVMOption -Dide.user.dir=H:\sqldeveloper

An alternative (depending on how you start SQL Developer via Citrix) is to create your own shortcut that starts SQL Developer with something like:
sqldeveloper -J-Dide.user.dir="%SQLDEVELOPER_USER_DIR%"

theFurryOne
Problem loading XML-file using SQL*Loader
Hello,
I'm using 9.2 and trying to load an XML file using SQL*Loader.
Loader control-file:
LOAD DATA
INFILE *
INTO TABLE BATCH_TABLE TRUNCATE
FIELDS TERMINATED BY ','
(
  FILENAME char(255),
  XML_DATA LOBFILE (FILENAME) TERMINATED BY EOF
)
BEGINDATA
data.xml
The BATCH_TABLE is created as:
CREATE TABLE BATCH_TABLE (
FILENAME VARCHAR2 (50),
XML_DATA SYS.XMLTYPE ) ;
And the data.xml contains the following lines:
<?xml version="2.0" encoding="UTF-8"?>
<!DOCTYPE databatch SYSTEM "databatch.dtd">
<batch>
<record>
<data>
<type>10</type>
</data>
</record>
<record>
<data>
<type>20</type>
</data>
</record>
</batch>
However, the sqlldr gives me an error:
Record 1: Rejected - Error on table BATCH_TABLE, column XML_DATA.
ORA-21700: object does not exist or is marked for delete
ORA-06512: at "SYS.XMLTYPE", line 0
ORA-06512: at line 1
If I remove the first two lines
"<?xml version="2.0" encoding="UTF-8"?>"
and
"<!DOCTYPE databatch SYSTEM "databatch.dtd">"
from data.xml, everything works, and the contents of data.xml are loaded into the table.
Any idea what I'm missing here? Likely the problem is with special characters.
Thanks in advance.

I'm able to load your file just by removing the second line <!DOCTYPE databatch SYSTEM "databatch.dtd">. I don't have your DTD file, so I skipped that line. Can you check whether the problem is with your DTD?
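One more detail worth checking, independent of the DOCTYPE (an observation about the sample as posted, not verified against this exact failure): the prolog declares version="2.0", but XML has no version 2.0. The declaration should say 1.0 (or 1.1 where supported), and a strict parser may reject the file on that alone:

```xml
<?xml version="1.0" encoding="UTF-8"?>
```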
Unknown issue while loading .dbf file by sql loader
Hi guys,
I am having an unknown issue while loading a .dbf file with SQL*Loader.
I need to load .dbf data into an Oracle table. For this, I converted the .dbf file by just changing its extension to .csv. The file structure after changing .dbf to .csv:
C_N_NUMBER,COMP_CODE,CPT_CODE,C_N_AMT,CM_NUMBER
1810/4,LKM,30,45,683196
1810/5,LKM,30,45,683197
1810/6,LKM,30,45,683198
1810/7,LKM,30,135,683200
1810/8,LKM,30,90,683201
1810/9,LKM,1,45,683246
1810/9,LKM,2,90,683246
1810/10,LKF,1,90,683286
2810/13,LKJ,1,50.5,680313
2810/14,LKJ,1,50,680316
1910/1,LKQ,1,90,680344
3910/2,LKF,1,238.12,680368
3910/3,LKF,1,45,680382
3910/4,LKF,1,45,680395
7910/5,LKS,1,45,680397
7910/6,LKS,1,90,680400
7910/7,LKS,1,45,680401
7910/8,LKS,1,238.12,680414
7910/9,LKS,1,193.12,680415
7910/10,LKS,1,45,680490
Then I load it with SQL*Loader, but I always get the errors below:
Record 1: Rejected - Error on table C_N_DETL_TAB, column CPT_CODE.
ORA-01438: value larger than specified precision allowed for this column
Record 2: Rejected - Error on table C_N_DETL_TAB, column CPT_CODE.
ORA-01438: value larger than specified precision allowed for this column
Table structure:
create table C_N_DETL_tab (
  "C_N_NUMBER" VARCHAR2(13),
  "COMP_CODE" VARCHAR2(3),
  "CPT_CODE" NUMBER(4),
  "C_N_AMT" NUMBER(20,18),
  "CM_NUMBER" NUMBER(7)
)
Control file:
options(skip=1)
load data
infile '/softdump/pc/C_N_DETL.csv'
badfile '/softdump/pc/C_N_DETL.bad'
discardfile '/softdump/pc/C_N_DETL.dsc'
into table C_N_DETL_tab
truncate
FIELDS TERMINATED BY ","
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  C_N_NUMBER CHAR,
  COMP_CODE CHAR,
  CPT_CODE INTEGER,
  C_N_AMT INTEGER,
  CM_NUMBER INTEGER
)
But when I increase the size of all the table columns up to their maximum values, the data loads; yet when I check the maximum column lengths after the load, they are much smaller.
Changed table structure:
create table C_N_DETL_tab (
  "C_N_NUMBER" VARCHAR2(130),
  "COMP_CODE" VARCHAR2(30),
  "CPT_CODE" NUMBER(32),    ---- max value of number
  "C_N_AMT" NUMBER(32,18),  ---- max value of number
  "CM_NUMBER" NUMBER(32)    ---- max value of number
)
Now I am running:
sqlldr express/express control=C_N_DETL.ctl log=C_N_DETL.log
o/p-
Table C_N_DETL_TAB, loaded from every logical record.
Insert option in effect for this table: TRUNCATE
TRAILING NULLCOLS option in effect
Column Name                  Position   Len  Term Encl Datatype
C_N_NUMBER                   FIRST      *    ,    O(") CHARACTER
COMP_CODE                    NEXT       *    ,    O(") CHARACTER
CPT_CODE                     NEXT       4              INTEGER
C_N_AMT                      NEXT       4              INTEGER
CM_NUMBER                    NEXT       4              INTEGER
Table C_N_DETL_TAB:
20 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
select max(length(CPT_CODE)) from C_N_DETL_tab;  --> 9
Can you tell me why I need to increase the table column sizes up to their maximum values, although the data itself is much shorter?
Kindly check it. Thanks in advance.
rgds,
pc

No database version given, of course. Unimportant.
If I recall correctly, it is 'INTEGER EXTERNAL', and you would do best double-quoting the alphanumerics (which you didn't).
Try changing INTEGER to INTEGER EXTERNAL in the ctl file.
Sybrand Bakker
Senior Oracle DBA
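To make that suggestion concrete (a hedged sketch, untested): bare INTEGER in a control file means a 4-byte binary field, which is why the generated layout above shows Len 4 and why character data misloads as large garbage numbers; the EXTERNAL forms read the textual digits instead, and DECIMAL EXTERNAL also accepts the decimal point in values like 50.5:

```
(
  C_N_NUMBER CHAR,
  COMP_CODE  CHAR,
  CPT_CODE   INTEGER EXTERNAL,
  C_N_AMT    DECIMAL EXTERNAL,
  CM_NUMBER  INTEGER EXTERNAL
)
```

With EXTERNAL datatypes the original CPT_CODE values fit NUMBER(4) again, though note that C_N_AMT values like 238.12 need more than the two integer digits NUMBER(20,18) allows.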
Need suggestions on loading 5000+ files using sql loader
Hi Guys,
I'm writing a shell script to load more than 5000 files using sql loader.
My intention is to load the files in parallel. When I checked the maximum number of sessions in v$parameter, it is around 700.
Before starting the data load, programmatically I am getting the number of current sessions and maximum number of sessions and keeping free 200 sessions without using them (max. no. of sessions minus 200 ) and utilizing the remaining ~300 sessions to load the files in parallel.
I am also using a "wait" option to make the shell wait until the 300 concurrent SQL*Loader processes complete before moving on.
Is there any way to make it more efficient? Also is it possible to reduce the wait time without hard coding the seconds (For Example: If any of those 300 sessions becomes free, assign the next file to the job queue and so on..)
Please share your thoughts on this.
Thanks.

Manohar wrote:
I'm writing a shell script to load more than 5000 files using sql loader.
My intention is to load the files in parallel. When I checked the maximum number of sessions in v$parameter, it is around 700.

Concurrent load, you mean? Parallel processing implies taking a workload, breaking it up into smaller workloads, and doing those in parallel. This is what the Parallel Query feature does in Oracle.
SQL*Loader does not do that for you. It uses a single session to load a single file. To make it run in parallel, requires manually starting multiple loader sessions and perform concurrent loads.
Have a look at Parallel Data Loading Models in the Oracle® Database Utilities guide. It goes into detail on how to perform concurrent loads. But you need to parallelise that workload yourself (as explained in the manual).
Before starting the data load, programmatically I am getting the number of current sessions and maximum number of sessions and keeping free 200 sessions without using them (max. no. of sessions minus 200 ) and utilizing the remaining ~300 sessions to load the files in parallel.
Also I am using a "wait" option to make the shell to wait until the 300 concurrent sql loader process to complete and moving further.
Is there any way to make it more efficient? Also, is it possible to reduce the wait time without hard-coding the seconds (for example: if any of those 300 sessions becomes free, assign the next file to the job queue, and so on)?

Consider doing it the way Parallel Query does (as mentioned above). Take the workload (all files). Break it up into smaller sub-workloads (e.g. 50 files per process). Start 100 processes in parallel and give each one a sub-workload to do (100 processes, each loading around 50 files).
This is a lot easier to manage than starting, for example, 5000 load processes and then trying some kind of delay method to ensure they don't all hit the database at the same time.
I'm loading about 100+ files (3+ million rows) every 60 seconds, 24x7, using SQL*Loader. Oracle is quite scalable and SQL*Loader quite capable.
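The fixed-size batch idea above can be sketched in shell with xargs -P, which keeps a bounded number of workers busy and starts the next file as soon as a slot frees up, so no hard-coded sleep is needed. This is a generic sketch: load_file is a placeholder that would invoke sqlldr in practice, the file names are made up, and xargs -P is a GNU/BSD extension:

```shell
#!/usr/bin/env bash
# Placeholder worker: in real use this would run something like
#   sqlldr userid=scott/tiger control=common.ctl data="$1" log="$1.log"
load_file() {
  echo "loading $1"
}
export -f load_file

# Feed the file list to xargs; -P 4 caps concurrency at 4 workers,
# and a new worker starts the moment one finishes.
printf '%s\n' a.dat b.dat c.dat d.dat e.dat |
  xargs -P 4 -I{} bash -c 'load_file "$@"' _ {}
```

Replacing the echo with the real sqlldr invocation and the printf with `ls *.dat` (or find) gives the 5000-file loop with bounded concurrency.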
Upload Multiple files in SQL * Loader in one session
Dear all,
I want to upload multiple files using SQL*Loader in one go. In Unix it's very easy, and I have quite a bit of experience with that. Could anybody tell me the way to upload multiple files in one go in Windows while using SQL*Loader? I want to run it using a DOS batch file.
Thanks
Ghulam

In Unix it's very easy, since we can use the $ positional parameters:
sqlldr userid=?????/?????
control="/u12/cad_delta.ctl",log="/u12/full_extract/$1",data="/u12/$1",bad="/u12//full_extract/$1",errors=500,silent=feedback
Any suggestion for Windows? It should read all the .DAT files in a Windows folder.
Thanks
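For Windows, a simple batch loop over the .DAT files in a folder might look like this (a hypothetical sketch; the credentials and control-file name are placeholders, and %%~nf expands to the file name without its extension):

```bat
@echo off
rem One sqlldr run per .DAT file in the current folder.
for %%f in (*.dat) do (
  sqlldr userid=scott/tiger control=cad_delta.ctl data="%%f" log="%%~nf.log" bad="%%~nf.bad" errors=500 silent=feedback
)
```

From the command line (outside a .bat file) the loop variable is written %f instead of %%f.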