SQL Loader: Multiple data files to Multiple Tables
How do you create one control file that references multiple data files, with each file loading into a different table?
Eg.
DataFile1 --> Table 1
DataFile2 --> Table 2
The contents and structure of the two data files are different. Both data files are comma-separated.
The example below handles one data file into one table. I need to modify it, or create a wrapper that calls multiple control files.
OPTIONS (SKIP=1)
LOAD DATA
INFILE 'DataFile1'
BADFILE 'DataFile1_bad.txt'
DISCARDFILE 'DataFile1_dsc.txt'
REPLACE
INTO TABLE Table1
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
Col1,
Col2,
Col3,
create_dttm SYSDATE,
MySeq "myseq.nextval"
)
Any other suggestions are welcome.
I was wondering whether there is a way to indicate, in a single control file, which file goes with which table (structure).
Example (this does not work, but I wonder whether something similar is allowed):
OPTIONS (SKIP=1)
LOAD DATA
INFILE 'DataFile1'
BADFILE 'DataFile1_bad.txt'
DISCARDFILE 'DataFile1_dsc.txt'
REPLACE
INTO TABLE Table1
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
Col1,
Col2,
Col3,
create_dttm SYSDATE,
MySeq "myseq.nextval"
)
INFILE 'DataFile2'
BADFILE 'DataFile2_bad.txt'
DISCARDFILE 'DataFile2_dsc.txt'
REPLACE
INTO TABLE "T2"
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
T2Col1,
T2Col2,
T2Col3
)
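As far as I know, a single control file cannot route different INFILE clauses to different tables: all INFILEs feed one logical input stream that every INTO TABLE clause sees. So the wrapper you mention is the usual answer. Below is a sketch of one, assuming the file/table pairing above (all names are placeholders, and the per-table column lists would differ in practice); the sqlldr call is left commented out so the script can run without a database:

```shell
#!/bin/sh
# Generate one control file per DataFile/Table pair, then (optionally)
# run sqlldr against each. Adjust the column list per table as needed.
for pair in "DataFile1:Table1" "DataFile2:Table2"; do
  datafile=${pair%%:*}
  table=${pair##*:}
  cat > "${datafile}.ctl" <<EOF
OPTIONS (SKIP=1)
LOAD DATA
INFILE '${datafile}'
BADFILE '${datafile}_bad.txt'
DISCARDFILE '${datafile}_dsc.txt'
REPLACE
INTO TABLE ${table}
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
Col1,
Col2,
Col3
)
EOF
  # sqlldr userid=scott/tiger control="${datafile}.ctl" log="${datafile}.log"
done
```

Each pair gets its own control file, so the column lists can differ freely, and one sqlldr invocation per file keeps the bad/discard files separate as well.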
Similar Messages
-
SQL*Loader Sequential Data File Record Processing?
If I use the conventional path, will SQL*Loader process a data file sequentially from top to bottom? I have a file of header and detail records, and the detail records contain no value that can be used to relate them to their header records. The only option is to derive a header value via a sequence (nextval) and then populate the detail records with the same value pulled from the same sequence (currval). But for this to work, SQL*Loader must process the file in exactly the order in which the data was written. I've read through the 11g Oracle Database Utilities SQL*Loader sections looking for confirmation, but haven't found it, and I don't want to assume that SQL*Loader will always process the data file records sequentially.
Thank you.
Oracle Support responded with the following statement:
"Yes, SQL*LOADER process data file from top to bottom.
This was touched in the note below:
SQL*Loader - How to Load a Single Logical Record from Physical Records which Include Linefeeds (Doc ID 160093.1)"
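Given that top-to-bottom guarantee, the header/detail case is commonly handled in a single control file with WHEN clauses and a shared sequence: nextval on the header record, currval on the details that follow it. A sketch, assuming a one-character record-type indicator in column 1 and hypothetical table/column names (conventional path, single-threaded, so file order is preserved):

```
LOAD DATA
INFILE 'hdr_dtl.dat'
APPEND
INTO TABLE header_t
WHEN (1:1) = 'H'
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
rec_type FILLER POSITION(1:1),
hdr_id   "hdr_seq.nextval",
hdr_text CHAR
)
INTO TABLE detail_t
WHEN (1:1) = 'D'
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
rec_type FILLER POSITION(1:1),
hdr_id   "hdr_seq.currval",
dtl_text CHAR
)
```

The POSITION(1:1) on the first field of the second INTO TABLE clause resets field scanning to the start of the record, which is needed whenever a control file has more than one INTO TABLE clause over delimited data.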
Jason -
SQL Loader - CSV data file with carriage returns and line feeds
Hi,
I have a CSV data file with occasional carriage returns and line feeds in the middle of records, which throws my SQL*Loader script off. SQL*Loader takes the characters following the carriage return as a new record and gives me an error. Is there a way to handle carriage returns and line feeds in SQL*Loader?
Please help. Thank you for your time.
This is my Sql Loader script.
load data
infile 'D:\Documents and Settings\user1\My Documents\infile.csv' "str '\r\n'"
append
into table MYSCHEMA.TABLE1
fields terminated by ','
OPTIONALLY ENCLOSED BY '"'
trailing nullcols
( NAME CHAR(4000),
field2 FILLER,
field3 FILLER,
TEST DEPT CHAR(4000)
)
You can "regexp_replace" the columns for special characters.
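To apply that suggestion inside the control file itself, the cleanup can be attached to a column as a SQL expression; a sketch using the NAME column from the script above:

```
NAME CHAR(4000) "regexp_replace(:NAME, '[[:cntrl:]]+', ' ')"
```

Note that the "str '\r\n'" record terminator already in the INFILE clause handles stray bare linefeeds inside records, as long as every real record ends with the full \r\n pair; the regexp_replace then scrubs any control characters that survive into the column value.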
-
SQL*Loader maximum data file size?
Hi - I wrote a SQL*Loader script, run through a shell script, which imports data into a table from a CSV file. The CSV file size is around 700 MB. I am using Oracle 10g on Sun Solaris 5.
My question is: is there a maximum data file size? The following code is from my shell script.
SQLLDR=
DB_USER=
DB_PASS=
DB_SID=
controlFile=
dataFile=
logFileName=
badFile=
${SQLLDR} userid=$DB_USER"/"$DB_PASS"@"$DB_SID \
control=$controlFile \
data=$dataFile \
log=$logFileName \
bad=$badFile \
direct=true \
silent=all \
errors=5000
Here is my control file code:
LOAD DATA
APPEND
INTO TABLE KEY_HISTORY_TBL
WHEN OLD_KEY <> ''
AND NEW_KEY <> ''
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
OLD_KEY "LTRIM(RTRIM(:OLD_KEY))",
NEW_KEY "LTRIM(RTRIM(:NEW_KEY))",
SYS_DATE "SYSTIMESTAMP",
STATUS CONSTANT 'C'
)
Thanks,
-Soma
Edited by: user4587490 on Jun 15, 2011 10:17 AM
Edited by: user4587490 on Jun 15, 2011 11:16 AM
Hello Soma.
How many records exist in your 700 MB CSV file? How many do you expect to process in 10 minutes? You may want to consider performing a set of simple unit tests with 1) 1 record, 2) 1,000 records, 3) 100 MB filesize, etc. to #1 validate that your shell script and control file syntax function as expected (including the writing of log files, etc.), and #2 gauge how long the processing will take for the full file.
Hope this helps,
Luke
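To make Luke's graduated unit tests concrete, the smaller files can simply be cut from the full CSV with head. A sketch (file names are made up, and a small synthetic CSV stands in for the real 700 MB file):

```shell
#!/bin/sh
# Synthesize a sample data file (stand-in for the real 700 MB CSV).
{ echo "OLD_KEY,NEW_KEY"; seq 1 5000 | awk '{print "K" $1 ",N" $1}'; } > full.csv

# Graduated slices for unit-testing the control file and shell script:
head -n 2    full.csv > test_1row.csv       # header + 1 record
head -n 1001 full.csv > test_1000rows.csv   # header + 1000 records
wc -l test_1row.csv test_1000rows.csv
```

Timing the 1,000-row load then gives a rough per-row cost to extrapolate to the full file.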
-
SQL Loader reads Data file Sequentially or Randomly?
Does SQL*Loader load the data read from the file sequentially or randomly?
I have a data file like the one below:
one
two
three
four
and my control file is
LOAD DATA
INFILE *
TRUNCATE
INTO TABLE T TRAILING NULLCOLS
(
x RECNUM,
y POSITION (1:4000)
)
So my table will be populated like:
X Y
1 one
2 Two
3 Three
4 Four
Will this happen sequentially even for large data sets? Say I have one million rows in my data file.
Please clarify.
Thanks,
Rajesh.
SQL*Loader may read the file sequentially, but you should not rely on the physical ordering of the rows in the table.
It looks like that's what you were hinting at. -
How to batch load multi data files to several tables
Hi,
One customer has such a data structure, with a large number of records (around 10 million). I think the proper approach is to convert them to data that SQL*Loader can recognize and then insert into Oracle 8 or 9. The question is how to convert.
Or maybe inserting them one by one is simpler?
1: Component of Data
The data file consists of nameplate and some records.
1.1 Structure of nameplate
ID datatype length(byte) comments
1 char 4
2 char 19
3 char 2
4 char 6 records in this file
5 char 8
1.2 structure of each record
ID datatype length(byte)
1 char 21
2 char 18
3 char 30
4 char 1
5 char 8
6 char 2
7 char 6
8 char 70
9 char 30
10 char 8
11 char 8
12 char 1
13 char 1
14 char 1
15 char 30
16 char 20
17 char 6
18 char 70
19 char 5
24 bin (blob) 1024
25 bin(blob) defined in ID19
2: data file and table spaces in database
dataID 1-13 of each record insert to table1,
14-18 to table2, and 19,24,25 to table3
Is there a method to convert them to some format that SQL*Loader can read and then load into Oracle 8 or 9 in one go?
I've checked the Oracle Utilities docs, but did not find a way to load so many data files in one batch operation.
As I see it, there are two options:
1. Load each of them individually into the different tables with some program. But speed may be a problem because of the repeated database connects and disconnects.
2. Convert them to one (or three) files, then use SQL*Loader.
Neither is easy, so I wonder if there's a better way to handle this.
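For option 2, conversion may not even be necessary: SQL*Loader can split each fixed-length record across several tables in one pass, using multiple INTO TABLE clauses with POSITION ranges. A sketch with byte offsets computed from the layout above (table and column names are hypothetical; the BLOB fields 24/25, one of them variable-length, would still need LOBFILEs or a small program):

```
LOAD DATA
INFILE 'datafile1.dat'
APPEND
INTO TABLE table1
(
f01 POSITION(1:21)  CHAR,
f02 POSITION(22:39) CHAR,
f03 POSITION(40:69) CHAR,
-- fields 4 through 13 continue the same pattern, ending at byte 204
f13 POSITION(204:204) CHAR
)
INTO TABLE table2
(
f14 POSITION(205:205) CHAR,
f15 POSITION(206:235) CHAR,
f16 POSITION(236:255) CHAR,
f17 POSITION(256:261) CHAR,
f18 POSITION(262:331) CHAR
)
INTO TABLE table3
(
f19 POSITION(332:336) CHAR
)
```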
Many thanks!
My coworker tried that, but it dragged down the portal.
How about to update WWDOC_DOCUMENT$ table, then use WWSBR_API.add_item_post_upload to update folder information etc.?
If possible, is there any sample code? -
Space allocation on 11g R2 on multiple data files in one tablespace
hello
If the following is explained in the Oracle 11g R2 documentation, please send a pointer; I can't find it myself right now.
my question is about space allocation (during inserts and during table data load) in one table space containing multiple data files.
suppose i have Oracle 11g R2 database and I am using OMF and Oracle ASM on Oracle Linux 64-bit.
I have one ASM disk group called ASMDATA with 50 ASM disks in it.
I have one tablespace called APPL_DATA with 50 data files, each file 20 GB (equal size), to contain one 1 TB table called MY_FACT_TABLE.
During an Import Data Pump run, or while the application is doing SQL inserts, how will Oracle allocate space for the table?
Will it fill up one data file completely and then start allocating from second file and so on, sequentially moving from file to file?
And when all files are full, which file will it try to autoextend (if they all allow autoextend) ?
Or will Oracle use some sort of proportional fill, like MS SQL Server does, i.e. allocate one extent from data file 1, the next extent from data file 2, ... and then wrap around again? In other words, it would keep all files equally allocated as much as possible, so that at any point in time they hold approximately the same amount of data (assuming the same initial size)?
Or some other way?
thanks.
On 10.2.0.4, regular data files, autoallocate, 8K blocks, I've noticed some unexpected things. I have an old, probably obsolete habit of making my data files 2G fixed, except for the last, which I make 200M autoextend, max 2G. What I see happening in normal operations is that the other files fill up in round-robin fashion, then the last file starts to grow. At that point it's obvious to me to extend the file to 2G, make it noautoextend, and add another file. My schemata tend to be in the 50G range, with one or two thousand tables. When I impdp, I notice it sorts them by size, importing the largest first. I never paid much attention to the smaller tables, since LMT algorithms seem good enough to simply not worry about it.
I just looked (with dbconsole's tablespace map) at a much smaller schema I imported not long ago, where the biggest table was 20M in 36 extents, the second 8M in 23 extents, and so on, around 200M total. I had made two data files, the first 2G and the second 200M autoextend. Looking at the impdp log, I see it isn't very strict about sorting by size, especially under 5M. So where did the 20M table it imported first end up? At the end of the autoextend file, with lots of free space below a few tables there. The 2G file seems to have a couple thousand blocks used, then 8K blocks free, 5K blocks used, 56K blocks free, 19K blocks used, 148K free (with a few tables scattered in the middle), 4K blocks used, and the rest free. Looking at a similar 8G schema, it looks like the largest tables got spread across the middle of the files, then the second largest next to those, and so forth, which is more what I expected.
I'm still not going to worry about it. Data distribution within the tables is something that might be important, where blocks on the disk are, not so much. I think that's why the docs are kind of ambiguous about the algorithm, it can change, and isn't all that important, unless you run into bugs. -
Split TempDB Data file into multiple files
Hey,
I have been seeing TempDB contention in memory on our SQL Server 2012 Enterprise Edition with SP2, and I need to split the TempDB data file into multiple files.
Could someone please help me to verify the following information:
1]
We are on SQL Server 2012 Enterprise Edition with Service Pack 2, but under SQL Server 2012 Enterprise Edition CAL licensing we are limited to 20 logical processors instead of 40. Our SQL Server is configured on NUMA nodes, and with that limitation SQL uses only 2 NUMA nodes in production. There are 10 logical CPUs evenly assigned to each NUMA node. Microsoft recommends that if SQL Server is configured on NUMA nodes and we have 2 NUMA nodes, we may add two data files at a time. Please let me know: should I add two TempDB data files at a time?
2] We have the TempDB data and log files on the same drive of the SQL Server. When I split TempDB into two data files, I can keep them on the same drive. What is your recommendation: should I create the TempDB data files on the same drive or on separate disks?
3] What would be the backout plan for splitting TempDB into multiple files? Please let me know if someone has a better backout plan than this:
1] Run a script that recreates the tempdb database with a single file.
2] Restart the SQL Server service in order to apply the change.
Your help will be appreciated.
Thanks ,
Daizy
Tom, I am seeing TempDB contention on the production server when there is a heavy load on SQL Server. We are also experiencing overall system slowness. Please look at the PAGELATCH wait statistics from our server below, and please advise.
wait_type      waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
PAGELATCH_UP   2680948              3609142       10500             508214
PAGELATCH_SH   1142213              1338451       8609              324538
PAGELATCH_NL   0                    0             0                 0
PAGELATCH_KP   0                    0             0                 0
PAGELATCH_EX   44852435             7798192       9886              6108374
PAGELATCH_DT   0                    0             0                 0
Thanks,
Daizy -
Splitting TempDB into multiple data files.
To avoid contention we have to split tempdb into multiple data files. But suppose there is 20 GB total space on the drive, holding one tempdb data file of 15 GB, and I have to create 3 more tempdb data files; as recommended, all files should be the same size. How do I handle this situation and configure all data files with the same size?
Pranshul Gupta
But as for case suppose, there is 20 GB total space on the drive containing 1 tempdb data file of 15 GB. And I have to create 3 more tempdb data files, and as recommendation all files should be of same size. Then how to handle this situation and configure all data files with same size?
So your goal is to have 4 tempdb files, each 5GB? Below is a sample script to accomplish the task within the 20GB space constraint.
--reduce size of existing file to 5GB
ALTER DATABASE tempdb
MODIFY FILE (NAME='tempdev', Size=5GB);
DBCC SHRINKFILE('tempdev',5120);
--add 3 new 5GB files
ALTER DATABASE tempdb
ADD FILE (NAME='tempdev2', FILENAME='D:\SqlDataFiles\tempdb2.ndf', Size=5GB);
ALTER DATABASE tempdb
ADD FILE (NAME='tempdev3', FILENAME='D:\SqlDataFiles\tempdb3.ndf', Size=5GB);
ALTER DATABASE tempdb
ADD FILE (NAME='tempdev4', FILENAME='D:\SqlDataFiles\tempdb4.ndf', Size=5GB);
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
hello everyone,
Is there any option to send multiple data files / music files using Bluetooth / WhatsApp / email?
The option to select multiple files using "SELECT in menu option / left aA + scroll trackpad" is available with pictures only.
And while receiving files via Bluetooth I'm unable to do any other activity.
One at a time, via Bluetooth.
-
How to load Unicode data files with fixed record lengths?
Hi!
To load Unicode data files with fixed record lengths (in terms of characters, not bytes!) using SQL*Loader manually, I found two ways:
Alternative 1: one record per row
SQL*Loader control file example (without POSITION, since POSITION always refers to bytes!):
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
INFILE unicode.dat
INTO TABLE STG_UNICODE
TRUNCATE
(
A CHAR(2) ,
B CHAR(6) ,
C CHAR(2) ,
D CHAR(1) ,
E CHAR(4)
)
Datafile:
001111112234444
01NormalDExZWEI
02ÄÜÖßêÊûÛxöööö
03ÄÜÖßêÊûÛxöööö
04üüüüüüÖÄxµôÔµ
Alternative 2: variable-length records
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
INFILE unicode_var.dat "VAR 4"
INTO TABLE STG_UNICODE
TRUNCATE
(
A CHAR(2) ,
B CHAR(6) ,
C CHAR(2) ,
D CHAR(1) ,
E CHAR(4)
)
Datafile:
001501NormalDExZWEI002702ÄÜÖßêÊûÛxöööö002604üuüüüüÖÄxµôÔµ
Problems
Implementing these two alternatives in OWB, I encounter the following problems:
* How to specify LENGTH SEMANTICS CHAR?
* How to suppress the POSITION definition?
* How to define a flat file with variable length and how to specify the number of bytes containing the length definition?
Or is there another way that can be implemented using OWB?
Any help is appreciated!
Thanks,
Carsten.
Hi Carsten,
If you need to support the LENGTH SEMANTICS CHAR clause in an external table then one option is to use the unbound external table and capture the access parameters manually. To create an unbound external table you can skip the selection of a base file in the external table wizard. Then when the external table is edited you will get an Access Parameters tab where you can define the parameters. In 11gR2 the File to Oracle external table can also add this clause via an option.
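For reference, the manually captured access parameters might look roughly like this (a sketch; directory, table, and column names are hypothetical -- note that ORACLE_LOADER expresses character-length semantics with STRING SIZES ARE IN CHARACTERS rather than a LENGTH SEMANTICS clause):

```sql
CREATE TABLE stg_unicode_ext (
  a CHAR(2), b CHAR(6), c CHAR(2), d CHAR(1), e CHAR(4)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET UTF8
    STRING SIZES ARE IN CHARACTERS
    FIELDS (
      a CHAR(2), b CHAR(6), c CHAR(2), d CHAR(1), e CHAR(4)
    )
  )
  LOCATION ('unicode.dat')
);
```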
Cheers
David -
How to load a XML file into a table
Hi,
I've been working with Oracle for many years, but for the first time I've been asked to load an XML file into a table.
As an example, I found this on the web, but it doesn't work.
Can someone tell me why? I hoped this example could help me.
the file acct.xml is this:
<?xml version="1.0"?>
<ACCOUNT_HEADER_ACK>
<HEADER>
<STATUS_CODE>100</STATUS_CODE>
<STATUS_REMARKS>check</STATUS_REMARKS>
</HEADER>
<DETAILS>
<DETAIL>
<SEGMENT_NUMBER>2</SEGMENT_NUMBER>
<REMARKS>rp polytechnic</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>3</SEGMENT_NUMBER>
<REMARKS>rp polytechnic administration</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>4</SEGMENT_NUMBER>
<REMARKS>rp polytechnic finance</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>5</SEGMENT_NUMBER>
<REMARKS>rp polytechnic logistics</REMARKS>
</DETAIL>
</DETAILS>
<HEADER>
<STATUS_CODE>500</STATUS_CODE>
<STATUS_REMARKS>process exception</STATUS_REMARKS>
</HEADER>
<DETAILS>
<DETAIL>
<SEGMENT_NUMBER>20</SEGMENT_NUMBER>
<REMARKS> base polytechnic</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>30</SEGMENT_NUMBER>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>40</SEGMENT_NUMBER>
<REMARKS> base polytechnic finance</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>50</SEGMENT_NUMBER>
<REMARKS> base polytechnic logistics</REMARKS>
</DETAIL>
</DETAILS>
</ACCOUNT_HEADER_ACK>
For the two tags HEADER and DETAILS I have the table:
create table xxrp_acct_details(
status_code number,
status_remarks varchar2(100),
segment_number number,
remarks varchar2(100)
);
Before that, I created a directory:
create directory test_dir as 'c:\esterno'; -- where I have my acct.xml
And after that, can you give me a script for loading the data using XMLTABLE?
I've tried this but it doesn't work:
DECLARE
acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
BEGIN
insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
passing acct_doc as "d",
x1.header_no as "hn"
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2;
END;
This should allow me to get something like this:
select * from xxrp_acct_details;
Statuscode status remarks segement remarks
100 check 2 rp polytechnic
100 check 3 rp polytechnic administration
100 check 4 rp polytechnic finance
100 check 5 rp polytechnic logistics
500 process exception 20 base polytechnic
500 process exception 30
500 process exception 40 base polytechnic finance
500 process exception 50 base polytechnic logistics
but I get:
Error report:
ORA-06550: line 19, column 11:
PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
ORA-06550: line 4, column 2:
PL/SQL: SQL Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
and if I try to change the script without using the column HEADER_NO to keep track of the header rank inside the document:
DECLARE
acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
BEGIN
insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'/ACCOUNT_HEADER_ACK/DETAILS'
passing acct_doc
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2;
END;
I get this message:
Error report:
ORA-19114: error during parsing the XQuery expression:
ORA-06550: line 1, column 13:
PLS-00201: identifier 'SYS.DBMS_XQUERYINT' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at line 4
19114. 00000 - "error during parsing the XQuery expression: %s"
*Cause: An error occurred during the parsing of the XQuery expression.
*Action: Check the detailed error message for the possible causes.
My Oracle version is 10gR2 Express Edition.
I need a script for loading XML files into a table as soon as possible. Please give me a simple example that is easy to understand and that works on 10gR2 Express Edition.
Thanks in advance!
The reason your first SQL statement
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
passing acct_doc as "d",
x1.header_no as "hn"
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2
returns the error you noticed
PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
is because Oracle is expecting XML to be passed in. At the moment I forget if it requires a certain format or not, but it is simply expecting the value to be wrapped in simple XML.
Your query actually runs as is on 11.1 as Oracle changed how that functionality worked when 11.1 was released. Your query runs slowly, but it does run.
As you are dealing with groups, is there any way the input XML can be modified to be like
<ACCOUNT_HEADER_ACK>
<ACCOUNT_GROUP>
<HEADER>....</HEADER>
<DETAILS>....</DETAILS>
</ACCOUNT_GROUP>
<ACCOUNT_GROUP>
<HEADER>....</HEADER>
<DETAILS>....</DETAILS>
</ACCOUNT_GROUP>
</ACCOUNT_HEADER_ACK>
so that it is easier to associate a HEADER/DETAILS combination? If so, it would make parsing the XML much easier.
Assuming the answer is no, here is one hack to accomplish your goal
select x1.status_code,
x1.status_remarks,
x3.segment_number,
x3.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS'
passing acct_doc as "d"
columns detail_no for ordinality,
detail_xml xmltype path 'DETAIL'
) x2,
xmltable(
'DETAIL'
passing x2.detail_xml
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS') x3
WHERE x1.header_no = x2.detail_no;
This follows the approach you started with. Table x1 creates a row for each HEADER node and table x2 creates a row for each DETAILS node. It assumes there is always a one and only one association between the two. I use table x3, which is joined to x2, to parse the many DETAIL nodes. The WHERE clause then joins each header row to the corresponding details row and produces the eight rows you are seeking.
There is another approach that I know of, and that would be using XQuery within the XMLTable. It should require using only one XMLTable but I would have to spend some time coming up with that solution and I can't recall whether restrictions exist in 10gR2 Express Edition compared to what can run in 10.2 Enterprise Edition for XQuery. -
"how to load a text file to oracle table"
hi to all
Can anybody help me with "how to load a text file into an Oracle table"? This is the first time I am doing it; please give me the steps.
Regards
MKhaleel
Usage: SQLLOAD keyword=value [,keyword=value,...]
Valid Keywords:
userid -- ORACLE username/password
control -- Control file name
log -- Log file name
bad -- Bad file name
data -- Data file name
discard -- Discard file name
discardmax -- Number of discards to allow (Default all)
skip -- Number of logical records to skip (Default 0)
load -- Number of logical records to load (Default all)
errors -- Number of errors to allow (Default 50)
rows -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all)
bindsize -- Size of conventional path bind array in bytes (Default 256000)
silent -- Suppress messages during run (header, feedback, errors, discards, partitions)
direct -- use direct path (Default FALSE)
parfile -- parameter file: name of file that contains parameter specifications
parallel -- do parallel load (Default FALSE)
file -- File to allocate extents from
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
readsize -- Size of Read buffer (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
(Default NOT_USED)
columnarrayrows -- Number of rows for direct path column array (Default 5000)
streamsize -- Size of direct path stream buffer in bytes (Default 256000)
multithreading -- use multithreading in direct path
resumable -- enable or disable resumable for current session (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.
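Putting that together, a minimal end-to-end run for a comma-separated text file needs only a small control file and one command (all names below are placeholders). Save something like this as emp.ctl:

```
LOAD DATA
INFILE 'emp.txt'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
empno,
ename,
sal
)
```

Then run: sqlldr userid=scott/tiger control=emp.ctl log=emp.log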
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\PFS2004.CTL LOG=D:\PFS2004.LOG BAD=D:\PFS2004.BAD DATA=D:\PFS2004.CSV
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\CLAB2004.CTL LOG=D:\CLAB2004.LOG BAD=D:\CLAB2004.BAD DATA=D:\CLAB2004.CSV
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CTL LOG=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.LOG BAD=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.BAD DATA=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CSV -
How to load duplicate data to a temporary table in ssis
I have duplicate data in my table. I want to load the unique records into one destination, and the duplicate data into a temporary table in another destination. How can we implement a package for this?
Hi V60,
To achieve your goal, you can use the following two approaches:
Use Script Component to redirect the duplicate rows.
Use Fuzzy Grouping Transformation which performs data cleaning tasks by identifying rows of data that are likely to be duplicates and selecting a canonical row of data to use in standardizing the data. Then, use a Conditional Split Transform to redirect
the unique rows and the duplicate rows to different destinations.
For the step-by-step guidance about the above two methods, walk through the following blogs:
http://microsoft-ssis.blogspot.in/2011/12/redirect-duplicate-rows.html
http://hussain-msbi.blogspot.in/2013/02/redirect-duplicate-rows-using-ssis-step.html
Regards,
Mike Yin
TechNet Community Support -
Loading an XML file into a table without creating a directory
Hi,
I want to load an XML file into a table column, but I should not create a directory on the server, place the XML file there, and give that path in the insert query. Can anybody help me here?
Thanks in advance.
You could write a Java stored procedure that retrieves the file into a CLOB. Wrap that in a function call and use it in your insert statement.
This solution requires read privileges granted by SYS and is therefore only feasible if the top-level directory/directories are known or you get read access to everything.