SQL*Loader: map multiple files to multiple tables
Can a single control file map multiple files to multiple different tables? If so, what does the syntax look like? I've tried variations of the following, but haven't hit the jackpot yet.
Also, I understand that a direct load will automatically turn off most constraint checking. I'd like to turn this back on when I'm done loading all tables. How/when do I do that? I can find multiple references to 'REENABLE DISABLED CONSTRAINTS', but I don't know where to say that.
TIA.
LOAD DATA
INFILE 'first.csv'
TRUNCATE
INTO TABLE first_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(a,b,c)
INFILE 'second.csv'
TRUNCATE
INTO TABLE second_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(x,y,z,xx,yy,zz)
etc.
Here is what you want:
http://www.psoug.org/reference/sqlloader.html
LOAD DATA
INFILE 'c:\temp\demo09a.dat'
INFILE 'c:\temp\demo09b.dat'
APPEND
INTO TABLE denver_prj
WHEN projno = '101' (
projno position(1:3) CHAR,
empno position(4:8) INTEGER EXTERNAL,
projhrs position(9:10) INTEGER EXTERNAL)
INTO TABLE orlando_prj
WHEN projno = '202' (
projno position(1:3) CHAR,
empno position(4:8) INTEGER EXTERNAL,
projhrs position(9:10) INTEGER EXTERNAL)
INTO TABLE misc_prj
WHEN projno != '101' AND projno != '202' (
projno position(1:3) CHAR,
empno position(4:8) INTEGER EXTERNAL,
projhrs position(9:10) INTEGER EXTERNAL)
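On the REENABLE DISABLED_CONSTRAINTS question: it is specified per table, inside the INTO TABLE clause, and it only takes effect for direct path loads. A minimal sketch using the first table from your own example (columns a, b, c as in your post):

```
LOAD DATA
INFILE 'first.csv'
TRUNCATE
INTO TABLE first_table
REENABLE DISABLED_CONSTRAINTS
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(a,b,c)
```

You would then run sqlldr with DIRECT=TRUE; any constraint that fails validation when re-enabled is left disabled and noted in the log file.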
Thanks
Aravindh
Similar Messages
-
Loading data from multiple files to multiple tables
How should I approach creating an SSIS package to load data from multiple files into multiple tables? Also, files will have data which might overlap, so I might have to create a stored procedure for it. E.g. the first day's file has data from Aug 1 to Aug 10, and the second day's
file might have data from Aug 5 to Aug 15. So I might have to look for the max and min dates and truncate the table within that date range.That's fine. A ForEachLoop would be able to iterate through the files. You can declare a variable inside the loop to capture the filenames. Choose fully qualified as the option in the loop
Then inside loop
1. Add execute sql task to delete overlapping data from the table. One question here is where will you get date from? Does it come inside filename?
2. Add a data flow task with file source pointing to file .For this add a suitable connection manager (Excel/Flat file etc) and map the connection string property to filename variable using expressions
3. Add a OLEDB Destination to point to table. You can use table or view from variable - fast load option and map to variable to make tablename dynamic and just set corresponding value for the variable to get correct tablename
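For step 1, the Execute SQL Task could run a parameterized delete over the overlapping window. A rough sketch, assuming the target has a load_date column and the incoming file's min/max dates have been read into package variables (the table, column, and variable names here are made up for illustration):

```sql
-- Remove rows in the target that overlap the incoming file's date range,
-- so the rows re-loaded from the new file win.
DELETE FROM dbo.target_table
WHERE load_date BETWEEN ? AND ?;  -- map parameters 0 and 1 to the
                                  -- MinDate / MaxDate package variables
```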
Please Mark This As Answer if it helps to solve the issue
Visakh
http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs -
How to define mapping from multiple files to Oracle Tables in 9i
Around 100-200 Flat files are created every 30 minutes and each filename is different - Filename has datetime Stamp as part of the file name apart from the product code as first 12 characters.
Can anyone guide me in How to define mappings to these files using OWB ?
What I can do is consolidate all files into one known single file name and map that file to Oracle tables, which I don't want to do because I need to reject erroneous files.
Can anyone provide me some tips on this ?
Thanks in Advance.
Sohan.

As you know, in OWB you need to define the flat file source in a 'static' way (name, location, etc. have to be defined previously), so you cannot deal directly with dynamically generated file names. One solution would be to consolidate them into a single file (which you can define statically in OWB), but prefix every record with the filename. That way it is easy to understand which file the rejected records came from. If you are using Unix, it is very easy to write a script to do this. Something like this will do:
awk '{printf "%s,%s\n",FILENAME,$0}' yourfilename >> onefile
where yourfilename is the name of the file you are currently processing, while onefile is the name of the consolidated file. You can run this for all files in your directory by substituting yourfilename with * .
You can then disregard the file name field in OWB, while processing the rejected records based on the file name prefix by using unix utilities like grep and similar.
Regards:
Igor -
How to load an XML file into a table
Hi,
I've been working on Oracle for many years, but for the first time I was asked to load an XML file into a table.
As an example, I've found this on the web, but it doesn't work
Can someone tell me why? I hoped this example could help me.
the file acct.xml is this:
<?xml version="1.0"?>
<ACCOUNT_HEADER_ACK>
<HEADER>
<STATUS_CODE>100</STATUS_CODE>
<STATUS_REMARKS>check</STATUS_REMARKS>
</HEADER>
<DETAILS>
<DETAIL>
<SEGMENT_NUMBER>2</SEGMENT_NUMBER>
<REMARKS>rp polytechnic</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>3</SEGMENT_NUMBER>
<REMARKS>rp polytechnic administration</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>4</SEGMENT_NUMBER>
<REMARKS>rp polytechnic finance</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>5</SEGMENT_NUMBER>
<REMARKS>rp polytechnic logistics</REMARKS>
</DETAIL>
</DETAILS>
<HEADER>
<STATUS_CODE>500</STATUS_CODE>
<STATUS_REMARKS>process exception</STATUS_REMARKS>
</HEADER>
<DETAILS>
<DETAIL>
<SEGMENT_NUMBER>20</SEGMENT_NUMBER>
<REMARKS> base polytechnic</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>30</SEGMENT_NUMBER>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>40</SEGMENT_NUMBER>
<REMARKS> base polytechnic finance</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>50</SEGMENT_NUMBER>
<REMARKS> base polytechnic logistics</REMARKS>
</DETAIL>
</DETAILS>
</ACCOUNT_HEADER_ACK>
For the two tags HEADER and DETAILS I have the table:
create table xxrp_acct_details(
status_code number,
status_remarks varchar2(100),
segment_number number,
remarks varchar2(100)
);
Beforehand I created a directory:
create directory test_dir as 'c:\esterno'; -- where I have my acct.xml
and after, can you give me a script for loading data by using XMLTABLE?
I've tried this but it doesn't work:
DECLARE
acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
BEGIN
insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
passing acct_doc as "d",
x1.header_no as "hn"
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2;
END;
This should allow me to get something like this:
select * from xxrp_acct_details;
STATUS_CODE STATUS_REMARKS SEGMENT_NUMBER REMARKS
100 check 2 rp polytechnic
100 check 3 rp polytechnic administration
100 check 4 rp polytechnic finance
100 check 5 rp polytechnic logistics
500 process exception 20 base polytechnic
500 process exception 30
500 process exception 40 base polytechnic finance
500 process exception 50 base polytechnic logistics
but I get:
Error report:
ORA-06550: line 19, column 11:
PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
ORA-06550: line 4, column 2:
PL/SQL: SQL Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
and if I try to change the script without using the column HEADER_NO to keep track of the header rank inside the document:
DECLARE
acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
BEGIN
insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'/ACCOUNT_HEADER_ACK/DETAILS'
passing acct_doc
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2;
END;
I get this message:
Error report:
ORA-19114: error during parsing the XQuery expression:
ORA-06550: line 1, column 13:
PLS-00201: identifier 'SYS.DBMS_XQUERYINT' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at line 4
19114. 00000 - "error during parsing the XQuery expression: %s"
*Cause: An error occurred during the parsing of the XQuery expression.
*Action: Check the detailed error message for the possible causes.
My oracle version is 10gR2 Express Edition
I do need a script for loading XML files into a table as soon as possible. Please give me a simple example that is easy to understand and that works on 10gR2 Express Edition.
Thanks in advance!

The reason your first SQL statement
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
passing acct_doc as "d",
x1.header_no as "hn"
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2
returns the error you noticed
PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
is because Oracle is expecting XML to be passed in. At the moment I forget if it requires a certain format or not, but it is simply expecting the value to be wrapped in simple XML.
Your query actually runs as is on 11.1 as Oracle changed how that functionality worked when 11.1 was released. Your query runs slowly, but it does run.
As you are dealing with groups, is there any way the input XML can be modified to be like
<ACCOUNT_HEADER_ACK>
<ACCOUNT_GROUP>
<HEADER>....</HEADER>
<DETAILS>....</DETAILS>
</ACCOUNT_GROUP>
<ACCOUNT_GROUP>
<HEADER>....</HEADER>
<DETAILS>....</DETAILS>
</ACCOUNT_GROUP>
</ACCOUNT_HEADER_ACK>
so that it is easier to associate a HEADER/DETAILS combination? If so, it would make parsing the XML much easier.
Assuming the answer is no, here is one hack to accomplish your goal
select x1.status_code,
x1.status_remarks,
x3.segment_number,
x3.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS'
passing acct_doc as "d"
columns detail_no for ordinality,
detail_xml xmltype path 'DETAIL'
) x2,
xmltable(
'DETAIL'
passing x2.detail_xml
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS') x3
WHERE x1.header_no = x2.detail_no;
This follows the approach you started with. Table x1 creates a row for each HEADER node and table x2 creates a row for each DETAILS node. It assumes there is always a one and only one association between the two. I use table x3, which is joined to x2, to parse the many DETAIL nodes. The WHERE clause then joins each header row to the corresponding details row and produces the eight rows you are seeking.
There is another approach that I know of, and that would be using XQuery within the XMLTable. It should require using only one XMLTable, but I would have to spend some time coming up with that solution, and I can't recall whether restrictions exist in 10gR2 Express Edition compared to what can run in 10.2 Enterprise Edition for XQuery. -
"how to load a text file to oracle table"
hi to all
can anybody help me with "how to load a text file into an Oracle table"? This is the first time I am doing this; please give me the steps.
Regards
MKhaleel

Usage: SQLLDR keyword=value [,keyword=value,...]
Valid Keywords:
userid -- ORACLE username/password
control -- Control file name
log -- Log file name
bad -- Bad file name
data -- Data file name
discard -- Discard file name
discardmax -- Number of discards to allow (Default all)
skip -- Number of logical records to skip (Default 0)
load -- Number of logical records to load (Default all)
errors -- Number of errors to allow (Default 50)
rows -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all)
bindsize -- Size of conventional path bind array in bytes (Default 256000)
silent -- Suppress messages during run (header, feedback, errors, discards, partitions)
direct -- use direct path (Default FALSE)
parfile -- parameter file: name of file that contains parameter specifications
parallel -- do parallel load (Default FALSE)
file -- File to allocate extents from
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
readsize -- Size of Read buffer (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
(Default NOT_USED)
columnarrayrows -- Number of rows for direct path column array (Default 5000)
streamsize -- Size of direct path stream buffer in bytes (Default 256000)
multithreading -- use multithreading in direct path
resumable -- enable or disable resumable for current session (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\PFS2004.CTL LOG=D:\PFS2004.LOG BAD=D:\PFS2004.BAD DATA=D:\PFS2004.CSV
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\CLAB2004.CTL LOG=D:\CLAB2004.LOG BAD=D:\CLAB2004.BAD DATA=D:\CLAB2004.CSV
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CTL LOG=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.LOG BAD=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.BAD DATA=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CSV -
SQL*Loader Sequential Data File Record Processing?
If I use the conventional path will SQL*Loader process a data file sequentially from top to bottom? I have a file comprised of header and detail records with no value found in the detail records that can be used to relate to the header records. The only option is to derive a header value via a sequence (nextval) and then populate the detail records with the same value pulled from the same sequence (currval). But for this to work SQL*Loader must process the file in the exact same sequence that the data has been written to the data file. I've read through the 11g Oracle® Database Utilities SQL*Loader sections looking for proof that this is what will happen but haven't found this information and I don't want to assume that SQL*Loader will always process the data file records sequentially.
Thank you

Oracle Support responded with the following statement:
"Yes, SQL*LOADER process data file from top to bottom.
This was touched in the note below:
SQL*Loader - How to Load a Single Logical Record from Physical Records which Include Linefeeds (Doc ID 160093.1)"
Jason -
Un-'Locking' multiple files in multiple folders....
So I just spent 2 hours at the 'Genius' bar manually 'unlocking' hundreds if not thousands of photos in my 'iPhoto' (05) library in order to upgrade to iPhoto '06... apparently, when I imported some pictures from Windows... it brought them in as 'locked' (i.e. when you 'get info...' on a file, and it has the 'Locked' check-box checked), well apparently, in order to upgrade to iPhoto '06, you have to 'unlock' every single file... given the way that iPhoto stores your files, if you've got a few locked, scattered throughout your library, it's going to take a LONG time to find them all and unlock them.
My question for the Mac OS X specialists at large is: "Is there a way to unlock multiple files in multiple directories and folders WITHOUT doing it manually?" I can't believe I actually sat at a genius bar and did this for TWO hours. According to my Genius, that was the only way to do it. Are there any other Geniuses out there who may have a differing opinion? Keep in mind, he was one of 2 or 3 native English speakers (here in the very busy Shibuya, Tokyo store). Any solutions would be appreciated, because it appears I've got multiple files in my MacBook iPhoto library that are 'locked' and I'd like to find an easier way to unlock them.
FYI, we went to finder and searched for 'other' "Files Write Protected" etc., but we were never able to find ONLY files that are locked... is there a better way? Surely there has to be. Looking forward to learning something new.
Hal W.
Tokyo, Japan
Mac Mini, MacBook 13.3" Mac OS X (10.4.7)

There's also a Terminal command that will work:
Launch Terminal from your utilities folder, and enter this command:
find /Users/yourname/Pictures/"iPhoto Library"/ -flags uchg -exec chflags nouchg {} \;
Be sure you get the spaces right, including the ones before and after the curly braces {}, and note there is no space between \ and ; and it should work just fine. You may want to just copy and paste the above into a text program, fill in your short user name, then copy and paste into Terminal. And it must be all one line. After you've entered the command, hit the Return key to execute it. It will look in your iPhoto Library folders for all files that have the locked flag, then change the flag to unlocked.
Francine Schwieder -
Execute SQL*Loader mapping
Hi all,
I'm trying to execute a deployed OWB SQL*Loader mapping, using the oem_exec_template.sql script. I've got the following error:
Stage 1: Decoding Parameters
| location_name=ORA_LOC_DWH
| task_type=SQLLoader
| task_name=MAP_SA_AGGK_FEVO
Stage 2: Opening Task
declare
ERROR at line 1:
ORA-20001: Task not found - Please check the Task Type, Name and Location are
correct.
ORA-06512: at line 268
I can execute the mapping from the OWB client, and I also have no problems executing a PL/SQL mapping via that script.
Did anybody use this script for a SQL*Loader mapping before?
Regards Uwe

Hi Jean-Pierre,
the names of the location and the mapping should be OK. Only the mapping STEP_TYPE seems to differ (UPPERCASE) from the one used inside your script.
OMB+> OMBRETRIEVE ORACLE_MODULE 'ORA_DWH_SA' GET REF LOCATION
ORA_LOC_DWH
OMB+> OMBCC 'ORA_DWH_SA'
Context changed.
OMB+> OMBLIST MAPPINGS
MAP_SA_AGGK_FEVO MAP_SA_AGGK_KK_KONTO MAP_SA_AGGK_KK_KUNDE MAP_SA_BCV_YT
OMB+> OMBRETRIEVE MAPPING 'MAP_SA_AGGK_FEVO' GET PROPERTIES (STEP_TYPE)
SQLLOADER
The mapping is deployed, otherwise i couldn't execute the mapping out of the OWB client.
Regards Uwe -
Loading an XML file into the table without creating a directory .
Hi,
I wanted to load an XML file into a table column, but I should not create a directory on the server, place the XML file there, and give the path in the insert query. Can anybody help me here?
Thanks in advance.

You could write a java stored procedure that retrieves the file into a clob. Wrap that in a function call and use it in your insert statement.
This solution requires read privileges granted by SYS and is therefore only feasible if the top-level directory/directories are known or you get read access to everything. -
Mapping and loading single ASCII file into multiple tables in ODI
We get an ASCII file that contains several different transactions (records) and I need to validate and map each record to different table in the target database using Oracle Data Integrator tool. Is it possible ? If so, how and how difficult it is ?
I would appreciate a quick response.
Thanks,
Ram

Hi Madha,
Using Demo version, we are trying to load data from ASCII file. When trying to execute, we are getting the following error:
7000 : null : com.sunopsis.jdbc.driver.file.a.i
com.sunopsis.jdbc.driver.file.a.i
at com.sunopsis.jdbc.driver.file.a.f.getColumnClassName(f.java)
at com.sunopsis.sql.e.a(e.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
The file has all text fields and no date fields. we made sure that there is data for all fields being loaded.
The question is whether there is any problem in creating data in the Demo database that came up with the installation?
I mean, do we need any privileges to insert/add/delete? We are running this as user SUPERVISOR.
I appreciate if you can respond to this.
thanks,
Ram -
Using SQL*Loader to load a .csv file having multiple CLOBs
Oracle 8.1.5 on Solaris 2.6
I want to use SQL*Loader to load a .CSV file that has 4 inline CLOB columns. I shall attempt to give some information about the problem:
1. The CLOBs are not delimited at the field level and could themselves contain commas.
2. I cannot get the data file in any other format.
Can anybody help me out with this? While loading LOB in predetermined size fields, is there a limit on the size?
TIA.
-Murali

Thanks for the article link. The article states "...the loader can load only XMLType tables, not columns." Is this still the case with 10g R2? If so, what is the best way to work around this problem? I am migrating data from a Sybase table that contains a TEXT column (among others) to an Oracle table that contains an XMLType column. How do you recommend I accomplish this task?
- Ron -
SQL Loader Inserting Log File Statistics to a table
Hello.
I'm contemplating how to approach gathering the statistics from the SQL Loader log file to insert them into a table. I've approached this from a Korn Shell Script perspective previously, but now that I'm working in a Windows environment and my peers aren't keen about batch files and scripting I thought I'd attempt to use SQL Loader itself to read the log file and insert one or more records into a table that tracks data uploads. Has anyone created a control file that accomplishes this?
My current environment:
Windows 2003 Server
SQL*Loader: Release 10.2.0.1.0
Thanks,
LukeHello.
Learned a little about inserting into multiple tables with delimited records. Here is my current tested control file:
LOAD DATA
APPEND
INTO TABLE upload_log
WHEN (1:12) = 'SQL*Loader: '
FIELDS TERMINATED BY WHITESPACE
TRAILING NULLCOLS
( upload_log_id RECNUM
, filler_field_0 FILLER
, filler_field_1 FILLER
, filler_field_2 FILLER
, filler_field_3 FILLER
, filler_field_4 FILLER
, filler_field_5 FILLER
, day_of_week
, month
, day_of_month
, time_of_day
, year
, log_started_on "TO_DATE((:month ||' '|| :day_of_month ||' '|| :time_of_day ||' '|| :year), 'Mon DD HH24:MI:SS YYYY')"
)
INTO TABLE upload_log
WHEN (1:11) = 'Data File: '
FIELDS TERMINATED BY ':'
( upload_log_id RECNUM
, filler_field_0 FILLER POSITION(1)
, input_file_name "TRIM(:input_file_name)"
)
INTO TABLE upload_log
WHEN (1:6) = 'Table '
FIELDS TERMINATED BY WHITESPACE
( upload_log_id RECNUM
, filler_field_0 FILLER POSITION(1)
, table_name "RTRIM(:table_name, ',')"
)
INTO TABLE upload_rejects
WHEN (1:7) = 'Record '
FIELDS TERMINATED BY ':'
( upload_rejects_id RECNUM
, record_number POSITION(1) "TO_NUMBER(SUBSTR(:record_number,8,20))"
, reason
)
INTO TABLE upload_rejects
WHEN (1:4) = 'ORA-'
FIELDS TERMINATED BY ':'
( upload_rejects_id RECNUM
, error_code POSITION(1)
, error_desc
)
INTO TABLE upload_log
WHEN (1:22) = 'Total logical records '
FIELDS TERMINATED BY WHITESPACE
( upload_log_id RECNUM
, filler_field_0 FILLER POSITION(1)
, filler_field_1 FILLER
, filler_field_2 FILLER
, action "RTRIM(:action, ':')"
, number_of_records
)
INTO TABLE upload_log
WHEN (1:13) = 'Run began on '
FIELDS TERMINATED BY WHITESPACE
TRAILING NULLCOLS
( upload_log_id RECNUM
, filler_field_0 FILLER POSITION(1)
, filler_field_1 FILLER
, filler_field_2 FILLER
, day_of_week
, month
, day_of_month
, time_of_day
, year
, run_began_on "TO_DATE((:month ||' '|| :day_of_month ||' '|| :time_of_day ||' '|| :year), 'Mon DD HH24:MI:SS YYYY')"
)
INTO TABLE upload_log
WHEN (1:13) = 'Run ended on '
FIELDS TERMINATED BY WHITESPACE
TRAILING NULLCOLS
( upload_log_id RECNUM
, filler_field_0 FILLER POSITION(1)
, filler_field_1 FILLER
, filler_field_2 FILLER
, day_of_week
, month
, day_of_month
, time_of_day
, year
, run_ended_on "TO_DATE((:month ||' '|| :day_of_month ||' '|| :time_of_day ||' '|| :year), 'Mon DD HH24:MI:SS YYYY')"
)
INTO TABLE upload_log
WHEN (1:18) = 'Elapsed time was: '
FIELDS TERMINATED BY ':'
( upload_log_id RECNUM
, filler_field_0 FILLER POSITION(1)
, filler_field_1 FILLER
, filler_field_2 FILLER
, elapsed_time
)
INTO TABLE upload_log
WHEN (1:14) = 'CPU time was: '
FIELDS TERMINATED BY ':'
( upload_log_id RECNUM
, filler_field_0 FILLER POSITION(1)
, filler_field_1 FILLER
, filler_field_2 FILLER
, cpu_time
)
Here are the basic table create scripts:
TRUNCATE TABLE upload_log;
DROP TABLE upload_log;
CREATE TABLE upload_log
( upload_log_id INTEGER
, day_of_week VARCHAR2( 3)
, month VARCHAR2( 3)
, day_of_month INTEGER
, time_of_day VARCHAR2( 8)
, year INTEGER
, log_started_on DATE
, input_file_name VARCHAR2(255)
, table_name VARCHAR2( 30)
, action VARCHAR2( 10)
, number_of_records INTEGER
, run_began_on DATE
, run_ended_on DATE
, elapsed_time VARCHAR2( 8)
, cpu_time VARCHAR2( 8)
);
TRUNCATE TABLE upload_rejects;
DROP TABLE upload_rejects;
CREATE TABLE upload_rejects
( upload_rejects_id INTEGER
, record_number INTEGER
, reason VARCHAR2(255)
, error_code VARCHAR2( 9)
, error_desc VARCHAR2(255)
);
Now, if I could only insert a single record into the upload_log table (per table logged), adding separate columns for skipped, read, rejected, and discarded quantities. Any advice on how to use SQL*Loader to do this (writing a procedure would be fairly simple, but I'd like to perform all of the work in one place if at all possible)?
Thanks,
Luke
Edited by: Luke Mackey on Nov 12, 2009 4:28 PM -
How to load an XML file into a table using PL/SQL
Hi Guru,
I have a requirement that I have to create a procedure or a package in PL/SQL to load an XML file into a table.
How can we achieve this?

ODI_NewUser wrote:
Hi Guru,
I have a requirement that I have to create a procedure or a package in PL/SQL to load an XML file into a table.
How can we achieve this?
Not a perfectly framed question. How do you want to load the XML file? Assuming you want to parse the XML file and load it into a table, you can do it like this.
This is the xml file
karthick% cat emp_details.xml
<?xml version="1.0"?>
<ROWSET>
<ROW>
<EMPNO>7782</EMPNO>
<ENAME>CLARK</ENAME>
<JOB>MANAGER</JOB>
<MGR>7839</MGR>
<HIREDATE>09-JUN-1981</HIREDATE>
<SAL>2450</SAL>
<COM>0</COM>
<DEPTNO>10</DEPTNO>
</ROW>
<ROW>
<EMPNO>7839</EMPNO>
<ENAME>KING</ENAME>
<JOB>PRESIDENT</JOB>
<HIREDATE>17-NOV-1981</HIREDATE>
<SAL>5000</SAL>
<COM>0</COM>
<DEPTNO>10</DEPTNO>
</ROW>
</ROWSET>
You can write a query like this.
SQL> select *
2 from xmltable
3 (
4 '/ROWSET/ROW' passing xmltype
5 (
6 bfilename('SDAARBORDIRLOG', 'emp_details.xml')
7 , nls_charset_id('AL32UTF8')
8 )
9 columns empno number path 'EMPNO'
10 , ename varchar2(6) path 'ENAME'
11 , job varchar2(9) path 'JOB'
12 , mgr number path 'MGR'
13 , hiredate varchar2(20) path 'HIREDATE'
14 , sal number path 'SAL'
15 , com number path 'COM'
16 , deptno number path 'DEPTNO'
17 );
EMPNO ENAME JOB MGR HIREDATE SAL COM DEPTNO
7782 CLARK MANAGER 7839 09-JUN-1981 2450 0 10
7839 KING PRESIDENT 17-NOV-1981 5000 0 10
SQL> -
How can I use sql loader to load a text file into a table
Hi, I need to load a text file that has tab-delimited records into a table. How would I be able to use SQL*Loader to do this? I am using the Korn shell. I am very new at this, so any kind of helpful examples or documentation would be very appreciated. I would love to see some examples to help me understand, if possible. I need help! Thanks a lot!
You should check out the documentation on SQL*Loader in the online Oracle document titled Utilities. Here's a link to the 9iR2 version of it: http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96652/part2.htm#436160
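As a concrete starting point, a minimal control file for a tab-delimited file might look like the sketch below (the file, table, and column names are made-up placeholders; X'09' is the tab character):

```
LOAD DATA
INFILE 'mydata.txt'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY X'09' TRAILING NULLCOLS
(col1, col2, col3)
```

You would invoke it from the Korn shell with something like: sqlldr userid=scott/tiger control=mydata.ctl log=mydata.log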
Hope this helps. -
Loading multiple files from multiple users.
Our system is moving from a standalone app to a web system. The users will have export files generated by our app which they will need to import up to the web. In the web system, the users are connecting via SSO, so apps server is using a single JDBC connection and we are querying the CLIENT_IDENTIFIER at the database end to see who is doing what.
The export file is essentially a zip with the first file being a list of which filenames in the zip translate to what tables in the database they are from.
The new system will require a little work on each file to update certain things prior to actually inserting the data to its final destination.
My confusion is how to best do this. What we essentially need to do is move the data from the text file into a table along with some flag for identifying the user that put it there. Then update the data as needed and finally insert it to the final table destination. The first thought was to use external tables. However if you have two users importing at the same time, how do you differentiate the data? The other idea was to use sqlldr. The trouble was there is no way (that I'm aware of) to be able to add the flag for who's data this is on the way over with sqlldr, it will only bulk copy the data from the file over to the table you specify.
So the basic question is how do I get data for a single table into the system when I have multiple users (SSO signed on using the same DB connection via apps) uploading their own copies of data ultimately headed for the same database table but the data needs a little modification on the way? What's the best way to do this?
Thanks.
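One possible approach, sketched under assumptions (the staging table, external table, and column names below are invented for illustration): stage each upload through an external table, and stamp every row with the session's CLIENT_IDENTIFIER during the insert-select, so concurrent users' rows stay distinguishable in the shared staging table:

```sql
-- Stage the file through an external table, tagging every row with the
-- SSO user taken from the session's CLIENT_IDENTIFIER (which you are
-- already setting, per the question).
INSERT INTO staging_table (client_id, col1, col2)
SELECT SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'), col1, col2
FROM ext_upload_table;
```

Each user's source file would still need to be visible to the external table, e.g. by switching the file per load with ALTER TABLE ... LOCATION (serialized), or by creating one external table per user.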