External Table Load Size
Hi,
Could you please guide me on the maximum flat-file size supported by external tables in 9i and 10g?
Thanks,
Ashish
I am not sure any size limits exist - what size files are you planning to use?
HTH
Srini
Similar Messages
-
External Table Load KUP-04037 Error
I was asked to repost this here. This was a follow-on question to this thread on how to load special characters (diacritics) in the Export/Import/SQL Loader/External Table forum.
I've defined an external table, and on my one instance running the WE8MSWIN1252 character set everything works fine. On my other instance, running AL32UTF8, I get the KUP-04037 error (terminator not found) on the field that has "à" (the letter a with a grave accent). Changing it to a standard "a" avoids the error. Changing the column definition in the external table to NVARCHAR2 does NOT help.
Any ideas anyone?
Thanks,
Bob Siegel
Exactly. If you do not specify the CHARACTERSET parameter, the database character set is used to interpret the input file. As the input file is in WE8MSWIN1252, the ORACLE_LOADER driver gets confused trying to interpret single-byte WE8MSWIN1252 codes as multibyte AL32UTF8 codes.
The character set of the input file depends on the way it was created. Even on US Windows, you can create text files in different encodings. Notepad allows you to save the file in ANSI code page (=WE8MSWIN1252 on US Windows), Unicode (=AL16UTF16LE), Unicode big endian (=AL16UTF16), and UTF-8 (=AL32UTF8). The Command Prompt edit.exe editor saves the files in the OEM code page (=US8PC437 on US Windows).
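To make the advice above concrete, here is a minimal sketch of declaring the file's encoding explicitly in the access parameters (table, directory, and file names are hypothetical):

```sql
-- Declare the flat file's encoding so ORACLE_LOADER converts it to the
-- database character set (AL32UTF8) instead of misreading the bytes.
CREATE TABLE accents_ext (
  word VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET WE8MSWIN1252
    FIELDS TERMINATED BY ','
  )
  LOCATION ('accents.txt')
)
REJECT LIMIT UNLIMITED;
```

With this in place, a single-byte "à" in the file is converted correctly rather than being misinterpreted as part of a multibyte sequence.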
-- Sergiusz -
Need info on using external tables load/write data
Hi All,
We are planning to load conversion/interface data using external tables feature available in the Oracle database.
Also for outbound interfaces, we are planning to use the same feature to write data in the file system.
If you have done similar exercise in any of your projects, please share sample code units. Also, let me know if there
are any cons/limitations in this approach.
Thanks,
Balaji
Please see old threads for similar discussion -- http://forums.oracle.com/forums/search.jspa?threadID=&q=external+AND+tables&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
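For the outbound (write) half of the question: the ORACLE_LOADER driver is read-only, so unloading to the file system is usually done with the ORACLE_DATAPUMP driver via CREATE TABLE ... AS SELECT. A sketch, with hypothetical directory and table names:

```sql
CREATE OR REPLACE DIRECTORY out_dir AS '/u01/interfaces/out';

-- Writes the query result to a binary Data Pump file (not flat text);
-- it can later be read back by another ORACLE_DATAPUMP external table.
CREATE TABLE emp_unload
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY out_dir
  LOCATION ('emp_unload.dmp')
)
AS SELECT * FROM emp;
```

If a flat text file is required instead, UTL_FILE or a spooling client is the usual alternative.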
Thanks,
Hussein -
External Table Loads - Insufficient Privs
Hi...please advise:
Facts:
1. I have 2 schemas: SCHEMA_A and SCHEMA_B
2. I have an oracle directory 'VPP_DIR' created under SCHEMA_A and granted WRITE and READ on the dir to SCHEMA_B.
3. The physical dir on the unix server to which VPP_DIR points has read, write, execute privs for the Oracle user.
4. I have a procedure in SCHEMA_A (CET_PROC) which dynamically creates the external table with parameters passed to it like directory_name, file_name, column_definitions, load_when_clause etc.
5. The CET_PROC also does a grant SELECT on external table to SCHEMA_B once it is created.
6. SCHEMA_B has EXECUTE privs to SCHEMA_A.CET_PROC.
7. SCHEMA_B has a proc (DO_LOAD_PROC) that calls SCHEMA_A.CET_PROC.
At the point where SCHEMA_A.CET_PROC tries to do the EXECUTE IMMEDIATE command with the create table code, it fails with "ORA-01031: insufficient privileges"
If I execute SCHEMA_A.CET_PROC from within SCHEMA_A with the same parameters it works fine.
If I create CET_PROC inside SCHEMA_B and execute this version from within SCHEMA_B it works fine.
From across schemas, it fails. Any advice...please?
Works for me without CREATE ANY TABLE.
I found it easier to follow the permissions if I replaced SCHEMA_A and SCHEMA_B with OVERLORD and FLUNKY.
/Users/williamr: cat /Volumes/Firewire1/william/testexttable.dat
1,Eat,More,Bananas,Today
As SYS:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Account: SYS@//centosvm.starbase.local:1521/dev10g.starbase.local
SQL> CREATE USER overlord IDENTIFIED BY overlord
2 DEFAULT TABLESPACE users QUOTA UNLIMITED ON users ACCOUNT UNLOCK;
User created.
SQL> CREATE USER flunky IDENTIFIED BY flunky
2 DEFAULT TABLESPACE users QUOTA UNLIMITED ON users ACCOUNT UNLOCK;
User created.
SQL> GRANT CREATE SESSION, CREATE TABLE, CREATE PROCEDURE TO overlord,flunky;
Grant succeeded.
SQL> GRANT READ,WRITE ON DIRECTORY extdrive TO overlord;
Grant succeeded.
As OVERLORD:
Account: OVERLORD@//centosvm.starbase.local:1521/dev10g.starbase.local
SQL> get afiedt.buf
1 CREATE OR REPLACE PROCEDURE build_xt
2 ( p_data OUT SYS_REFCURSOR )
3 AS
4 v_sqlstr VARCHAR2(4000) := q'|
5 CREATE TABLE test_xt
6 ( id NUMBER(8)
7 , col1 VARCHAR2(10)
8 , col2 VARCHAR2(10)
9 , col3 VARCHAR2(10)
10 , col4 VARCHAR2(10) )
11 ORGANIZATION EXTERNAL
12 ( TYPE oracle_loader
13 DEFAULT DIRECTORY extdrive
14 ACCESS PARAMETERS
15 ( RECORDS DELIMITED BY newline
16 BADFILE 'testexttable.bad'
17 DISCARDFILE 'testexttable.dsc'
18 LOGFILE 'testexttable.log'
19 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
20 ( id, col1, col2, col3, col4 ) )
21 LOCATION ('testexttable.dat') )
22 |';
23 BEGIN
24 EXECUTE IMMEDIATE v_sqlstr;
25 OPEN p_data FOR 'SELECT * FROM test_xt';
26* END build_xt;
27
28 .
SQL> @afiedt.buf
Procedure created.
SQL> grant execute on build_xt to flunky;
Grant succeeded.
SQL> -- Prove it works:
SQL> var results refcursor
SQL>
SQL> exec build_xt(:results)
PL/SQL procedure successfully completed.
ID COL1 COL2 COL3 COL4
1 Eat More Bananas Today
1 row selected.
SQL> drop table test_xt purge;
Table dropped.
As FLUNKY:
Account: FLUNKY@//centosvm.starbase.local:1521/dev10g.starbase.local
SQL> SELECT * FROM user_sys_privs;
USERNAME PRIVILEGE ADM
FLUNKY CREATE TABLE NO
FLUNKY CREATE SESSION NO
FLUNKY CREATE PROCEDURE NO
3 rows selected.
SQL> var results refcursor
SQL>
SQL> exec overlord.build_xt(:results)
PL/SQL procedure successfully completed.
ID COL1 COL2 COL3 COL4
1 Eat More Bananas Today
1 row selected. -
Hi:
Suppose I have an external table with following two fields
FIELD_A VARCHAR(1),
FIELD_B VARCHAR(1)
Suppose I have a file on which external table is based with following data:
A1
B1
C1
A2
As you can see, in the file I have two rows with a FIELD_A value of A. Can I specify a rule for the external table to only accept the last row, in this case A2?
Thanks,
Thomas
Not sure what your actual data looks like, but you may be able to do something like the following when you select from the external table. You will need to be able to specify what qualifies as the 'last row':
SQL> create table ext_t (
c1 varchar2(20),
c2 varchar2(20))
organization external (
type oracle_loader
default directory my_dir
access parameters (
records delimited by newline
fields terminated by ','
missing field values are null
( c1,
c2 ))
location('test.txt'));
Table created.
SQL> select * from ext_t
C1 C2
A1 some description1
B1 some description2
C1 some description3
A2 some description4
4 rows selected.
SQL> select sc1, c1, c2
from (
select c1, c2,substr(c1,1,1) sc1,
row_number() over (partition by substr(c1,1,1) order by c1 desc) rn
from ext_t)
where rn = 1
SC1 C1 C2
A A2 some description4
B B1 some description2
C C1 some description3
3 rows selected. -
External Table and Direct path load
Hi,
I was just playing with the Oracle SQL*Loader and external table features. A few things I observed are that data loading through the direct path method of SQL*Loader is much faster and takes much less hard disk space than the external table method. Here are the stats I found while loading data:
For Direct Path: -
# OF RECORDS.............TIME...................SOURCE FILE SIZE...................DATAFILE SIZE(.dbf)
478849..........................00:00:43.53...................108,638 KB...................142,088 KB
957697..........................00:01:08.81...................217,365 KB...................258,568 KB
1915393..........................00:02:54.43...................434,729 KB...................509,448 KB
For External Table: -
# OF RECORDS..........TIME...................SOURCE FILE SIZE...................DATAFILE SIZE(.dbf)
478849..........................00:02:51.03...................108,638 KB...................966,408 KB
957697..........................00:08:05.32...................217,365 KB...................1,930,248 KB
1915393..........................00:17:16.31...................434,729 KB...................3,860,488 KB
1915393..........................00:23:17.05...................434,729 KB...................3,927,048 KB
(With PARALLEL)
I used the same files for testing and all other conditions were similar too. In my case the datafile is autoextendable, so its size is automatically increased as required, consuming hard disk space.
The issue is: is this expected behaviour? Why is so much more hard disk space used with external tables compared to the direct path method? Performance of the external table load is also very bad compared to the direct path load.
One more thing: an external table load with the PARALLEL option should ideally take less time, but what I actually get is more time than without the PARALLEL option.
In both cases I am loading data from the same file to the same table (once using direct path and once using the external table). Before every fresh load I truncate the internal table into which the data was loaded.
any views??
Deep
Message was edited by:
Deep
Thanx to all for your suggestions.
John, my scripts are as follows:
for external table:
CREATE TABLE LOG_TBL_LOAD
(COL1 CHAR(20), COL2 CHAR(2), COL3 CHAR(20), COL4 CHAR(400),
COL5 CHAR(20), COL6 CHAR(400), COL7 CHAR(20), COL8 CHAR(20),
COL9 CHAR(400), COL10 CHAR(400))
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY EXT_TAB_DIR
ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY WHITESPACE OPTIONALLY ENCLOSED BY '"' MISSING FIELD VALUES ARE NULL)
LOCATION ('LOGZ3.DAT'))
REJECT LIMIT 10;
for loading i did:
INSERT INTO LOG_TBL (COL1, COL2, COL3, COL4,COL5, COL6,
COL7, COL8, COL9, COL10)
(SELECT COL1, COL2, COL3, COL4, COL5, COL6, COL7, COL8,
COL9, COL10 FROM LOG_TBL_load_1);
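One likely contributor to the timing gap reported in this thread: SQL*Loader ran with DIRECT=TRUE, while a plain INSERT ... SELECT from an external table is a conventional-path load. The external-table route can use direct path too; a sketch reusing the external table name defined above:

```sql
-- The APPEND hint requests a direct-path insert: blocks are formatted in
-- memory and written above the high-water mark, bypassing the buffer cache.
INSERT /*+ APPEND */ INTO log_tbl
SELECT * FROM log_tbl_load;
COMMIT; -- required before the session can read the table again
```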
for direct path my control file is like this:
OPTIONS (
DIRECT = TRUE)
LOAD DATA
INFILE 'F:\DATAFILES\LOGZ3.DAT' "str '\n'"
INTO TABLE LOG_TBL
APPEND
FIELDS TERMINATED BY WHITESPACE OPTIONALLY ENCLOSED BY '"'
(COL1 CHAR(20),
COL2 CHAR(2),
COL3 CHAR(20),
COL4 CHAR(400),
COL5 CHAR(20),
COL6 CHAR(400),
COL7 CHAR(20),
COL8 CHAR(20),
COL9 CHAR(400),
COL10 CHAR(400))
And yes, I have used the same table in both situations. After each load I truncate my table, LOG_TBL. I used the same source file, LOGZ3.DAT.
My tablespace USERS is locally managed.
thanks -
While loading through External Tables, Japanese characters wrong load
Hi all,
I am loading a text file through external tables. While loading, Japanese characters are loaded as junk characters. In the text file, the characters display correctly.
My spool file
SET ECHO OFF
SET VERIFY OFF
SET Heading OFF
SET LINESIZE 600
SET NEWPAGE NONE
SET PAGESIZE 100
SET feed off
set trimspool on
spool c:\SYS_LOC_LOGIC.txt
select CAR_MODEL_CD||',' || MAKER_CODE||',' || CAR_MODEL_NAME_CD||',' || TYPE_SPECIFY_NO||',' ||
CATEGORY_CLASS_NO||',' || SPECIFICATION||',' || DOOR_NUMBER||',' || RECOGNITION_TYPE||',' ||
TO_CHAR(SALES_START,'YYYY-MM-DD') ||',' || TO_CHAR(SALES_END,'YYYY-MM-DD') ||',' || LOGIC||',' || LOGIC_DESCRIPTION
from Table where rownum < 100;
spool off
My External table load script
CREATE TABLE SYS_LOC_LOGIC
( CAR_MODEL_CD NUMBER,
MAKER_CODE NUMBER,
CAR_MODEL_NAME_CD NUMBER,
TYPE_SPECIFY_NO NUMBER,
CATEGORY_CLASS_NO NUMBER,
SPECIFICATION VARCHAR2(300),
DOOR_NUMBER NUMBER,
RECOGNITION_TYPE VARCHAR2(30),
SALES_START DATE,
SALES_END DATE,
LOGIC NUMBER,
LOGIC_DESCRIPTION VARCHAR2(100) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY XMLTEST1
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
( CAR_MODEL_CD, MAKER_CODE, CAR_MODEL_NAME_CD, TYPE_SPECIFY_NO,
CATEGORY_CLASS_NO, SPECIFICATION, DOOR_NUMBER, RECOGNITION_TYPE,
SALES_START date 'yyyy-mm-dd', SALES_END date 'yyyy-mm-dd',
LOGIC, LOGIC_DESCRIPTION ) )
LOCATION ('SYS_LOC_LOGIC.txt') )
--location ('products.csv')
REJECT LIMIT UNLIMITED;
How to solve this.
Thanks in advance,
Pal
Just so I'm clear, user1 connects to the database server and runs the spool to generate a flat file from the database. User2 then uses that flat file to load that data back in to the same database? If the data isn't going anywhere, I assume there is a good reason to jump through all these unload and reload hoops rather than just moving the data from one table to another...
What is the NLS_LANG set in the client's environment when the spool is generated? Note that the NLS_CHARACTERSET is a database setting, not a client setting.
What character set is the text file? Are you certain that the text file is UTF-8 encoded? And not encoded using the operating system's local code page (assuming the operating system is capable of displaying Japanese text)
There is a CHARACTERSET parameter for the external table definition, but that should default to the character set of the database.
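Following the point above, the encoding can be declared explicitly instead of relying on the default. A cut-down sketch (column list abbreviated; the CHARACTERSET value is an assumption about how the spool file was actually saved):

```sql
CREATE TABLE sys_loc_logic_ext (
  car_model_cd  NUMBER,
  specification VARCHAR2(300)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY xmltest1
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET JA16SJIS  -- assumption: file is Shift-JIS; use AL32UTF8 if it is UTF-8
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('SYS_LOC_LOGIC.txt')
)
REJECT LIMIT UNLIMITED;
```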
Justin -
hi
I use Windows 2008R2 Std. and Oracle 11gR2 RAC
I have successfully mounted ACFS share
ASMCMD> volinfo -a
Diskgroup Name: SHARED
Volume Name: SHARED_ACFS
Volume Device: \\.\asm-shared_acfs-106
State: ENABLED
Size (MB): 8192
Resize Unit (MB): 256
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: C:\SHARED
I have created directory in Oracle mapped onto ACFS share
and granted read,write access to it to my user
Then I created external table with success BUT...
though I see metadata
ADM@proton22> desc t111;
Name Null? Type
NAME VARCHAR2(4000)
VALUE VARCHAR2(4000)
I got error:
ADM@proton22> select * from t111;
select * from t111
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04027: file name check failed: C:\SHARED\EXTTAB\EXT_PARAM.log
How to cope with this?
I granted "full control" privileges to "everyone" user at OS level with no avail.
Edited by: g777 on 2011-06-01 13:47
SORRY, I MOVED TO RAC FORUM.
See "Bug 14045247 : KUP-04027 ERROR WHEN QUERY DATA FROM EXTERNAL TABLE ON ACFS" in MOS.
This is actually reported as not being a Bug:
"An ACFS directory on the MS-Windows platform is implemented as a JUNCTION,and is therefore a symbolic link. Therefore, DISABLE_DIRECTORY_LINK_CHECK needs to be used, or a non-ACFS directory."
i.e. when creating the External Table, the DISABLE_DIRECTORY_LINK_CHECK Clause must be used if using the ORACLE_LOADER Access Driver
e.g. CREATE TABLE ...
... ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER ... ACCESS PARAMETERS (RECORDS ... DISABLE_DIRECTORY_LINK_CHECK))
For full syntax see: http://docs.oracle.com/cd/E11882_01/server.112/e22490/et_params.htm
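Putting the clause in context, a minimal sketch (table and directory names hypothetical):

```sql
CREATE TABLE t111_ext (
  name  VARCHAR2(4000),
  value VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY acfs_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    -- Needed because an ACFS directory on Windows is a junction (symbolic link):
    DISABLE_DIRECTORY_LINK_CHECK
    FIELDS TERMINATED BY ','
  )
  LOCATION ('ext_param.dat')
)
REJECT LIMIT UNLIMITED;
```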
Also note Security Implications mentioned in above documentation:
"Use of this parameter involves security risks because symbolic links can potentially be used to redirect the input/output of the external table load operation" -
External Table Performance and Sizing
Hi,
Can anyone tell me anything about best practices for external tables?
I have an application that writes structured log data to flat files. The size of these files can be configured and when the size limit is reached, they are rolled over. The data itself is queriable via an external table in oracle. Every so often the data is migrated (materialized) to a normal database table so it can be indexed, etc. and to keep the external file size down.
My questions are:
<ol><li> is there an optimum file size for an external table (overall size / number of rows) - by that, I suppose I mean, is there a limit where performance degrades significantly rather than constantly?
</li>
<li>is it better to have one large file mapped to the external table or multiple smaller ones mapped to the same table? e.g. does oracle do some parallel work on multiple smaller files at the same time which might improve things?
</li>
</ol>
If there are any resources discussing these issues, that would be great - or if there is any performance data for external tables in this respect, I would love to see it.
Many thanks,
Dave
Hi Dave
is there an optimum file size for an external table (overall size / number of rows) - by
that, I suppose I mean, is there a limit where performance degrades significantly rather
than constantly?
AFAIK there is no such limit. In other words, access time is proportional to the size (number of rows).
is it better to have one large file mapped to the external table or multiple smaller ones
mapped to the same table? e.g. does oracle do some parallel work on multiple smaller
files at the same time which might improve things?
The DOP of a parallel query on an external table is limited by the number of files. Therefore, to use parallel processing, more than one file is needed.
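The DOP point above can be sketched as follows: list several files in LOCATION so a parallel query has one granule per file (all names here are hypothetical):

```sql
CREATE TABLE logs_ext (
  msg VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY log_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY ',')
  LOCATION ('app_1.log', 'app_2.log', 'app_3.log', 'app_4.log')
)
REJECT LIMIT UNLIMITED
PARALLEL 4; -- effective DOP is capped by the number of files here

-- Or request parallelism per query:
SELECT /*+ PARALLEL(logs_ext, 4) */ COUNT(*) FROM logs_ext;
```

This is one reason rolling log files over at a size limit (as in the question) can actually help query performance.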
HTH
Chris Antognini
Troubleshooting Oracle Performance, Apress 2008
http://top.antognini.ch -
Use of External tables to load XML data.
Hi,
I have used external table definitions to load various XML files to the database, usually splitting the XML into separate records - 1 per major element tag - and using PL/SQL to parse out a primary key to store in a relational table, with all of the XML relevant to that primary key value stored as an XMLTYPE column in a row of the table. This has worked fine for XML with a single major entity (element tag).
However, I now have an XML file that contains two "major" elements (both children of the root) that I would like to split out and store in separate tables.
The XML file is of the following basic format:-
<drugs>
<drug>drug 1...</drug>
<drug>drug 2...</drug>
<partners>
<partner>partner 1</partner>
<partner>partner 2</partner>
</partners>
</drugs>
The problem is there are around 18000 elements of the first type, followed by several thousand of the 2nd type. I can create two separate external tables - one for each element type - but how do I get the external table for the 2nd to ignore all the elements of the first type? My external table definition is:-
CREATE TABLE DRUGBANK_OWNER.DRUGBANK_PARTNERS_XML_EXTERNAL
( DRUGBANK_XML CLOB )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY DRUGBANK_DIR
ACCESS PARAMETERS
( records delimited by "</partner>" SKIP 100000
characterset al32utf8
badfile extlogs:'drugbank_partners_xml.bad'
logfile extlogs:'drugbank_partners_xml.log'
discardfile extlogs:'drugbank_partners_xml.dis'
READSIZE 52428800
fields
( drugbank_xml CHAR(50000000) terminated by '</partners>' ) )
LOCATION (DRUGBANK_DIR:'drugbank.xml') )
REJECT LIMIT UNLIMITED
PARALLEL ( DEGREE 8 INSTANCES 1 )
NOMONITORING;
The problem is that before the first <partners> element, the 18000 or so <drug> elements cause a data cartridge error:-
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04020: found record longer than buffer size supported, 52428800
This happens regardless of the value of the SKIP or the size of the drugbank_xml field.
I have tried using an OR on the "records delimited by" access parameter, to 'delimit by "</partner>" OR "</drug>"', with the intention of filtering out the <drug> elements but this leads to a syntax error.
Anyone ever tried anything similar and got it to work?
Any other suggestions?
Thanks,
Sid.
No, the content inside quotes is spanned across multiple lines....there are line breaks after every HTML tag.
"What's the error message you are getting?"
I am not getting any error while selecting from the external table, but I am getting those rows in the BAD file, and the log file has the following entries:
KUP-04021: field formatting error for field TKBS_DSCN
KUP-04036: second enclosing delimiter not found
Message was edited by:
user627610 -
SQL*LOADER (8i): Loading variable-size fields into multiple tables (FILLER)
Product: ORACLE SERVER
Date written: 2004-10-29
==================================================================
SQL*LOADER (8i): LOADING VARIABLE-SIZE FIELDS INTO MULTIPLE TABLES (FILLER)
==================================================================
PURPOSE
This note introduces how to load a data file with variable-length records and variable-size
fields into multiple tables with SQL*Loader.
(It uses the FILLER clause, a new feature in 8i.)
Explanation
SQL*LOADER SYNTAX
To load into multiple tables, put the following in the control file:
INTO TABLE emp
INTO TABLE emp1
To load the same data from a data file with fixed-length fields into multiple tables, do the following:
INTO TABLE emp
(empno POSITION(1:4) INTEGER EXTERNAL,
INTO TABLE emp1
(empno POSITION(1:4) INTEGER EXTERNAL,
As above, positions 1 through 4 of the data being loaded go into the empno field of each table. However, if the field lengths are variable, the POSITION clause cannot be used for each field like this.
Example
Example 1>
create table one (
field_1 varchar2(20),
field_2 varchar2(20),
empno varchar(10) );
create table two (
field_3 varchar2(20),
empno varchar(10) );
Assume the records to load are separated by commas and variable in length.
<< data.txt >> - data file to load
"this is field 1","this is field 2",12345678,"this is field 4"
<< test.ctl >> - control file
load data infile 'data.txt'
discardfile 'discard.txt'
into table one
replace
fields terminated by ","
optionally enclosed by '"' (
field_1,
field_2,
empno )
into table two
replace
fields terminated by ","
optionally enclosed by '"' (
field_3,
dummy1 filler position(1),
dummy2 filler,
empno )
The dummy1 field is declared as a FILLER; a field declared as FILLER is not loaded into the table.
The table two has no field named dummy1. position(1) means that the first field, starting from
the beginning of the current record, is loaded into the dummy1 filler item; the second field is
then loaded into the dummy2 filler item. The third field - the employee number that was loaded
into table one - is thus loaded into table two as well.
<< Execution >>
$sqlldr scott/tiger control=test.ctl data=data.txt log=test.log bindsize=300000
$sqlplus scott/tiger
SQL> select * from one;
FIELD_1 FIELD_2 EMPNO
this is field 1 this is field 2 12345678
SQL> select * from two;
FIELD_3 EMPNO
this is field 4 12345678
Example 2>
create table testA (c1 number, c2 varchar2(10), c3 varchar2(10));
<< data1.txt >> - data file to load
7782,SALES,CLARK
7839,MKTG,MILLER
7934,DEV,JONES
<< test1.ctl >>
LOAD DATA
INFILE 'data1.txt'
INTO TABLE testA
REPLACE
FIELDS TERMINATED BY ","
( c1 INTEGER EXTERNAL,
c2 FILLER CHAR,
c3 CHAR )
<< Execution >>
$ sqlldr scott/tiger control=test1.ctl data=data1.txt log=test1.log
$ sqlplus scott/tiger
SQL> select * from testA;
C1 C2 C3
7782 CLARK
7839 MILLER
7934 JONES
Reference Documents
<Note:74719.1> -
How to resolve error when Loading External Table?
I'm getting the following errors when attempting to load an external table -- I've changed the file extension from .csv to .txt to resolve ORA-29913 but the error re-occurred. See the syntax of the external table below.
Thanks,
Carol-Ann
SQL> desc OWB_COUNTY_TIMEZONE_EXT;
Name Null? Type
STATE_CODE VARCHAR2(2)
COUNTY_CODE VARCHAR2(3)
TIME_ZONE VARCHAR2(1)
SQL> select count(*) from OWB_COUNTY_TIMEZONE_EXT;
select count(*) from OWB_COUNTY_TIMEZONE_EXT
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04063: unable to open log file owb_county_timezone.log
OS error Permission denied
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1
++++++++++++++++++++++++++++++++++++++++++++++
Syntax of External Table:
WHENEVER SQLERROR EXIT FAILURE;
CREATE TABLE "OWB_COUNTY_TIMEZONE_EXT"
( "STATE_CODE" VARCHAR2(2),
"COUNTY_CODE" VARCHAR2(3),
"TIME_ZONE" VARCHAR2(1))
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY AIMQRYD_AIMP_LOC_FF_MODULE_LOC
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
CHARACTERSET WE8MSWIN1252
STRING SIZES ARE IN BYTES
BADFILE 'owb_county_timezone'
DISCARDFILE 'owb_county_timezone'
LOGFILE 'owb_county_timezone'
FIELDS
TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"' AND '"'
NOTRIM
MISSING FIELD VALUES ARE NULL
( "STATE_CODE",
"COUNTY_CODE",
"TIME_ZONE" ) )
LOCATION (
AIMQRYD_AIMP_LOC_FF_MODULE_LOC:'county_timezone_comma.txt'
) )
REJECT LIMIT UNLIMITED
NOPARALLEL;
Hi Carol-Ann,
The key issue here is
"KUP-04063: unable to open log file owb_county_timezone.log
OS error Permission denied".
Looks like you don't have sufficient system privileges on Unix.
This is what AskTom mentions about it:
"the directory must exist on the SERVER.
the concerned user is the "oracle software owner" as far as the OS is concerned.
oracle must have read write access to this directory
and the directory must exist on the SERVER (database server) itself."
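A common way to clear KUP-04063 along those lines: point the log/bad files at a directory the oracle OS user can certainly write to, or suppress them. A sketch (the path and grantee below are placeholders):

```sql
CREATE OR REPLACE DIRECTORY ext_log_dir AS '/u01/app/oracle/ext_logs';
GRANT READ, WRITE ON DIRECTORY ext_log_dir TO owb_runtime_user;

-- Then, inside ACCESS PARAMETERS, either redirect the files:
--   LOGFILE ext_log_dir:'owb_county_timezone.log'
--   BADFILE ext_log_dir:'owb_county_timezone.bad'
-- or suppress them entirely with: NOLOGFILE NOBADFILE
```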
Hope this helps.
Cheers, Patrick -
How to change the NLS_NUMERIC_CHARACTERS parameter for an external table load
Hi,
I use this version:
OWB 11gR2
Database 11gR2
Parameter NLS_NUMERIC_CHARACTERS Database ., Instance ,.
When I created the database with the wizard I did not set the Spanish language at that moment; later I changed these parameters in the instance parameters.
Now I want to load data from a file into an external table, but I get an error when I try to load data with a decimal point.
Why does it use the database parameter instead of the instance parameter?
Is it possible to change this parameter?
Cheers
Marisol
At this moment, this is not possible. You can see Metalink note ID 268906.1.
It says:
"Currently, external tables always use the setting of NLS_NUMERIC_CHARACTERS
at the database level."
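Given that restriction, a common workaround is to define the column as VARCHAR2 in the external table and convert at query time, passing the separators to TO_NUMBER explicitly (column and table names here are hypothetical):

```sql
-- Interpret ',' as the decimal separator and '.' as the group separator,
-- regardless of the database-level NLS_NUMERIC_CHARACTERS setting.
SELECT TO_NUMBER(amount_str, '999G999G999D99999',
                 'NLS_NUMERIC_CHARACTERS='',.''') AS amount
FROM   my_file_ext;
```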
Cheers
Marisol -
External table.How to load numbers (decimal and scientific notation format)
Hi all, I need to load into an external table records that contain 7 fields. The last field is called AMOUNT and it's represented in some records in decimal format and in others in scientific notation, as, for example, below:
CY001_STATU;2009;Jan;11220020GR;'03900;CYZ900;-9,99999999839929e-03
CY001_STATU;2009;Jan;11200100;'60800;CYZ900;41380,77
The External table's script is the following:
CREATE TABLE HYP_DATA
( COUNTRY VARCHAR2(50 BYTE),
YEAR VARCHAR2(20 BYTE),
PERIOD VARCHAR2(20 BYTE),
ACCOUNT VARCHAR2(50 BYTE),
DEPT VARCHAR2(20 BYTE),
ACTIVITY_LOC VARCHAR2(20 BYTE),
AMOUNT VARCHAR2(50 BYTE) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY HYP_DATA_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
BADFILE 'HYP_BAD_DIR':'HYP_LOAD.bad'
DISCARDFILE 'HYP_DISCARD_DIR':'HYP_LOAD.dsc'
LOGFILE 'HYP_LOG_DIR':'HYP_LOAD.log'
SKIP 0
FIELDS TERMINATED BY ";"
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
( "COUNTRY" Char,
"YEAR" Char,
"PERIOD" Char,
"ACCOUNT" Char,
"DEPT" Char,
"ACTIVITY_LOC" Char,
"AMOUNT" Char ) )
LOCATION (HYP_DATA_DIR:'Total.txt') )
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;
If, for the field AMOUNT I use the datatype VARCHAR (as above), the table is loaded but I have some records rejected, and all these records contain the last field AMOUNT with the scientific notation as:
CY001_STATU;2009;Jan;11220020GR;'03900;CYZ900;-9,99999999839929e-03
CY001_STATU;2009;Feb;11220020GR;'03900;CYZ900;-9,99999999839929e-03
CY001_STATU;2009;Mar;11220020GR;'03900;CYZ900;-9,99999999839929e-03
CY001_STATU;2009;Dec;11220020GR;'03900;CYZ900;-9,99999999839929e-03
All the others records with a decimal AMOUNT are loaded correctly.
So, my problem is that I NEED to load all the records (with the decimal and the scientific notation format) together (without records rejected), but I don't know which datatype I have to use for the AMOUNT field....
Anybody has any idea ???
Any help would be appreciated
Thanks in advance
Alex
@OP,
What version of Oracle are you using?
Just cut'n'pasting your script and example worked FINE for me.
However my question is... An external table will LOAD all data or none at all. How are you validating/concluding that...
I have some records rejected, and all these records contain the last field AMOUNT with the scientific notation
select * from v$version where rownum <2;
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
select * from mydata;
CY001_STATU 2009 Jan 11220020GR '03900 CYZ900 -9,99999999839929e-03
CY001_STATU 2009 Feb 11220020GR '03900 CYZ900 -9,99999999839929e-03
CY001_STATU 2009 Jan 11220020GR '03900 CYZ900 -9,99999999839929e-03
CY001_STATU 2009 Jan 11200100 '60800 CYZ900 41380,77
CY001_STATU 2009 Mar 11220020GR '03900 CYZ900 -9,99999999839929e-03
CY001_STATU 2009 Dec 11220020GR '03900 CYZ900 -9,99999999839929e-03
CY001_STATU 2009 Jan 11220020GR '03900 CYZ900 -9,99999999839929e-03
CY001_STATU 2009 Jan 11200100 '60800 CYZ900 41380,77
MYDATA table script is...
drop table mydata;
CREATE TABLE mydata
( COUNTRY VARCHAR2(50 BYTE),
YEAR VARCHAR2(20 BYTE),
PERIOD VARCHAR2(20 BYTE),
ACCOUNT VARCHAR2(50 BYTE),
DEPT VARCHAR2(20 BYTE),
ACTIVITY_LOC VARCHAR2(20 BYTE),
AMOUNT VARCHAR2(50 BYTE) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY IN_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
BADFILE 'IN_DIR':'HYP_LOAD.bad'
DISCARDFILE 'IN_DIR':'HYP_LOAD.dsc'
LOGFILE 'IN_DIR':'HYP_LOAD.log'
SKIP 0
FIELDS TERMINATED BY ";"
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
( "COUNTRY" Char,
"YEAR" Char,
"PERIOD" Char,
"ACCOUNT" Char,
"DEPT" Char,
"ACTIVITY_LOC" Char,
"AMOUNT" Char ) )
LOCATION (IN_DIR:'total.txt') )
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;
vr,
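For the mixed decimal/scientific AMOUNT values in this thread, keeping the column as VARCHAR2 (as both scripts above do) and converting at query time is one option: TO_NUMBER accepts exponent notation directly once the decimal separator matches. A sketch, assuming the comma in the file is a decimal separator and the session's decimal separator is '.':

```sql
SELECT TO_NUMBER(REPLACE(amount, ',', '.')) AS amount_n
FROM   hyp_data;
-- '41380,77'              -> 41380.77
-- '-9,99999999839929e-03' -> -.00999999999839929
```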
Sudhakar B. -
External table: How to load data from a fixed format UTF8 external file
Hi Experts,
I am trying to read data from a fixed-format UTF8 external file into an external table. The file has non-ascii characters, and the presence of the non-ascii characters causes the data to be positioned incorrectly in the external table.
The following is the content's of the file:
20100423094529000000I1 ABÄCDE 1 000004
20100423094529000000I2 OMS Crew 2 2 000004
20100423094529000000I3 OMS Crew 3 3 000004
20100423094529000000I4 OMS Crew 4 4 000004
20100423094529000000I5 OMS Crew 5 5 000004
20100423094529000000I6 OMS Crew 6 6 000004
20100423094529000000I7 Mobile Crew 7 7 000004
20100423094529000000I8 Mobile Crew 8 8 000004
The structure of the data is as follows:
Name Type Start End Length
UPDATE_DTTM CHAR 1 20 20
CHANGE_TYPE_CD CHAR 21 21 1
CREW_CD CHAR 22 37 16
CREW_DESCR CHAR 38 97 60
CREW_ID CHAR 98 113 16
UDF1_CD CHAR 114 143 30
UDF1_DESCR CHAR 144 203 60
UDF2_CD CHAR 204 233 30
DATA_SOURCE_IND CHAR 294 299 6
UDF2_DESCR CHAR 234 293 60
I create the external table as follows:
CREATE TABLE "D_CREW_EXT"
( "UPDATE_DTTM" CHAR(20 BYTE),
"CHANGE_TYPE_CD" CHAR(1 BYTE),
"CREW_CD" CHAR(16 BYTE),
"CREW_DESCR" CHAR(60 BYTE),
"CREW_ID" CHAR(16 BYTE),
"UDF1_CD" CHAR(30 BYTE),
"UDF1_DESCR" CHAR(60 BYTE),
"UDF2_CD" CHAR(30 BYTE),
"DATA_SOURCE_IND" CHAR(6 BYTE),
"UDF2_DESCR" CHAR(60 BYTE) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER DEFAULT DIRECTORY "TMP"
ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE
CHARACTERSET UTF8
STRING SIZES ARE IN BYTES
NOBADFILE NODISCARDFILE NOLOGFILE FIELDS NOTRIM
( "UPDATE_DTTM" POSITION (1:20) CHAR(20),
"CHANGE_TYPE_CD" POSITION (21:21) CHAR(1),
"CREW_CD" POSITION (22:37) CHAR(16),
"CREW_DESCR" POSITION (38:97) CHAR(60),
"CREW_ID" POSITION (98:113) CHAR(16),
"UDF1_CD" POSITION (114:143) CHAR(30),
"UDF1_DESCR" POSITION (144:203) CHAR(60),
"UDF2_CD" POSITION (204:233) CHAR(30),
"DATA_SOURCE_IND" POSITION (294:299) CHAR(6),
"UDF2_DESCR" POSITION (234:293) CHAR(60) ) )
LOCATION ( 'D_CREW_EXT.DAT' ) )
REJECT LIMIT UNLIMITED;
Check the result in database:
select * from D_CREW_EXT;
I found the first row is incorrect. For each non-ascii character, the fields to the right of the non-ascii character are off by 1 character, meaning that the data is moved 1 character to the right.
Then I tried to use the option STRING SIZES ARE IN CHARACTERS instead of STRING SIZES ARE IN BYTES, it doesn't work either.
The database version is 11.1.0.6.
Edited by: yuan on May 21, 2010 2:43 AM
Hi,
I changed BYTE to CHAR in the create table part, but it still doesn't work. The result is the same. I think the problem is in the ACCESS PARAMETERS.
Any other suggestion?
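One thing worth trying (an untested sketch): in ORACLE_LOADER, fields that have a fixed size and no POSITION clause are read sequentially, so listing them in file order and stating sizes in characters may keep multibyte characters from shifting the later fields. Note the field order here follows the file layout (UDF2_DESCR before DATA_SOURCE_IND):

```sql
ACCESS PARAMETERS (
  RECORDS DELIMITED BY NEWLINE
  CHARACTERSET UTF8
  STRING SIZES ARE IN CHARACTERS  -- sizes below are character counts, not bytes
  NOBADFILE NODISCARDFILE NOLOGFILE
  FIELDS NOTRIM (
    "UPDATE_DTTM"     CHAR(20),
    "CHANGE_TYPE_CD"  CHAR(1),
    "CREW_CD"         CHAR(16),
    "CREW_DESCR"      CHAR(60),
    "CREW_ID"         CHAR(16),
    "UDF1_CD"         CHAR(30),
    "UDF1_DESCR"      CHAR(60),
    "UDF2_CD"         CHAR(30),
    "UDF2_DESCR"      CHAR(60),
    "DATA_SOURCE_IND" CHAR(6)
  )
)
```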