External table varchar2(4000 byte) field storing 255 bytes only

Hi all,
Wondering if someone can tell me what I'm missing here.
I have an external table with a column defined as VARCHAR2(4000 BYTE). The file has a row in it of 255 characters (all the digit 2, for simplicity). When I query the table, all is well. If I add one more 2 to the string (256 characters), it fails. I'm sure it's something stupidly simple, but what am I missing? Shouldn't it query fine up to 4000 characters?
thanks,
Dave

I ran your testcase, thanks for that.
Make sure to read the SQL and PL/SQL FAQ as well (the first sticky thread on this forum), it explains how to post formatted code and lots of other stuff.
Anyway, .log file gave me:
LOG file opened at 07/18/11 20:05:33
Field Definitions for table DAVEP2
  Record format DELIMITED, delimited by 0A
  Data in file has same endianness as the platform
  Rows with all null fields are accepted
  Fields in Data Source:
    MY_STRING                       CHAR (255)
      Terminated by ","
      Enclosed by """ and """
      Trim whitespace same as SQL Loader
So, what happens if you create the table as follows:
CREATE TABLE davep2 (
   my_string         VARCHAR2(4000 BYTE)         NULL
)
ORGANIZATION EXTERNAL
   (  TYPE ORACLE_LOADER
      DEFAULT DIRECTORY FILE_TEST
      ACCESS PARAMETERS
        ( RECORDS DELIMITED BY 0x'0A'
          BADFILE FILE_TEST:'davep2.bad'
          LOGFILE FILE_TEST:'davep2.log'
          FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' AND '"'
          MISSING FIELD VALUES ARE NULL
          ( my_string CHAR(4000) )
        )
      LOCATION (FILE_TEST:'DaveP.csv')
   )
REJECT LIMIT 0 NOPARALLEL;
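The log output is the giveaway: with no datatype in the field list, the access driver defaults every field to CHAR(255). With the explicit char(4000) in place, a quick check along these lines should show the 256-character row loading (a sketch; assumes the FILE_TEST directory and DaveP.csv from your test case):
-- expect one row per line in the file, with lengths up to 4000
SELECT LENGTH(my_string) AS len
  FROM davep2;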

Similar Messages

  • External table is not accepting more than 255 Characters

    Hi,
    I'm new to external tables. Somehow the external table is not accepting more than 255 characters even though I'm using VARCHAR2(4000 BYTE). Can you please help me?
    CREATE TABLE DM_CL_ExterTbl_Project (
      project_name     VARCHAR2(80 BYTE),
      project_id       VARCHAR2(20 BYTE),
      work_type        VARCHAR2(100 BYTE),
      work_description VARCHAR2(4000 BYTE)
    )
    ORGANIZATION EXTERNAL
      ( TYPE ORACLE_LOADER
        DEFAULT DIRECTORY UTL_FILE_DIR
        ACCESS PARAMETERS
          ( records delimited by '#(_@p9#' SKIP 1
            logfile 'pp.log'
            badfile 'pp1.bad'
            fields terminated by ',' OPTIONALLY ENCLOSED BY '"' and '"' LDRTRIM
            missing field values are null
            REJECT ROWS WITH ALL NULL FIELDS
            ( project_name,
              project_id,
              work_type,
              work_description )
          )
        LOCATION (UTL_FILE_DIR:'TOG_Data_Extract.csv')
      )
    REJECT LIMIT UNLIMITED
    NOPARALLEL
    NOMONITORING;
    Thanks in advance..
    ~~Manju

    I got the answer: in the field list I have to specify the datatype and its size, otherwise it defaults to CHAR(255).
    work_type CHAR(4000) solved the problem!
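    For the record, the corrected field list would look something like this (a sketch; the essential part is giving the 4000-byte column an explicit loader size instead of the CHAR(255) default):
    ( project_name     CHAR(80),
      project_id       CHAR(20),
      work_type        CHAR(100),
      work_description CHAR(4000) )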

  • Type "CURR - Currency field, stored as DEC" only contains 2 decimal digits?

    We have a key figure with the data type "CURR - Currency field, stored as DEC" mapped to an R3 field with 3 decimal places, but the CURR data type in BW/BI holds only 2 decimal places, so the last digit of the R3 field value is dropped.
    Any idea for a workaround?
    Thanks!

    Go to RSA1, open the key figure InfoObject, and change its properties.
    Ravi Thothadri

  • Purchase Requisition number : In which table the PR currency field stored?

    When we raise a purchase requisition (transaction codes ME53N / ME21N),
    the PR number, item number, PR doc type, PR group, plant, etc. are stored in the EBAN table.
    The PR amount can be calculated as MENGE * PREIS / PEINH from table EBAN.
    Where are PR amount currencies like 'EUR' or 'GBP' stored?
    How can we relate that table to EBAN/EBKN?
    Please let me know the exact table for this.

    Hi,
    Currency keys are stored in the TCURC table (all currency keys are available there).
    EBAN-WAERS is the field where the currency key value is stored.
    Regards,
    Venkatesh
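    For example, a query along these lines returns the currency key per requisition item (a sketch, assuming direct SQL access to the SAP tables; in ABAP the same join works as an Open SQL SELECT):
    SELECT e.banfn AS pr_number,
           e.bnfpo AS pr_item,
           e.menge * e.preis / e.peinh AS pr_amount,
           e.waers AS currency_key
      FROM eban e
      JOIN tcurc t
        ON t.waers = e.waers;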

  • External Table loading?

    Hi:
    Suppose I have an external table with the following two fields:
    FIELD_A VARCHAR(1),
    FIELD_B VARCHAR(1)
    Suppose the file on which the external table is based contains the following data:
    A1
    B1
    C1
    A2
    As you can see, the file has two rows with a FIELD_A value of A. Can I specify a rule for the external table to accept only the last such row, in this case A2?
    Thanks,
    Thomas

    Not sure what your actual data looks like, but you may be able to do something like the following when you select from the external table. You will need to be able to specify what qualifies as the 'last row':
    SQL> create table ext_t (
       c1 varchar2(20),
       c2 varchar2(20))
    organization external
       (type oracle_loader
        default directory my_dir
        access parameters
          (records delimited by newline
           fields terminated by ','
           missing field values are null
           ( c1,
             c2 ))
        location('test.txt'));
    Table created.
    SQL> select * from ext_t;
    C1                   C2                 
    A1                   some descrition1   
    B1                   some descrition2   
    C1                   some descrition3   
    A2                   some descrition4   
    4 rows selected.
    SQL> select sc1, c1, c2
         from (
            select c1, c2, substr(c1,1,1) sc1,
                   row_number() over (partition by substr(c1,1,1) order by c1 desc) rn
            from ext_t)
         where rn = 1;
    SC1  C1                   C2                 
    A    A2                   some descrition4   
    B    B1                   some descrition2   
    C    C1                   some descrition3   
    3 rows selected.

  • 11g hybrid authentication / authorization: WLS plus external table

    I've implemented external table authentication / authorization in 11g. Now I'd like to add a twist.
    I have an external table containing users B, C, and D. That external table contains all of the columns I need for authentication (including a clear text password) and for authorization (roles, log level, a dynamic table name, and so forth). I have authentication in one initialization block, authorization in another. Everything works fine. I can log in as B, C, or D and see exactly what I'm supposed to see, based on the ROLES.
    The clear text passwords are generally not a problem, because this is a training instance and almost all of the passwords are the same. However, I want to add a user whose password should not be held in clear text. For that reason, I'd like to add that user into WLS. I've done that, and I'm able to log in to OBIEE. After confirming that I could log in to OBIEE with user A from the WLS, I added User A to the external table, left its password field blank, and filled in the other columns (roles, loglevel, etc...) that I need to assign into session variables.
    Here's the problem: the authorization init block properly assigns ALL session variables for users B, C, and D. It assigns all session variables EXCEPT the ROLES variable for user A. I've confirmed this by creating an Answers analysis that shows me the values of the session variables. The ROLES session variable for user A shows "authenticated-role;BIConsumer;AuthenticatedUser". For all other users (those who are authenticated using the clear text passwords in the external table) the ROLES variable is populated correctly, based on the values in the ROLES column in the external table. In short, the authorization init block properly assigns the ROLES session variable only for those users that were authenticated by the authentication init block, but assigns all other session variables correctly for all users, even the one in WLS.
    Here's my authentication init block code:
    select bi_user
    from bi_auth_ldap
    where bi_user = ':USER'
    and bi_user_pwd = ':PASSWORD'
    Here's the authorization init block code:
    select roles, bi_user_name, to_number(loglevel,0), channel_tbl
    from bi_auth_ldap
    where bi_user = ':USER'
    (returned results are assigned into ROLES, DISPLAYNAME, LOGLEVEL, and CHANNEL_TBL session variables, respectively)
    It feels like the ROLES session variable is populated in conjunction with the user logging on and being authenticated via WLS, and that the initialization block isn't able to overwrite that variable. Can an OBIEE developer confirm that for us, please? Once set in WLS, is it not possible to overwrite the ROLES session variable with SQL from an initialization block? If it IS possible, can you post some code that will accomplish it?
    Thanks!

    It occurs to me that Oracle's support model is a fantastic way to make money. Let's see, I wonder if I could become a billionaire doing this:
    Create some software. Sell that software. Then, charge customers several thousand MORE dollars, year after year, plus about $60 per bug, so that they have the right to report MY bugs to me. Yeah, that's the ticket - people PAYING for the right to report bugs to me. Oh, and if more than one person reports the same bug, I get to keep ALL of the money from ALL of them.
    Let's summarize, make sure I haven't missed something: You buy my software, you PAY ME additionally to report MY bugs to me, I don't necessarily have to fix the bugs (but I keep your money whether I fix it or not), and I can collect multiple times from different people who report the same bug.
    Sweeeeeeet.........
    Billionaire Acres, here I come!

  • Need help in parsing a VARCHAR2(4000 BYTES) field

    Hi Guys,
    Let me give the DB information first:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    My problem is: The frontend of the application sends a large string into the VARCHAR2 (4000 BYTES) field. The string it sends is somewhat like this -
    <strong>Crit:</strong> Some text. <br><strong>Distinguished</strong> (points 3):Some comment text1. <blockquote> </blockquote><strong>Crit:</strong> Some other text.<br><strong>Distinguished</strong> (points 3):Some comment text2. <blockquote> </blockquote><strong>Crit:</strong> Some more text. <br><strong>Distinguished</strong> (points 3):Some comment text3. <blockquote> </blockquote><strong>Overall comments:</strong><br> Final text!!
    I want to parse the text and put it into separate columns. The number of Crit: entries can be more than 3 (it is 3 above), but the basic structure is the same. What is the best way to parse this in PL/SQL? I want something like:
    Table 1
    Crit             Points   Comment
    Some text        3        Some comment text1.
    Some other text  3        Some comment text2.
    Some more text   3        Some comment text3.
    Table 2
    Overall comments
    Final text!!
    Please let me know if you need further information.
    Thanks.

    Welcome to the forum.
    Looks like the noformat tags are not working. Please use the {noformat}{noformat} tags if you want to post formatted examples/code.
    For example, when you type:
    {noformat}select *
    from dual;
    {noformat}
    it will appear as:
    select *
    from dual;
    when you post it on this forum...
    The FAQ will explain the other options you have: http://forums.oracle.com/forums/help.jspa
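    As for the parsing question itself, here is a minimal 10g-compatible sketch that walks the string with INSTR/SUBSTR, assuming the <strong>Crit:</strong> marker reliably starts each block (splitting each chunk further into points and comment follows the same pattern):
    DECLARE
      v_txt  VARCHAR2(4000) := '...';  -- the raw frontend string goes here
      c_tag  CONSTANT VARCHAR2(30) := '<strong>Crit:</strong>';
      v_pos  PLS_INTEGER := 1;
      v_next PLS_INTEGER;
    BEGIN
      LOOP
        v_pos := INSTR(v_txt, c_tag, v_pos);
        EXIT WHEN v_pos = 0;
        v_next := INSTR(v_txt, c_tag, v_pos + 1);
        -- one Crit/points/comment chunk; split it further on '(points ' and '):'
        DBMS_OUTPUT.put_line(
          CASE WHEN v_next = 0 THEN SUBSTR(v_txt, v_pos)
               ELSE SUBSTR(v_txt, v_pos, v_next - v_pos) END);
        v_pos := v_pos + LENGTH(c_tag);
      END LOOP;
    END;
    /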

  • ORA-01461 Error when mapping table with multiple varchar2(4000) fields

    (Note: I think this was an earlier problem, supposed fixed in 11.0, but we are experiencing in 11.7)
    If I map an Oracle 9i table with multiple varchar2(4000) columns, targeting another Oracle 9i database, I get the ORA-01461 error ("can bind a LONG value only for insert into a LONG column").
    I have tried changing the target columns to varchar2(1000), as suggested as a workaround in earlier versions, all to no avail.
    I can have just one varchar2(4000) map correctly and execute flawlessly - the problem occurs when I add a second one.
    I have tried making the target column a LONG, but that does not solve the problem.
    Then, I made the target database SQL Server, and it had no problem at all, so the issue seems to be Oracle-related.

    Hi Jon,
    Thanks for the feedback. I'm unable to reproduce the problem you describe at the moment - if I try to migrate a TEXT(5), OMWB creates a VARCHAR(5) and the data migrates correctly!! However, I note from your description that even though the problematic source column datatype is TEXT(5), there are actually 20 lines of text in this field (and not 5 variable-length characters as the definition might suggest).
    Having read through some of the MySQL reference guide, I note that, in certain circumstances, MySQL actually changes the column datatype specified, either at table creation time or when interfacing with other databases (ref. 14.2.5.1 Silent Column Specification Changes and 12.7 Using Column Types from Other Database Engines in the MySQL reference guide). Since your TEXT(5) actually contains 20 lines of text, MySQL (database or JDBC driver, or both) may be automatically mapping the specified datatype of the column to a datatype more appropriate to storing 20 lines of text, that is, to a LONG value in this case. Then, when Oracle is presented with this LONG value to store in a VARCHAR(5) field, it throws the ORA-01461 error. I need to investigate this further, but this may be the case - it's the first time I've seen this problem encountered.
    To work around this, you could change the datatype of the column to a LONG from within the Oracle Model before migrating. Any application code that accesses this column and expects a TEXT(5) value may need to be adjusted to cope with a LONG value. Is this a viable workaround for you?
    I will investigate further and notify you of any details I uncover. We will need to track this issue for possible inclusion in future development plans.
    I hope this helps,
    Regards,
    Tom.

  • Modifying field length for an external table via OWB

    Hi folks,
    First, I would like to thank everyone who helps improve the quality of this forum with their questions and answers!
    I am having a problem with flat files and external tables:
    I can't specify the field length in the ORACLE_LOADER parameters via OWB, so the driver assumes all my fields are CHAR(255). According to the online documentation for Oracle Utilities (part no. A96652-01, page 12-21), when the datatype or length of a field is not specified, the driver takes the field names and order from the fields defined on the external table and makes each of them CHAR(255). My problem is that we need a field bigger than CHAR(255), so I made this external table as an example of our problem:
    CREATE TABLE "TESTE"
    "NOME" VARCHAR2(300),
    "SOBRENOME" VARCHAR2(300))
    ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY TARGET_LOC_SRC_FILES_LOC
    ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET WE8MSWIN1252
    STRING SIZES ARE IN BYTES
    NOBADFILE
    NODISCARDFILE
    NOLOGFILE
    FIELDS
    TERMINATED BY ';'
    NOTRIM
    "NOME" ,
    "SOBRENOME"
    ) LOCATION (
    TARGET_LOC_SRC_FILES_LOC:'Partners.txt'
    REJECT LIMIT UNLIMITED
    NOPARALLEL
    When we put the field lengths into this script by hand, it works fine!
    Is there a way to set the length of the fields in the FIELDS clause of the SQL*Loader control file using OWB?

    Carlos,
    Have you tried the following:
    - Update the data types in the sampled flat file (i.e. go into the field definitions and edit the length).
    - Go into the mapping and inbound reconcile the table (right-mouse click the file operator and select inbound reconcile).
    - Regenerate, and review the result.
    Thanks,
    Mark.

  • Creating external table - from a file with multiple field separators

    I need to create an external table from a flat file containing multiple field separators (",", ";" and "|").
    Is there any way to specify this in the CREATE TABLE (external) statement?
    FIELDS TERMINATED BY "," -- Somehow list more than just comma here?
    We receive the file from a vendor every week. I am trying to set up a process for some non-technical users, and I want to keep this process transparent to them and not require them to load the data into Oracle.
    I'd appreciate your help!

    scott@ORA92> CREATE OR REPLACE DIRECTORY my_dir AS 'c:\oracle'
      2  /
    Directory created.
    scott@ORA92> CREATE TABLE external_table
      2    (COL1 NUMBER,
      3       COL2 VARCHAR2(6),
      4       COL3 VARCHAR2(6),
      5       COL4 VARCHAR2(6),
      6       COL5 VARCHAR2(6))
      7  ORGANIZATION external
      8    (TYPE oracle_loader
      9       DEFAULT DIRECTORY my_dir
    10    ACCESS PARAMETERS
    11    (FIELDS
    12         (COL1 CHAR(255)
    13            TERMINATED BY "|",
    14          COL2 CHAR(255)
    15            TERMINATED BY ",",
    16          COL3 CHAR(255)
    17            TERMINATED BY ";",
    18          COL4 CHAR(255)
    19            TERMINATED BY ",",
    20          COL5 CHAR(255)
    21            TERMINATED BY ","))
    22    location ('flat_file.txt'))
    23  /
    Table created.
    scott@ORA92> select * from external_table
      2  /
          COL1 COL2   COL3   COL4   COL5
             1 Field1 Field2 Field3 Field4
             2 Field1 Field2 Field3 Field4
    scott@ORA92>

  • Forms - Input field larger than VARCHAR2(4000)?

    Maybe I'm confused, but it doesn't seem possible to create a form with a field larger than VARCHAR2(4000) with the builder - if you choose CLOB or LONG as the type of a field in the database object you want to use with your form, the Form Builder doesn't display that field.
    So how do I make an input field that will accept a large amount of character data?

    It's a limitation; it was not fixed until release 10.2.0.x.
    Quoting readme.txt:
    * Binding more than 8000 bytes data to a table containing LONG
    columns in one call of PreparedStatement.executeUpdate() may
    result in an ORA-22295 error.

  • When will external table field_list in access parameters support varchar2?

    Hello all,
    Currently I'm working on a CSV error check/upload/download utility (in PL/SQL) using external tables. However, within the access parameters section, when I tried to use varchar2 in the field list to map to data fields in the CSV, Oracle (10g) complained that I cannot use varchar2.
    Therefore I am forced to use char (Feuerstein explicitly told us not to use varchar in PL/SQL Programming) to map to the data. However, since some fields in the DB are actually varchar2(4000), and the max length supported by the char type in SQL is 2000, I'm creating this ridiculous machine:
    - data in the CSV can have a max length of 4000 characters
    - the staging table the CSV data goes to is varchar2(4000)
    - the field list in the external table creates a bottleneck of char(2000)
    As a result, I'm betting that the user will never enter a string longer than 2000 characters into the CSV, even though the DB supports more. However, this would be way too fragile to go into production.
    Does anybody know if Oracle 11g external tables support varchar2, or are there any workarounds (besides using sqlldr)? Many thanks.

    Hi, my external table spec is something like this:
    create table x_table (
      field1 varchar2(4000),
      field2 varchar2(4000),
      field3 varchar2(4000))
    organization external
      (type oracle_loader
       default directory ext_tab_dir
       access parameters (
         records delimited by newline
         badfile ext_tab_dir:'importcsv.bad'
         discardfile ext_tab_dir:'discarded.txt'
         logfile ext_tab_dir:'importcsv.log'
         skip 1
         fields terminated by '|' lrtrim
         missing field values are null
         reject rows with all null fields
         (field1 char(2000),
          field2 char(2000),
          field3 char(2000)))
       location ('test_xtable.csv'))
    reject limit unlimited;
    The portion below the phrase "reject rows with all null fields" is what I meant by "field mapping", since I use it to match the csv structure. The order within the fields specification above the "organization external" can be rearranged any way I like, as long as the field name has a matching one in the "field mapping". Right now, I can only use char(2000) or varchar(2000) in the "field mapping" area, but not varchar2. Any ideas?
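    One thing worth testing (a sketch, not from this thread): the CHAR(n) in the access-parameter field list is an ORACLE_LOADER field type, not the SQL CHAR type, and the davep2 example at the top of this page uses char(4000) there successfully. The "field mapping" portion would then read:
    (field1 char(4000),
     field2 char(4000),
     field3 char(4000))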

  • Difference between varchar2(4000 byte) & varchar2(4000 char)

    Hi,
    My existing database NLS parameters as follows
    CHARACTER SET US7ASCII
    NATIONAL CHARACTER SET AL16UTF16
    created a test database to support globalization, I changed as follows
    CHARACTER SET UTF8
    NATIONAL CHARACTER SET UTF8
    some of the table column datatypes are varchar2(4000)
    I would like to know what is difference between VARCHAR2(4000 BYTE) and VARCHAR2(4000 CHAR).
    Thanks

    Indeed, VARCHAR2(x BYTE) means that the column will hold as many characters as will fit into x bytes. Depending on the character set and the particular characters, this may be x or fewer characters.
    For example, a VARCHAR2(20 BYTE) column in an AL32UTF8 database can hold 20 characters from the ASCII range, 10 Latin letters with umlaut, 10 Cyrillic, 10 Hebrew, or 10 Arabic letters (2 bytes per character), or 6 Chinese, Japanese, Korean, or Devanagari (Indic) characters. Or a mixture of these characters of any total length up to 20 bytes.
    VARCHAR2(x CHAR) means that the column will hold x characters but not more than can fit into 4000 bytes. Internally, Oracle will set the byte length of the column (DBA_TAB_COLUMNS.DATA_LENGTH) to MIN(x * mchw, 4000), where mchw is the maximum byte width of a character in the database character set. This is 1 for US7ASCII or WE8MSWIN1252, 2 for JA16SJIS, 3 for UTF8, and 4 for AL32UTF8.
    For example, a VARCHAR2(3000 CHAR) column in an AL32UTF8 database will be internally defined as having the width of 4000 bytes. It will hold up to 3000 characters from the ASCII range (the character limit), but only 1333 Chinese characters (the byte limit, 1333 * 3 bytes = 3999 bytes). A VARCHAR2(100 CHAR) column in an AL32UTF8 database will be internally defined as having the width of 400 bytes. It will hold any 100 Unicode characters.
    The above implies that the CHAR limit works optimally if it is lower than 4000/mchw. With such restriction, the CHAR limit guarantees that the defined number of characters will fit into the column. Because the widest character in any Oracle character set has 4 bytes, if x <= 1000, VARCHAR2(x CHAR) is guaranteed to hold up to x characters in any database character set.
    The declaration VARCHAR2(x):
    - for objects defined in SYS schema means VARCHAR2(x BYTE),
    - for objects defined in other schemas it means VARCHAR2(x BYTE) or VARCHAR2(x CHAR), depending on the value of the NLS_LENGTH_SEMANTICS parameter of the session using the declaration (see the NLS_SESSION_PARAMETERS view).
    After an object is defined, its BYTE vs CHAR semantics is stored in the data dictionary and it does not depend on the NLS_LENGTH_SEMANTICS any longer. Even Export/Import will not change this.
    Character length semantics rules are valid for table columns and for PL/SQL variables.
    -- Sergiusz
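    A quick way to see both limits at work (a sketch, assuming an AL32UTF8 test database; the table and column names are illustrative):
    CREATE TABLE len_demo (
      b_col VARCHAR2(10 BYTE),
      c_col VARCHAR2(10 CHAR)
    );
    -- 10 ASCII characters fit in either column
    INSERT INTO len_demo VALUES ('abcdefghij', 'abcdefghij');
    -- 10 copies of e-acute (2 bytes each in AL32UTF8, 20 bytes total)
    -- fit in c_col but would overflow b_col's 10-byte limit
    INSERT INTO len_demo (c_col)
      VALUES (UNISTR('\00E9\00E9\00E9\00E9\00E9\00E9\00E9\00E9\00E9\00E9'));
    -- CHAR_USED is 'B' for BYTE and 'C' for CHAR semantics;
    -- DATA_LENGTH for c_col is 40 (10 characters * 4-byte maximum)
    SELECT column_name, data_length, char_length, char_used
      FROM user_tab_columns
     WHERE table_name = 'LEN_DEMO';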

  • External Table - possible bug related to record size and total bytes in fil

    I have an External Table defined with a fixed record size, using Oracle 10.2.0.2.0 on HP/UX. At 279-byte records (1 or more fields, doesn't seem to matter), it can read almost 5M bytes in the file (17,421 records to be exact). At 280-byte records it cannot; it blows up with "partial record at end of file" - which is nonsense. It can read up to 3744 records, just below 1,048,320 bytes (1M bytes). One record over that, it blows up.
    Now, if I add READSIZE and set it to 1.5M, then it works. I found this extends further; for instance, 280 recsize with READSIZE 1.5M will work for a while but blows up at 39M bytes in the file (I didn't bother figuring out exactly where it stops working in this case). Increasing READSIZE to 5M works again, for 78M bytes in the file. But change the definition to 560-byte records and it blows up. Decrease the file size to 39M bytes and it still won't work with 560-byte records.
    Anyone have any explanation for this behavior? The docs say READSIZE is the read buffer, but only mention that it matters for the largest record that can be processed - mine are only 280/560 bytes. My table definition is practically taken right out of the example in the docs for fixed-length records (change the fields, sizes, and names, and it is identical - all clauses the same).
    We are going to be using these external tables a lot, and need them to be reliable, so increasing READSIZE to the largest value I can doesn't make me comfortable, since I can't be sure in production how large an input file may become.
    Should I report this as a bug to Oracle, or am I missing something?
    Thanks,
    Bob
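
    For anyone hitting the same wall, this is where READSIZE sits in the record clause (a sketch patterned on the fixed-length example in the Utilities guide; the directory, sizes, and names are illustrative):
    CREATE TABLE fixed_ext (
      payload VARCHAR2(279)
    )
    ORGANIZATION EXTERNAL
      ( TYPE ORACLE_LOADER
        DEFAULT DIRECTORY ext_dir   -- hypothetical directory object
        ACCESS PARAMETERS
          ( RECORDS FIXED 280 READSIZE 5242880   -- 5 MB read buffer
            FIELDS ( payload CHAR(279) ) )       -- 279 data bytes + newline = 280
        LOCATION ('fixed.dat') )
    REJECT LIMIT UNLIMITED;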


  • How not to consider a missing field for external tables

    My Oracle version is 10gR2.
    I've created an external table using this syntax:
    create table ext_table
     (a number(5),
      b number(5),
      c varchar2(1000))
    organization external
     (type ORACLE_LOADER
      default directory FLAISTD
      access parameters
        (records delimited by newline
         fields terminated by "#"
         (a char(5),
          b char(5),
          c char(1000)))
      location ('file.csv'));
    My problem is this: I've got a file.XLS that I save as file.CSV. Sometimes a row of the file.XLS is missing the last column, so in my file.CSV I can have something like this:
    123#123#xxx
    456#456
    and when I try to perform a select * from ext_table I get an error because it expects the missing field.
    What can I do? Can I "say" something in the CREATE TABLE above to warn that the last field might be missing?
    Thanks in advance!

    Solomon Yakobson wrote:
    Use TRAILING NULLCOLS.
    Oops, it is an external table, not SQL*Loader. So it should be MISSING FIELD VALUES ARE NULL:
    create table ext_table
     (a number(5),
      b number(5),
      c varchar2(1000))
    organization external
     (type ORACLE_LOADER
      default directory TEMP
      access parameters
        (records delimited by newline
         fields terminated by "#" missing field values are null
         (a char(5),
          b char(5),
          c char(1000)))
      location ('file.csv'));
    Table created.
    SQL> select  *
      2    from  ext_table
      3  /
             A          B C
           123        123 xxx
           456        456
    SQL>
    SY.
