SQL*Loader Index Maintenance

Hi all,
we are loading data with SQL*Loader 9.2.0.3 into an 11.2.0.3 database using direct path load. During the load some indexes are in state UNUSABLE. Actually I thought that SQL*Loader would also maintain indexes when loading with direct path, but the state in dba_indexes is UNUSABLE. Is that the expected behavior? I couldn't find anything like that in the documentation, just that it could happen after a successful/failed load (unique constraint violated etc.)...
Thanks in advance

No, no referential constraints on that table... but it's not the expected behavior that Oracle sets the index to unusable and "rebuilds" it at the end of a load, is it?
Thanks
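
For reference, unusable indexes can be found and rebuilt like this (a minimal sketch; the table and index names are illustrative):

SQL> SELECT index_name, status
  2  FROM   dba_indexes
  3  WHERE  table_name = 'MY_LOAD_TABLE'
  4  AND    status = 'UNUSABLE';

SQL> ALTER INDEX my_load_idx REBUILD;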

Similar Messages

  • Sql-loader and indexes

    Hi
    I'm working with SQL*Loader in Oracle8. I'm loading a table with one index. Can I have any problem with that index, such as an unusable index or something similar?
    Thanks in advance

    If an index is in a direct load state, it usually means that a direct path SQL*Loader (sqlldr) run was performed against the underlying table,
    and the indexes were defined at the time of the load.
    If the indexes were left in a direct load state, it means the direct path load failed while re-creating/merging the indexes.
    Any attempt to use such an index will result in ORA-01502:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96525/e1500.htm#1000531

  • SQL Loader direct path loads and unusable indexes

    Sorry about all the questions. I am researching several issues and reading the Utilities document. It says that in certain circumstances indexes will become unusable. I have some questions about my scenario.
    1. tables partitioned by range
    2. local indexes
    3. all tables have 1 primary key and the other indexes are non-unique
    4. all SQL*Loader loads will go into the most recent partition
    5. users will be querying the tables while SQL*Loader is running
    6. only one SQL*Loader session will run per table
    7. no foreign keys, triggers, or other constraints other than primary keys
    The docs are not clear. Do I have a concern about unusable indexes with direct path loads? Will the indexes function while the SQL*Loader direct path load is occurring? (I can't test this yet since I only have small data files that load fast, but I will have larger ones in production.)
    My understanding is that an external table load using INSERT /*+ APPEND */ is exactly the same as a SQL*Loader direct path load. Is this true?
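
    For reference, in this kind of partitioned scenario the partition-level index status can be checked and repaired like this (a minimal sketch; index and partition names are illustrative):

    SQL> SELECT index_name, partition_name, status
      2  FROM   user_ind_partitions
      3  WHERE  status = 'UNUSABLE';

    SQL> ALTER INDEX my_local_idx REBUILD PARTITION p_latest;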

    If you don't have anything productive to say, how about you don't post at all? You have made ignorant posts like this for years.
    As far as reading the docs, what do you think "the docs are not clear" means? By the docs I am referring to the Utilities document.
    As for the version number, it's 10.2 and I forgot to include that. However, it does not appear that SQL*Loader has really changed all that much over the last few versions.
    Finally, I plan on testing it out, and it's more than a 2 minute test. I wanted to make sure I don't miss anything in my tests.
    Don't respond to any threads or posts I make from now on.

  • SQL Loader, direct path and indexes on partition tables

    Hi,
    I have a big partitioned table and I have two indexes on it.
    When I use SQL*Loader to upload data to my table using OPTIONS (DIRECT=TRUE) UNRECOVERABLE, it takes almost half an hour to load just one record!!
    When I remove OPTIONS (DIRECT=TRUE), it takes just two seconds to upload the same one record.
    Am I missing anything? Can I use direct path load on indexed partitioned tables and have a reasonable load time?
    A scheduled external job loads almost 100,000 records into this table every hour and I am trying to make the SQL*Loader run as fast as possible.
    Any help would be appreciated,
    Alan

    Hi Alan,
    How big is this table and what sort of indexes are they?
    An index update using SQL*Loader unrecoverable direct path load is achieved by an isolated sort followed by a nologging merge of the old index and the new mini-index into a new index segment (this according to seminar notes by Jonathan Lewis). This will conceivably take a long time for a large table / large index.
    Performance improvements? Are you loading all records into a new partition?
    Cheers,
    Colin
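
    If every load goes into a known partition, one option (a sketch, assuming the partition name is known up front; table, partition, and column names are illustrative) is to name that partition in the control file, so only its local index segments need maintenance:

    LOAD DATA
    INFILE 'hourly_feed.dat'
    APPEND
    INTO TABLE big_part_table PARTITION (p_current)
    FIELDS TERMINATED BY ','
    (col1, col2, col3)

    Note that any global indexes on the table may still be marked unusable by a partition-level direct load.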

  • Index is unusable after using Direct path of SQL Loader

    When using the direct path of SQL*Loader to load records from a comma (,) separated file into a table, the index that I tell SQL*Loader the records are sorted on (a multi-column index, named in the SORTED INDEXES clause) becomes unusable. In the log file created by SQL*Loader the error "ORA-01409: NOSORT option may not be used; rows are not in ascending order" occurs. I am pre-sorting the data in the order of the index. The index and table are not empty prior to initiating SQL*Loader and contain existing records.
    Can anybody suggest a reason why this is happening even though I am pre-sorting the data in ascending order as per the index?
    Brad.

    Thanks for the reply, Gerald!
    After further investigation I kind of answered my own question: the data I was loading was not in the order Oracle was expecting. The problem stems from how the UNIX SORT command works compared to how data is ordered in Oracle by ORDER BY. I was using UNIX's SORT command to pre-sort my data before loading it directly into Oracle.
    UNIX's SORT orders NULLs (empty fields) before anything else in ascending order, whereas Oracle's ORDER BY orders NULL values last in ascending order!
    I got around this by putting the highest printing ASCII character into my pre-sorted data whenever a field was NULL. The highest printing ASCII character is tilde (~). As UNIX's SORT uses the ASCII collating sequence to determine the sort order, the post-sorted data was sorted as Oracle expected. And to avoid actually loading "~" into my Oracle table, I just specified NULLIF (field_name = "~") for these fields in the SQL*Loader control file.
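
    A minimal sketch of that workaround end to end (file, field, and index names are illustrative): substitute '~' for empty fields, sort, then map them back to NULL in the control file:

    $ sort -t, -k1,1 -k2,2 data_with_tildes.csv > sorted_data.csv

    LOAD DATA
    INFILE 'sorted_data.csv'
    APPEND
    INTO TABLE my_table
    SORTED INDEXES (my_multi_col_idx)
    FIELDS TERMINATED BY ','
    (field1 CHAR NULLIF (field1 = '~'),
     field2 CHAR NULLIF (field2 = '~'))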

  • Sql loader direct path leaving indexes unusable

    10.2.0.3
    redhat 4.5
    temp tablespace 6.5 GB
    datafile size 33k
    I thought that in version 10 you can do direct path loads without indexes going unusable?
    I saw the following, but we have plenty of temp space.
    http://dbasolve.com/OracleError/ora/ORA-01502.txt
    Oracle Error:
    ORA-01502 Oracle Index in Unusable State
    Cause and Solution:
    When trying to perform a query on Oracle tables with a SELECT statement, Oracle returns this error.
    The error indicates an attempt has been made to access an index or index partition that has been marked unusable by a direct load or by a DDL operation.
    The problem usually happens when using the direct path with SQL*Loader, direct load, or DDL operations. These require enough temporary space to build all indexes of the table. If there is not enough space in the TEMP tablespace, all rows will still be loaded and imported, but the indexes are left with STATUS = 'UNUSABLE'.
    Unusable indexes can be checked with the SQL statement below.
    SQL> SELECT * FROM user_indexes WHERE status = 'UNUSABLE';
    The solution to this error is simple. You can:
    1. Drop and/or recreate the specified index
    2. Rebuild the specified index
    3. Rebuild the unusable index partition
    Generally, the following SQL statement will rebuild the unusable index:
    SQL> ALTER INDEX index_name REBUILD ONLINE;
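
    To rebuild everything that a load left unusable in one go, a small PL/SQL loop works (a sketch; it assumes the rebuilds fit in your maintenance window):

    SQL> BEGIN
      2    FOR r IN (SELECT index_name FROM user_indexes WHERE status = 'UNUSABLE') LOOP
      3      EXECUTE IMMEDIATE 'ALTER INDEX ' || r.index_name || ' REBUILD';
      4    END LOOP;
      5  END;
      6  /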


  • Problem specifying SQL Loader Log file destination using EM

    Good evening,
    I am following the example given in the 2 Day DBA document chapter 8 section 16.
    In step 5 of 7, EM does not allow me to specify the destination of the SQL Loader log file to be on a mapped network drive.
    The question: Does SQL Loader have a limitation that I am not aware of, that prevents placing the log file on a network share or am I getting this error because of something else I am inadvertently doing wrong ?
    Note: I have placed the DDL, load file data and the steps I follow in EM at the bottom of this post to facilitate reproducing the problem (drive Z is a mapped drive).
    Thank you for your help,
    John.
    DDL (generated using SQL developer, you may want to change the space allocated to be less)
    CREATE TABLE "NICK"."PURCHASE_ORDERS"
        "PO_NUMBER"      NUMBER NOT NULL ENABLE,
        "PO_DESCRIPTION" VARCHAR2(200 BYTE),
        "PO_DATE" DATE NOT NULL ENABLE,
        "PO_VENDOR" NUMBER NOT NULL ENABLE,
        "PO_DATE_RECEIVED" DATE,
        PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 67108864
      TABLESPACE "USERS" ;
    Load.dat file contents
    1, Office Equipment, 25-MAY-2006, 1201, 13-JUN-2006
    2, Computer System, 18-JUN-2006, 1201, 27-JUN-2006
    3, Travel Expense, 26-JUN-2006, 1340, 11-JUL-2006
    Steps I am carrying out in EM
    log in, select data movement -> Load Data from User Files
    Automatically generate control file
    (enter host credentials that work on your machine)
    continue
    Step 1 of 7 ->
      Data file is located on your browser machine
      "Z:\Documentation\Oracle\2DayDBA\Scripts\Load.dat"
       click next
    step 2 of 7 ->
      Table Name
      nick.purchase_orders
      click next
    step 3 of 7 ->
      click next
    step 4 of 7 ->
      click next
    step 5 of 7 ->
      Generate log file where logging information is to be stored
      Z:\Documentation\Oracle\2DayDBA\Scripts\Load.LOG
      Validation Error
      Examine and correct the following errors, then retry the operation:
      LogFile - The directory does not exist.

    Hi John,
    But I didn't find any error when I did the same as what you did.
    My Oracle version is 10.2.0.1 on Windows XP. Here is what I did, and it worked:
    1.I created one table in scott schema :
    SCOTT@orcl> CREATE TABLE "PURCHASE_ORDERS"
      2  (
      3      "PO_NUMBER"      NUMBER NOT NULL ENABLE,
      4      "PO_DESCRIPTION" VARCHAR2(200 BYTE),
      5      "PO_DATE" DATE NOT NULL ENABLE,
      6      "PO_VENDOR" NUMBER NOT NULL ENABLE,
      7      "PO_DATE_RECEIVED" DATE,
      8      PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      9  )
    10  TABLESPACE "USERS";
    Table created.
    I logged into EM: Maintenance --> Data Movement --> Load Data from User Files --> My Host Credentials
    Here I got a total of 3 text boxes:
    1. Server Data File: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
    2. Data File is Located on Your Browser Machine: z:\load.dat <--- here z:\ is another machine's shared doc folder; I selected this option (radio button) and created the same load.dat as you mentioned.
    3. Temporary File Location: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ <--- I didn't enter anything.
    Step 2 of 7 Table Name : scott.PURCHASE_ORDERS
    Step 3 of 7 I just clicked Next
    Step 4 of 7 I just clicked Next
    Step 5 of 7 I just clicked Next
    Step 6 of 7 I just clicked Next
    Step 7 of 7 Here it is Control File Contents:
    LOAD DATA
    APPEND
    INTO TABLE scott.PURCHASE_ORDERS
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (
    PO_NUMBER INTEGER EXTERNAL,
    PO_DESCRIPTION CHAR,
    PO_DATE DATE,
    PO_VENDOR INTEGER EXTERNAL,
    PO_DATE_RECEIVED DATE
    )
    And I just clicked Submit Job.
    Now I got all 3 rows in purchase_orders:
    SCOTT@orcl> select count(*) from purchase_orders;
      COUNT(*)
    ----------
             3
    So, there is no bug; it worked. Please retry if you get any error/issue.
    HTH
    Girish Sharma

  • Comparison of Data Loading techniques - Sql Loader & External Tables

    Below are 2 techniques for loading data from flat files into Oracle tables.
    1)     SQL*Loader:
    a.     Place the flat file (.txt or .csv) in the desired location.
    b.     Create a control file, e.g.:
    LOAD DATA
    INFILE 'Mytextfile.txt' -- file containing the table data; specify the path correctly; it could be a .csv as well
    APPEND -- or TRUNCATE, based on requirement
    INTO TABLE tablename
    FIELDS TERMINATED BY ',' -- or whatever delimiter the input file uses
    OPTIONALLY ENCLOSED BY '"'
    (field1, field2, field3)
    c.     Now run the sqlldr utility of Oracle from the OS command prompt:
    sqlldr username/password control=filename.ctl
    d.     The data can be verified by selecting it from the table:
    SELECT * FROM oracle_table;
    2)     External Table:
    a.     Place the flat file (.txt or .csv) on the desired location.
    abc.csv
    1,one,first
    2,two,second
    3,three,third
    4,four,fourth
    b.     Create a directory
    create or replace directory ext_dir as '/home/rene/ext_dir'; -- path where the source file is kept
    c.     After granting appropriate permissions to the user, we can create external table like below.
    create table ext_table_csv (
      i Number,
      n Varchar2(20),
      m Varchar2(20)
    )
    organization external (
      type oracle_loader
      default directory ext_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
        missing field values are null
      )
      location ('abc.csv')
    )
    reject limit unlimited;
    d.     Verify data by selecting it from the external table now
    select * from ext_table_csv;
    The external tables feature is a complement to the existing SQL*Loader functionality.
    It allows you to:
    •     Access data in external sources as if it were in a table in the database.
    •     Merge a flat file with an existing table in one statement.
    •     Sort a flat file on the way into a table you want compressed nicely.
    •     Do a parallel direct path load without splitting up the input file or writing extra scripts.
    Shortcomings:
    •     External tables are read-only.
    •     No data manipulation language (DML) operations or index creation is allowed on an external table.
    Further advantages of external tables over sqlldr, since the load is just SQL:
    •     Load the data from a stored procedure or trigger (an INSERT statement can run anywhere SQL does; sqlldr is a client utility)
    •     Do multi-table inserts
    •     Flow the data through a pipelined PL/SQL function for cleansing/transformation
    Comparison for data loading
    To make the loading operation faster, the degree of parallelism can be set to any number, e.g. 4.
    So, when you create the external table with a parallel degree of 4, the database will divide the file to be read among four processes running in parallel. This parallelism happens automatically, with no additional effort on your part, and is really quite convenient; see the sketch below. To parallelize the same load using SQL*Loader, you would have had to manually divide your input file into multiple smaller files.
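
    A minimal sketch of such a parallel load (assuming the ext_table_csv table above; target_table is illustrative):

    ALTER TABLE ext_table_csv PARALLEL 4;
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM ext_table_csv;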
    Conclusion:
    SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. However, we can always copy the data from external tables into regular Oracle tables, even over DB links.

    Please let me know your views on this.

  • SQL*Loader and integrity constraints

    I am running into trouble with integrity constraints in my SQL*Loader script runs. Should I go up to the Unix box and remove the constraints from SQL*Plus, then ask the wizard to reinstall them after the data load? Is this documented somewhere? I can't find it. Sorry for asking such a basic question.
    Thanks,
    Ann Cantelow

    Hi,
    Your approach is correct.
    Create tables first
    Move data via the scripts
    Create constraints, indexes, primary keys
    See the User Guide for the Oracle Migration Workbench: http://otn.oracle.com/tech/migration/workbench
    Regards
    John
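
    A minimal sketch of the disable/load/enable cycle for a single constraint (table and constraint names are illustrative; the EXCEPTIONS table comes from utlexcpt.sql):

    ALTER TABLE orders DISABLE CONSTRAINT fk_orders_customer;

    -- run the SQL*Loader job here

    ALTER TABLE orders ENABLE CONSTRAINT fk_orders_customer
      EXCEPTIONS INTO exceptions;  -- rows violating the constraint are listed here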

  • SQL*Loader : ORA-19007 with missing Schema Location Hint

    Hi,
    These days I'm stuck with a 10.1.0.4 database (OS Linux Red Hat), and I'm trying to find a workaround for the following situation :
    XML schema (sfgtest.xsd)
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:xdb="http://xmlns.oracle.com/xdb">
      <xs:element name="Code" xdb:SQLName="Code" xdb:SQLType="VARCHAR2">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:maxLength value="16"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Val" xdb:SQLType="NUMBER" xdb:SQLName="Val">
        <xs:simpleType>
          <xs:restriction base="xs:decimal">
            <xs:totalDigits value="12"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Chq">
        <xs:complexType xdb:SQLType="SFG_CHQ_TYPE">
          <xs:sequence>
            <xs:element ref="Code"/>
            <xs:element ref="Val"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="NbRem" xdb:SQLType="NUMBER" xdb:SQLName="NbRem">
        <xs:simpleType>
          <xs:restriction base="xs:integer">
            <xs:totalDigits value="5"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Rmbt" xdb:defaultTable="TEST_XML_SFG">
        <xs:complexType xdb:SQLType="SFG_RMBT_TYPE">
          <xs:sequence>
            <xs:element ref="NbRem"/>
            <xs:element ref="Rem" maxOccurs="unbounded" xdb:SQLCollType="SFG_REM_COLL"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="CodeAff" xdb:SQLName="CodeAff" xdb:SQLType="VARCHAR2">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:maxLength value="12"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Rem" xdb:SQLName="Rem">
        <xs:complexType xdb:SQLType="SFG_REM_TYPE">
          <xs:sequence>
            <xs:element ref="CodeAff"/>
            <xs:element ref="Chq" maxOccurs="unbounded" xdb:SQLCollType="SFG_CHQ_COLL"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    Sample document (sfgtest.xml)
    <?xml version="1.0" encoding="iso-8859-1"?>
    <Rmbt>
    <NbRem>3</NbRem>
    <Rem>
      <CodeAff>AFF001</CodeAff>
      <Chq><Code>X01001</Code><Val>10.00</Val></Chq>
      <Chq><Code>X01002</Code><Val>10.00</Val></Chq>
      <Chq><Code>X01003</Code><Val>10.00</Val></Chq>
    </Rem>
    <Rem>
      <CodeAff>AFF002</CodeAff>
      <Chq><Code>X02001</Code><Val>10.00</Val></Chq>
      <Chq><Code>X02002</Code><Val>10.00</Val></Chq>
      <Chq><Code>X02003</Code><Val>10.00</Val></Chq>
    </Rem>
    <Rem>
      <CodeAff>AFF003</CodeAff>
      <Chq><Code>X03001</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03002</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03003</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03004</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03005</Code><Val>10.00</Val></Chq>
    </Rem>
    </Rmbt>
    Schema registration
    begin
      dbms_xmlschema.registerSchema(
        schemaURL => 'sfgtest.xsd',
        schemaDoc => xmltype(bfilename('DUMP_DIR','sfgtest.xsd'),nls_charset_id('AL32UTF8')),
        local => true,
        genTypes => true,
        genTables => false
      );
    end;
    /
    Table creation
    create table test_xml_sfg of xmltype
    xmltype store as object relational
    xmlschema "sfgtest.xsd"
    element "Rmbt"
    varray xmldata."Rem" store as table test_xml_sfg_rem_tab
    ( ( primary key (NESTED_TABLE_ID, ARRAY_INDEX) ) organization index overflow
      varray "Chq" store as table test_xml_sfg_chq_tab
      ( ( primary key (NESTED_TABLE_ID, ARRAY_INDEX) ) organization index overflow )
    );
    I originally wanted to use SQL*Loader to test performance, as I may have to load multiple files at the same time.
    It works great on 10gR2 and 11gR2 with XML files up to 100 MB loaded in a matter of minutes.
    However, a little issue with 10gR1 :
    SQLLDR control file
    LOAD DATA
    INFILE 'filelist.txt'
    APPEND
    INTO TABLE test_xml_sfg
    XMLTYPE(XMLDATA) (
    filename filler char(260),
    XMLDATA LOBFILE(filename) TERMINATED BY EOF
    )
    with filelist.txt containing:
    sfgtest.xml
    Execution throws "ORA-19007: Schema - does not match expected sfgtest.xsd".
    As per the documentation, it's expected behaviour in 10.1 :
    http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10790/xdb03usg.htm#BABECCBG
    The XML document must include the appropriate attributes from the XMLSchema-instance namespace, or the XML document must be explicitly associated with the XML schema using the XMLType constructor or the createSchemaBasedXML() method.
    Also as expected, the file is loaded OK after adding the following to the root element:
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="sfgtest.xsd"
    But, as I don't have any latitude on the files received (i.e. I can't add the location hint), is there some workaround, when using SQL*Loader, to treat the XML file as an instance document?
    Thanks for any solution/idea.

    Not sure if it will work, but off the top of my head I would make an attempt to create a view (for update) based on the XMLType table and do the insert via the view, matching the incoming XML doc to the schema via
    xmltype(xml_message).createSchemaBasedXML(xml_schema)
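
    A rough sketch of that idea (untested; the staging table raw_xml_stage is hypothetical, and it assumes the schema URL 'sfgtest.xsd' registered above):

    -- load the raw documents into a staging table first, then:
    INSERT INTO test_xml_sfg
    SELECT xmltype(s.doc).createSchemaBasedXML('sfgtest.xsd')
    FROM   raw_xml_stage s;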

  • Sql loader not able to load more than 4.2 billion rows

    Hi,
    I am facing a problem with the SQL*Loader utility.
    The SQL*Loader process stops after loading 342 GB of data, i.e. it seems to load 4.294 billion rows out of 6.9 billion rows, and the loader process completes without throwing any error.
    Is there any limit on the volume of data SQL*Loader can load?
    Also, how can I identify that the SQL*Loader process has not loaded the whole file when it does not throw any error?
    Thanks in Advance.

    It may be a problem with the DB config, not in SQL*Loader...
    check all the parameters. (For what it's worth, 4.294 billion is almost exactly 2^32, so a 32-bit row counter being exhausted somewhere in the tool chain is another possibility.)
    Maximizing SQL*Loader Performance
    SQL*Loader is flexible and offers many options that should be considered to maximize the speed of data loads. These include:
    1. Use Direct Path Loads - The conventional path loader essentially loads the data by using standard insert statements. The direct path loader (direct=true) loads directly into the Oracle data files and creates blocks in Oracle database block format. The fact that SQL is not being issued makes the entire process much less taxing on the database. There are certain cases, however, in which direct path loads cannot be used (clustered tables). To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql must be executed.
    2. Disable Indexes and Constraints. For conventional data loads only, the disabling of indexes and constraints can greatly enhance the performance of SQL*Loader.
    3. Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the number of calls to the database and increase performance. The size of the bind array is specified using the bindsize parameter. The bind array's size is equivalent to the number of rows it contains (rows=) times the maximum length of each row.
    4. Use ROWS=n to Commit Less Frequently. For conventional data loads only, the rows parameter specifies the number of rows per commit. Issuing fewer commits will enhance performance.
    5. Use Parallel Loads. Available with direct path data loads only, this option allows multiple SQL*Loader jobs to execute concurrently.
    $ sqlldr control=first.ctl parallel=true direct=true
    $ sqlldr control=second.ctl parallel=true direct=true
    6. Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing the data. The savings can be tremendous, depending on the type of data and number of rows.
    7. Disable Archiving During Load. While this may not be feasible in certain environments, disabling database archiving can increase performance considerably.
    8. Use unrecoverable. The unrecoverable option (unrecoverable load data) disables the writing of the data to the redo logs. This option is available for direct path loads only.
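
    Putting a few of these together (a sketch; file names and parameter values are illustrative):

    $ sqlldr scott/tiger control=load.ctl direct=true parallel=true
    $ sqlldr scott/tiger control=load.ctl bindsize=20971520 readsize=20971520 rows=10000  # conventional path tuning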

  • SQL*Loader-350: Illegal combination of non-alphanumeric characters

    Hi all,
    How do I skip a column in the control file?
    i.e.
    I have a table with 10 columns and the flat file contains 11 columns.
    How do I skip the last column?
    And:
    I added an extra column to the table and changed the control file too,
    but I am getting an error:
    SQL*Loader-350: Syntax error at line 115.
    Illegal combination of non-alphanumeric characters
    Thanks in advance.

    Tom Kyte has an elaborate solution on his page:
    http://osi.oracle.com/~tkyte/SkipCols/index.html
    Or post the table description and the control file, so we can have a look at it.
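
    The usual trick for skipping a trailing field is the FILLER keyword (a minimal sketch; table and column names are illustrative, with fewer columns than the actual 10):

    LOAD DATA
    INFILE 'data.txt'
    APPEND
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    ( col1,
      col2,
      col3,
      extra_col FILLER CHAR  -- last field in the file, read but never loaded
    )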

  • SQL Loader loads duplicate records even when there is PK defined

    Hi,
    I have created a table with a PK on one of the columns and loaded it using SQL*Loader. The flat file has duplicate records. It loaded all the records without any error, but now the index has become unusable.
    The requirement is to fail the process if there are any duplicates. Please help me understand why this is happening.
    Below is the ctl file
    OPTIONS(DIRECT=TRUE, ERRORS=0)
    UNRECOVERABLE
    load data
    infile 'test.txt'
    into table abcdedfg
    replace
    fields terminated by ',' optionally enclosed by '"'
    ( col1,
      col2
    )
    I defined the PK on col1.

    Check out..
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_modes.htm#sthref1457
    It states:
    During a direct path load, some integrity constraints are automatically disabled. Others are not. For a description of the constraints, see the information about maintaining data integrity in the Oracle Database Application Developer's Guide - Fundamentals.
    Enabled Constraints
    The constraints that remain in force are:
    NOT NULL
    UNIQUE
    PRIMARY KEY (unique constraints on not-null columns)
    Since the OP has the primary key in place before starting the DIRECT path load, this looks contradictory to what the documentation says, or is it a bug? (In practice, a direct path load verifies the unique key only when the index is rebuilt at the end of the load: the duplicate rows get loaded anyway and the index is left UNUSABLE, which matches what the OP sees.)
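
    A small sketch of how this surfaces after the load (the index name is illustrative):

    SQL> SELECT index_name, status FROM user_indexes WHERE table_name = 'ABCDEDFG';
    SQL> ALTER INDEX my_pk_idx REBUILD;
    -- the rebuild fails with ORA-01452 (duplicate keys found) while duplicates remain,
    -- so a failed rebuild right after the load is one way to make a batch process fail on duplicates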

  • Need faster data loading (using sql-loader)

    I am trying to load approx. 230 million records (around 60 bytes per record) from a flat file into a single table. I have tried SQL*Loader (conventional load) and I'm seeing performance degrade as the file is being processed. I am avoiding direct path SQL*Loader because I need to maintain uniqueness using my primary key index during the load, so the degradation of the load performance doesn't shock me. The source data file contains duplicate records and may contain records that are duplicates of those already in the table (I am appending during the SQL*Loader run).
    my other option is to unload the entire table to a flat file, concatenate the new file onto it, run it through a unique sort, and then direct-path load it.
    has anyone had a similar experience? any cool solutions available that are quick?
    thanks,
    jeff

    It would be faster to suck the data into an Oracle table, call it a temporary table, and then make a final move into the final table.
    This way you could direct load into an Oracle table, then you could
    INSERT /*+ APPEND */ INTO Final_Table
        SELECT DISTINCT *
        FROM   Temp_Table
        ORDER BY ID;
    This would do a 'direct load' type move from your temp table to the final table, automatically merging the duplicate records.
    So
    1) Direct Load from SQL*Loader into temp table.
    2) Place index (non-unique) on temp table column ID.
    3) Direct load INSERT into the final table.
    Step 2 may make this process faster or slower, only testing will tell.
    Good Luck,
    Eric Kamradt
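
    One caveat: SELECT DISTINCT only removes duplicates within the new batch, not against rows already in the final table (which the original post says can happen). A hedged sketch for that case, assuming ID is the unique key:

    INSERT /*+ APPEND */ INTO final_table
    SELECT DISTINCT t.*
    FROM   temp_table t
    WHERE  NOT EXISTS (SELECT 1 FROM final_table f WHERE f.id = t.id);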

  • Sql loader issue char to date

    Hi
    I have a text file like this:
    logs~-~189.138.221.234~[19/Nov/2007:18:39:53 +0100]~mujer.orange.es~/mujer.woo/home/home/index.html~mujer.woo~home~home~index.html~Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; FunWebProducts; InfoPath.2)~bid=1195493283-1925481236; __em_p=1195493283-1925481236%26bid; GUID=0002D029C7BD07412A7C28F861626364~-~?ord=4939922038660
    I'm trying to load it via SQL*Loader into a table like this:
    CREATE TABLE SPY_LOGS_FLOGS
    (
    TYPE VARCHAR2(10 BYTE),
    PROXY VARCHAR2(30 BYTE),
    IP VARCHAR2(30 BYTE),
    DATETIME DATE,
    REFERER VARCHAR2(100 BYTE),
    SPY VARCHAR2(100 BYTE),
    SPY0 VARCHAR2(100 BYTE),
    SPY1 VARCHAR2(100 BYTE),
    SPY2 VARCHAR2(100 BYTE),
    SPY3 VARCHAR2(100 BYTE),
    BROWSER VARCHAR2(300 BYTE),
    COOKIE VARCHAR2(100 BYTE),
    UNKNOWN VARCHAR2(100 BYTE),
    QS VARCHAR2(100 BYTE)
    );
    I'm using a control file similar to this:
    LOAD DATA
    INFILE '../files/spy_logs_flogs'
    APPEND
    INTO TABLE spy_logs_flogs
    FIELDS TERMINATED BY '~'
    TRAILING NULLCOLS
    (
    TYPE char,
    PROXY char,
    IP char,
    DATETIME date 'to_date(:DATETIME,'"["yyyymmdd hh24:mi:ss"] +100"')' ,
    REFERER char,
    SPY char,
    SPY0 char,
    SPY1 char,
    SPY2 char,
    SPY3 char,
    BROWSER char,
    COOKIE char,
    UNKNOWN char,
    QS char
    )
    I'm trying to transform the DATETIME field from [19/Nov/2007:18:39:53 +0100] to the date 19/11/2007 18:39:53, but I have tested various things in the control file and can't find a way to do it.
    Any help will be appreciated.

    Play with
    select substr('[19/Nov/2007:18:39:53 +0100]',2,20) the_date
      from dual
    and if you get 19/Nov/2007:18:39:53, proceed to
    select to_date('19/Nov/2007:18:39:53','DD/MON/YYYY:HH24:MI:SS')
      from dual
    select to_date('19/Nov/2007:18:39:53','DD/Mon/YYYY:HH24:MI:SS')
      from dual
    select to_date('19/Nov/2007 18:39:53','DD/MON/YYYY HH24:MI:SS')
      from dual
    select to_date('19/Nov/2007 18:39:53','DD/Mon/YYYY HH24:MI:SS')
      from dual
    Perhaps one of those will work (if it's the one without the colon, then you must replace it with a space using substr and || before using to_date).
    Regards
    Etbin
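
    Folding that back into the control file, the DATETIME field clause could look like this (a sketch, untested; it assumes the bracketed timestamp format shown above):

    DATETIME "to_date(substr(:DATETIME, 2, 20), 'DD/Mon/YYYY:HH24:MI:SS')"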
