SQL*LOADER VARRAY SQL

Hi,
I have the following case with a ctl-file:
LOAD DATA
APPEND
INTO TABLE IFC_A_T01
TRAILING NULLCOLS (
ID                    SEQUENCE,
SKZ POSITION(1:1) TERMINATED BY '$' "RTRIM(LTRIM(:SKZ,' '),' ')",
-- sub object
FKZ VARRAY TERMINATED BY '$' (
FKZ COLUMN OBJECT (
GS POSITION(1:5) "RTRIM(FKZ.FKZ.GS,' ')",
FZ POSITION(6:10),
FN POSITION(11:14),
FF POSITION(15:17)
)
)
)
SQL*Loader rejects it with:
SQL*Loader-350: Syntax error at line 11.
Expecting valid column specification, "," or ")", found "LTRIM(:"FKZ.FKZ.GS",' ')".
What is the problem with line 11?
--> GS POSITION(1:5) "RTRIM(FKZ.FKZ.GS,' ')",
I don't insist on exactly that syntax, but I really think it should work, because GS POSITION(1:5) NULLIF FKZ.FKZ.GS=BLANKS works fine...
thanks in advance.

Hi,
at first I wasn't 100% sure that's possible to do.
Sorry, it is possible (with some restrictions):
http://download.oracle.com/docs/cd/B12037_01/server.101/b10825/ldr_loading.htm#i1007180
However, another alternative would be to create an external table pointing at your file and then do an insert as select.
Here is an example (this can be done with multiple levels):
CREATE OR REPLACE TYPE xtest_Type1 AS OBJECT(
   AGE   NUMBER(3),
   NAME  VARCHAR2(14)
);
CREATE OR REPLACE TYPE xtest_Type2 AS VARRAY(10) OF xtest_Type1;
CREATE TABLE xtest(col1 NUMBER, col2 xtest_Type2);
insert into xtest
with testdata as(
select 1 col1,10 age1,'name1' name1 from dual union all
select 2 col1,20 age1,'name2' name1 from dual union all
select 1 col1,30 age1,'name3' name1 from dual union all
select 2 col1,10 age1,'name1' name1 from dual union all
select 2 col1,40 age1,'name4' name1 from dual)
SELECT   col1,
         CAST(COLLECT(xtest_Type1(age1, name1)) AS xtest_Type2) col2
FROM     testdata
GROUP BY col1;
Remove the with clause and replace "testdata" with your external table name.
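For completeness, a minimal external-table sketch to go with it (the directory object, file name and field layout are assumptions; adjust them to your actual file):
CREATE OR REPLACE DIRECTORY ext_dir AS '/tmp';
CREATE TABLE xtest_ext (
   col1  NUMBER,
   age1  NUMBER(3),
   name1 VARCHAR2(14)
)
ORGANIZATION EXTERNAL (
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY ext_dir
   ACCESS PARAMETERS (
      RECORDS DELIMITED BY NEWLINE
      FIELDS TERMINATED BY '|'
      MISSING FIELD VALUES ARE NULL
   )
   LOCATION ('xtest.dat')
)
REJECT LIMIT UNLIMITED;
The insert above then becomes:
INSERT INTO xtest
SELECT col1, CAST(COLLECT(xtest_Type1(age1, name1)) AS xtest_Type2) col2
FROM   xtest_ext
GROUP BY col1;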
Edited by: user11268895 on Aug 30, 2010 3:07 PM

Similar Messages

  • Loading VARRAY's with SQL loader, direct path in 9i?

    Isn't it possible to load VARRAYs with SQL*Loader and direct path in an Oracle 9i database?
    /Magnus Hornstrom
    mailto:[email protected]

    Daniel,
    I appreciate your response a lot.
    Now a follow-on. You say that SQL*Loader is the fastest way to get data into Oracle Spatial even though it uses the conventional path. How does SQL*Loader do this? Doesn't it use OCI?
    I just can't help but think that converting my data to SQL*Loader format, and then having it parse it and send it to the database in some form, must be slower than me just sending whatever SQL*Loader sends to the DB myself using OCI. Am I missing something?
    The SQL*Loader documentation seems to suggest that SQL*Loader just turns the input into a lot of INSERT statements. If so, can't I just do this?
    My performance with OCI and INSERT statements isn't very good, but I believe I am doing one transaction per insert. Might I be as well off concentrating on fine-tuning my OCI-based code using INSERTs?
    I will actually do some time tests myself, but I would appreciate your opinion.
    Once again thanks for the great info you have provided.
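    For what it's worth, the speed difference usually comes down to array binding and commit frequency: a conventional-path load binds many rows per INSERT (sized by the ROWS and BINDSIZE parameters) and commits once per bind array, whereas hand-written OCI doing one transaction per insert pays a round trip and a commit for every row. A hedged example of the knobs involved (credentials and control file name are placeholders):
    sqlldr userid=scott/tiger control=spatial.ctl rows=5000 bindsize=20000000 readsize=20000000
    Batching and array-binding the inserts in your own OCI code, and committing far less often, usually closes most of the gap.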

  • Using Sqlldr to load VARRAYs!!!!!!!

    HI,
    I've created a collection type within the database, the description of which reads as follows:
    SQL> desc LISTVARCHAR
    LISTVARCHAR VARRAY(200) OF VARCHAR2(4000)
    But when I try to insert a value of more than 255 characters using sqlldr, it seems to fail giving the following message, but works fine from within sqlplus!
    Record 1: Rejected - Error on table TESTER, column COL1.
    Field in data file exceeds maximum length
    ****** Sample Data:
    |ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUV|
    Shouldn't this Column take as many as 4000 characters for each element within that VARRAY? Please let me know if I'm missing anything.
    Thanks a lot for your time
    Chandra M.

    You will be able to find good information about it in this document:
    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96652.pdf
    Joel Pérez
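    For reference, the usual cause of "Field in data file exceeds maximum length" is that SQL*Loader assumes a maximum of 255 bytes for a character field unless you give it an explicit length, no matter how large the target column or VARRAY element is. Declaring the length in the control file lifts that limit; an illustrative sketch with assumed table/field names and delimiter (the same CHAR(4000) goes on the element field inside a VARRAY clause):
    LOAD DATA
    INFILE 'list.dat'
    APPEND
    INTO TABLE tester
    FIELDS TERMINATED BY '|'
    (
      col1 CHAR(4000)   -- without the explicit length SQL*Loader assumes CHAR(255) and rejects longer values
    )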

  • SQL*Loader halt loading large varray object

    I was using SQL*Loader on Oracle 8i NT, trying to load some records with
    varrays (some records have lots of array elements, up to several thousand).
    However, after several hundred records were loaded, SQL*Loader became very slow,
    and virtually stopped at some point. I watched the system resources taken by
    SQL*Loader; it simply drove my NT box out of physical memory (it occupied
    hundreds of megabytes of physical mem). I set virtual memory to a very large value, and
    that didn't help either. My whole data file is only 60 MB, although several lines
    have 250K chars in a single line/record, but such a record only takes SQL*Loader
    one minute to load if I load it individually.
    However, if I load the records 100 by 100 in "append" mode (loading 100,
    then loading the next 100 while skipping the previously loaded records), it works fine:
    the loader only occupies 60 MB of physical mem, and releases the mem when I
    start the next 100 manually. This is really bizarre, because SQL*Loader seems
    not to know how to release the memory if I choose to load the whole data file
    automatically. I tried to manipulate the ROWS and BINDSIZE options; that doesn't
    help much.
    Does anyone have any idea about this strange thing? Is there any other way to
    load data into Oracle tables? I can't believe SQL*Loader will take several
    days to load only a 60 MB external text file.
    Thanks!
    John

    There is no 'setDescription' method available with the ordimage type. You can use putMetadata if that works for you.
    Otherwise, you would have to build a custom data-type based on the ordimage type in order to store your 'description'.
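    Returning to the memory question above: the 100-records-at-a-time workaround that did release memory can be scripted with SQL*Loader's SKIP and LOAD command-line options instead of being run by hand. A rough sketch (shown as a Unix-style shell loop; file names, credentials, total count and batch size are all assumptions - the same idea works in a Windows batch file):
    i=0
    while [ $i -lt 60000 ]; do
      sqlldr userid=scott/tiger control=varray_load.ctl skip=$i load=100 silent=feedback
      i=$((i + 100))
    done
    Each batch runs in a fresh sqlldr process, so whatever memory the VARRAY buffering holds is returned to the OS between batches; APPEND in the control file keeps the previously loaded records in place.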

  • SQL*Modeler - creation of VARRAY or Collection Type of scalar type errors

    In SQL*Modeler 2.0.0 Build 584, I can create either VARRAYs or collections. These work fine for user-defined structured types, but I encounter huge problems when I use a simple scalar type of NUMBER or VARCHAR2.
    For instance, I create a new collection type, give it a name, specify it's a collection (or VARRAY, same problem), then click Datatype. In the Select Data Type box I select Logical Type. A new window opens, and I select VARCHAR from the drop-down list.
    I enter the size 15, and everything appears fine. I click OK, and the Select Data Type screen now shows a logical type of VARCHAR(15).
    So far I'm happy. I generate the DDL, everything is fine, and the DDL contains my collection of VARCHAR2(15).
    Now I save the model, close it and re-open the same model. My collection is now of VARCHAR, so the next time I generate I will get an error, because the syntax is wrong since it has no length. The same problem happens when selecting a NUMBER: it loses the precision and scale, but at least that still works, just with a maximum numeric value.
    OK, so let's try creating distinct types. Why we can't access domains when specifying types from here remains a mystery to me.
    So I create a distinct type VARCHAR2_15 which is of logical type VARCHAR and give it a size. Similarly, I create another distinct type NUMERIC_22_0 with precision 22 and scale 0. This seems to get around the problem of losing the data, but the generated DDL shows the datatype as VARCHAR (not VARCHAR2) and NUMERIC(22), not NUMBER(22). Now I know that VARCHAR currently maps to VARCHAR2 but isn't guaranteed to in the future (even though it's been like that since V6), and NUMERIC is just an alias for NUMBER, but it's going to confuse a lot of Java people and it's totally inconsistent and just plain wrong.
    Any suggestions or workarounds will be gratefully received.
    Ian Bainbridge

    Hi Ian,
    I see a bug in the save/load of collection types, and as a result no size or precision and scale information is kept. It's fixed in the new release.
    However, I cannot reproduce the problem with distinct types - I have them generated as varchar2 and number (this is for Oracle).
    You can check:
    - the database you use in DDL generation - I got varchar and numeric for MS SQL Server;
    - the mapping of logical types VARCHAR and NUMERIC to native types in "Types Administration".
    Philip
    PS - I was able to reproduce it - I had looked at the wrong place - DDL generation for collection types is broken; it's OK for columns. I logged a bug for that.
    Edited by: Philip Stoyanov on Jun 28, 2010 8:55 PM

  • SQL*Loader : ORA-19007 with missing Schema Location Hint

    Hi,
    These days I'm stuck with a 10.1.0.4 database (OS Linux Red Hat), and I'm trying to find a workaround for the following situation :
    XML schema (sfgtest.xsd)
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:xdb="http://xmlns.oracle.com/xdb">
      <xs:element name="Code" xdb:SQLName="Code" xdb:SQLType="VARCHAR2">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:maxLength value="16"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Val" xdb:SQLType="NUMBER" xdb:SQLName="Val">
        <xs:simpleType>
          <xs:restriction base="xs:decimal">
            <xs:totalDigits value="12"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Chq">
        <xs:complexType xdb:SQLType="SFG_CHQ_TYPE">
          <xs:sequence>
            <xs:element ref="Code"/>
            <xs:element ref="Val"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="NbRem" xdb:SQLType="NUMBER" xdb:SQLName="NbRem">
        <xs:simpleType>
          <xs:restriction base="xs:integer">
            <xs:totalDigits value="5"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Rmbt" xdb:defaultTable="TEST_XML_SFG">
        <xs:complexType xdb:SQLType="SFG_RMBT_TYPE">
          <xs:sequence>
            <xs:element ref="NbRem"/>
            <xs:element ref="Rem" maxOccurs="unbounded" xdb:SQLCollType="SFG_REM_COLL"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="CodeAff" xdb:SQLName="CodeAff" xdb:SQLType="VARCHAR2">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:maxLength value="12"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
      <xs:element name="Rem" xdb:SQLName="Rem">
        <xs:complexType xdb:SQLType="SFG_REM_TYPE">
          <xs:sequence>
            <xs:element ref="CodeAff"/>
            <xs:element ref="Chq" maxOccurs="unbounded" xdb:SQLCollType="SFG_CHQ_COLL"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    Sample document (sfgtest.xml)
    <?xml version="1.0" encoding="iso-8859-1"?>
    <Rmbt>
    <NbRem>3</NbRem>
    <Rem>
      <CodeAff>AFF001</CodeAff>
      <Chq><Code>X01001</Code><Val>10.00</Val></Chq>
      <Chq><Code>X01002</Code><Val>10.00</Val></Chq>
      <Chq><Code>X01003</Code><Val>10.00</Val></Chq>
    </Rem>
    <Rem>
      <CodeAff>AFF002</CodeAff>
      <Chq><Code>X02001</Code><Val>10.00</Val></Chq>
      <Chq><Code>X02002</Code><Val>10.00</Val></Chq>
      <Chq><Code>X02003</Code><Val>10.00</Val></Chq>
    </Rem>
    <Rem>
      <CodeAff>AFF003</CodeAff>
      <Chq><Code>X03001</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03002</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03003</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03004</Code><Val>10.00</Val></Chq>
      <Chq><Code>X03005</Code><Val>10.00</Val></Chq>
    </Rem>
    </Rmbt>
    Schema registration
    begin
      dbms_xmlschema.registerSchema(
        schemaURL => 'sfgtest.xsd',
        schemaDoc => xmltype(bfilename('DUMP_DIR','sfgtest.xsd'),nls_charset_id('AL32UTF8')),
        local     => true,
        genTypes  => true,
        genTables => false
      );
    end;
    /
    Table creation
    create table test_xml_sfg of xmltype
    xmltype store as object relational
    xmlschema "sfgtest.xsd"
    element "Rmbt"
    varray xmldata."Rem" store as table test_xml_sfg_rem_tab
    ( primary key (NESTED_TABLE_ID, ARRAY_INDEX) ) organization index overflow
    varray "Chq" store as table test_xml_sfg_chq_tab
      ( primary key (NESTED_TABLE_ID, ARRAY_INDEX) ) organization index overflow
    );
    I originally wanted to use SQL*Loader to test performance, as I may have to load multiple files at the same time.
    It works great on 10gR2 and 11gR2 with XML files up to 100 MB loaded in a matter of minutes.
    However, a little issue with 10gR1 :
    SQLLDR control file
    LOAD DATA
    INFILE 'filelist.txt'
    APPEND
    INTO TABLE test_xml_sfg
    XMLTYPE(XMLDATA) (
    filename filler char(260),
    XMLDATA LOBFILE(filename) TERMINATED BY EOF
    )
    with filelist.txt :
    sfgtest.xml
    Execution throws "ORA-19007: Schema - does not match expected sfgtest.xsd".
    As per the documentation, it's expected behaviour in 10.1 :
    http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10790/xdb03usg.htm#BABECCBG
    The XML document must include the appropriate attributes from the XMLSchema-instance namespace or the XML document must be explicitly associated with the XML schema using the XMLType constructor or the createSchemaBasedXML() method.
    Also as expected, the file is loaded OK after adding the following in the root element :
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="sfgtest.xsd"
    But, as I don't have any latitude on the files received (i.e. I can't add the location hint), is there some workaround, when using SQL*Loader, to treat the XML file as an instance document?
    Thanks for any solution/idea.

    Not sure if it will work, but off the top of my head I would make an attempt to create a view (for update) based on the XMLType table and do the insert via the view, matching the incoming XML doc to the schema via
    xmltype(xml_message).createSchemaBasedXML(xml_schema)
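    A rough sketch of that idea (view and trigger names are assumptions, the schema URL is taken from the registration above, and it is untested):
    CREATE OR REPLACE VIEW test_xml_sfg_v AS
      SELECT OBJECT_VALUE AS doc FROM test_xml_sfg;
    CREATE OR REPLACE TRIGGER test_xml_sfg_v_ioi
      INSTEAD OF INSERT ON test_xml_sfg_v
      FOR EACH ROW
    BEGIN
      -- associate the incoming non-schema-based document with the registered schema
      INSERT INTO test_xml_sfg
      VALUES (:NEW.doc.createSchemaBasedXML('sfgtest.xsd'));
    END;
    /
    A conventional-path SQL*Loader run could then target TEST_XML_SFG_V instead of the table, or the same createSchemaBasedXML() call could be applied while copying rows from a plain XMLType staging table.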

  • SQL*Loader-971: parallel load option not allowed when loading lob columns

    Hi,
    I am trying to load a table, which has a VARRAY column, using DIRECT=TRUE and PARALLEL=TRUE through
    SQL*Loader 10.2.0.4.0
    OS: Sun Solaris 10 SPARC 64-bit,
    Database: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0
    The following error was received:
    SQL*Loader-971: parallel load option not allowed when loading lob columns
    Please help me to resolve this.
    Thanks and regards
    Anji

    user8836881 wrote:
    (question quoted above)
    http://tinyurl.com/yhxdhnt
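    For the record, SQL*Loader-971 indicates that the PARALLEL option cannot be combined with a direct-path load of LOB (here, VARRAY) columns. The usual workarounds are a single direct-path load without PARALLEL=TRUE, or splitting the input file and running several conventional-path sessions concurrently. A sketch only, with placeholder file names and credentials:
    # single direct-path session, no PARALLEL
    sqlldr userid=scott/tiger control=load_varray.ctl direct=true
    # or: split the data and run conventional-path loads concurrently
    sqlldr userid=scott/tiger control=load_varray.ctl data=part1.dat &
    sqlldr userid=scott/tiger control=load_varray.ctl data=part2.dat &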

  • Problem with field-length in sql-loader

    Hello,
    (sorry I see it's the wrong forum -> SQL-Developer, I searched for SQL-Loader, is there a possibility to change the forum ?)
    I can't find an answer for my question on Google, so I hope there is someone in this forum who can help me.
    I have a dat-file that contains 12 500 000 records and want to load it via SQL*Loader. The first field contains the ID, with numbers from 1 to 12 500 000 stored in it.
    When I run SQL*Loader, the ID runs up to 9 999 999 and then, for the last 2 500 001 records, it starts at 1 again.
    I noticed a few things:
    1. Numbers < 10 000 000 don't cause problems.
    2. Numbers >= 10 000 000 cause problems: the first digit (in this example "1") is cut, so the number "10 000 001" is stored as "1". This leads to duplicate entries (IDs 1 to 2 500 000).
    3. The same field definition is used for the third field of the record, and there is no problem there: I can store any number.
    4. I tried to store a number > 100 000 000: the first digit was cut too, but ONLY the first digit.
    5. I'm able to store any number manually in the database.
    So, I have a problem with the first field. If the number is greater than or equal to 10 000 000, the first digit is cut. It doesn't make any difference whether the number is 10 000 000 or 999 999 999; just the first digit, in the first field, is cut.
    Any idea?
    Here is some info:
    Database :
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    SQL-PLUS :
    SQL*Plus: Release 10.2.0.4.0 - Production
    Script sqlldr :
    sqlplus ${schema}/$2 <<EOF >>LOAD.LOG
    set timing on
    set echo on
    set heading off
    set heading on
    !sqlldr userid=${schema}/$2 control=surface_geometry.ctl log=surface_geometry.log
    exit
    EOF
    ctl-File :
    LOAD DATA
    INFILE imp_surface_geometry_test2
    TRUNCATE
    CONTINUEIF NEXT(1:1) = '#'
    INTO TABLE SURFACE_GEOMETRY
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS (
    ID INTEGER EXTERNAL ,
    GMLID,
    GMLID_CODESPACE,
    PARENT_ID NULLIF PARENT_ID = BLANKS,
    ROOT_ID NULLIF ROOT_ID = BLANKS,
    IS_SOLID,
    IS_COMPOSITE,
    IS_TRIANGULATED,
    IS_XLINK,
    IS_REVERSE,
    GEB_ID,
    GEOMETRY COLUMN OBJECT
    (
    SDO_GTYPE INTEGER EXTERNAL,
    SDO_SRID CONSTANT 31468,
    SDO_ELEM_INFO VARRAY TERMINATED BY '|/'
    (X FLOAT EXTERNAL),
    SDO_ORDINATES VARRAY TERMINATED BY '|/'
    (X FLOAT EXTERNAL)
    )
    )
    Table-Definition (sql-File) :
    CREATE TABLE SURFACE_GEOMETRY (
    ID NUMBER,
    GMLID VARCHAR2(256),
    GMLID_CODESPACE VARCHAR2(1000),
    PARENT_ID NUMBER,
    ROOT_ID NUMBER,
    IS_SOLID NUMBER(1,0),
    IS_COMPOSITE NUMBER(1,0),
    IS_TRIANGULATED NUMBER(1,0),
    IS_XLINK NUMBER(1,0),
    IS_REVERSE NUMBER(1,0),
    GEB_ID CHAR(7),
    GEOMETRY MDSYS.SDO_GEOMETRY,
    CONSTRAINT c_unique_id UNIQUE (ID))
    storage (initial 1M next 1M maxextents 1024) ;
    Some Entries in the dat-File :
    12556067| |XXX|12556066|12556066|0|0|0|0|0| |
    #3003|1|1003|1|/
    #4479400.000000|5333360.000000| 526.870000|4479380.000000|5333360.000000| 526.720000|4479400.000000|5333340.000000| 526.980000|4479400.000000|5333360.000000| 526.870000|/
    12556068| |XXX| |12556068|0|0|1|0|0| |
    #||/
    #|/
    12556069| |XXX|12556068|12556068|0|0|0|0|0| |
    #3003|1|1003|1|/
    #4479380.000000|5333380.000000| 526.600000|4479380.000000|5333360.000000| 526.720000|4479400.000000|5333360.000000| 526.870000|4479380.000000|5333380.000000| 526.600000|/
    log-File : (100 records for the test)
    SQL*Loader: Release 10.2.0.4.0 - Production on Fri May 28 15:16:43 2010
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    Control File: surface_geometry.ctl
    Data File: imp_surface_geometry_test.dat
    Bad File: imp_surface_geometry_test.bad
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array: 64 rows, maximum of 256000 bytes
    Continuation: 1:1 = 0X23(character '#'), in next physical record
    Path used: Conventional
    Table SURFACE_GEOMETRY, loaded from every logical record.
    Insert option in effect for this table: TRUNCATE
    TRAILING NULLCOLS option in effect
    Column Name Position Len Term Encl Datatype
    ID FIRST * | CHARACTER
    GMLID NEXT * | CHARACTER
    GMLID_CODESPACE NEXT * | CHARACTER
    PARENT_ID NEXT * | CHARACTER
    NULL if PARENT_ID = BLANKS
    ROOT_ID NEXT * | CHARACTER
    NULL if ROOT_ID = BLANKS
    IS_SOLID NEXT * | CHARACTER
    IS_COMPOSITE NEXT * | CHARACTER
    IS_TRIANGULATED NEXT * | CHARACTER
    IS_XLINK NEXT * | CHARACTER
    IS_REVERSE NEXT * | CHARACTER
    GEB_ID NEXT * | CHARACTER
    GEOMETRY DERIVED * COLUMN OBJECT
    *** Fields in GEOMETRY
    SDO_GTYPE NEXT * | CHARACTER
    SDO_SRID CONSTANT
    Value is '31468'
    SDO_ELEM_INFO DERIVED * VARRAY
    Terminator string : '|/'
    *** Fields in GEOMETRY.SDO_ELEM_INFO
    X FIRST * | CHARACTER
    *** End of fields in GEOMETRY.SDO_ELEM_INFO
    SDO_ORDINATES DERIVED * VARRAY
    Terminator string : '|/'
    *** Fields in GEOMETRY.SDO_ORDINATES
    X FIRST * | CHARACTER
    *** End of fields in GEOMETRY.SDO_ORDINATES
    *** End of fields in GEOMETRY
    Table SURFACE_GEOMETRY:
    100 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array: 232576 bytes (64 rows)
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records read: 100
    Total logical records rejected: 0
    Total logical records discarded: 0
    Run began on Fri May 28 15:16:43 2010
    Run ended on Fri May 28 15:16:43 2010
    Elapsed time was: 00:00:00.19
    CPU time was: 00:00:00.01
    Edited by: user9338988 on 28.05.2010 06:21

    sorry, wrong forum. I opened the thread in forum "Export/Import/SQL-Loader & External Tables"

  • Loading spatial data by sql *loader

    Hi there,
    I have a load_kat_opcina.ctl file from which I should load spatial data into my 10g DB table.
    The load_data.ctl file is as shown below:
    LOAD DATA
    INFILE *
    REPLACE
    CONTINUEIF NEXT(1:1) = '#'
    INTO TABLE KAT_OPCINA
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (KO_MBR          NULLIF KO_MBR=BLANKS,
    KO_SIFRA          NULLIF KO_SIFRA=BLANKS,
    KO_NAZIV          NULLIF KO_NAZIV=BLANKS,
    KO_ID          NULLIF KO_ID=BLANKS,
    ID          NULLIF ID=BLANKS,
    is_null1 FILLER CHAR,
    POVRSINA     COLUMN OBJECT NULLIF is_null1='E'
    (     sdo_gtype INTEGER EXTERNAL,
         sdo_srid INTEGER EXTERNAL NULLIF POVRSINA.sdo_srid=BLANKS,
         SDO_POINT COLUMN OBJECT NULLIF is_null1='C'
              ( X INTEGER EXTERNAL,
              Y INTEGER EXTERNAL,
              Z INTEGER EXTERNAL NULLIF POVRSINA.SDO_POINT.Z=BLANKS),
         SDO_ELEM_INFO VARRAY terminated by ';' NULLIF is_null1='P'
         (SDO_ORDINATES INTEGER EXTERNAL),
         SDO_ORDINATES VARRAY terminated by ':' NULLIF is_null1='P'
         (SDO_ORDINATES INTEGER EXTERNAL)
         )
    )
    BEGINDATA
    0|426|MARKU[EVEC|314717|6789094|
    0|3131|VURNOVEC|16605787|6789097|
    #C|2003|||||1|1005|3|1|2|1|169|......|5589490440|5082192250:
    0|3034|\UR\EKOVEC|16225011|6789100|
    0|35|^EHI|12297784|6789190|
    #C|2003|||||1|1005|2|1|2|1|239|....|5574944600|5064714553:
    0|221|ODRANSKI OBRE@|12441649|6789193|
    0|353|TRPUCI|14071974|6789199|
    I have deleted most of the data here to save space.
    I call SQL*Loader from the WinXP command prompt as follows:
    SQLLDR CONTROL=C:\temp\load_kat_opcina.ctl, USERID=username/pswrd@sid, LOG=logfile.log,BAD==baz.bad, DISCARD=DISCARD=toss.dsc
    After executing the command, the table 'kat_opcina' is not filled with data from this .ctl file.
    The following is the content of the log file:
    SQL*Loader: Release 10.2.0.1.0 - Production on Wed May 31 14:20:28 2006
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Control File: C:\TEMP\load_kat_opcina.ctl
    Data File: C:\TEMP\load_kat_opcina.ctl
    Bad File: C:\TEMP\baz.bad
    Discard File: C:\TEMP\toss.dsc
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array: 64 rows, maximum of 256000 bytes
    Continuation: 1:1 = 0X23(character '#'), in next physical record
    Path used: Conventional
    Table KAT_OPCINA, loaded from every logical record.
    Insert option in effect for this table: REPLACE
    TRAILING NULLCOLS option in effect
    Column Name Position Len Term Encl Datatype
    KO_MBR FIRST * | O(") CHARACTER
    NULL if KO_MBR = BLANKS
    KO_SIFRA NEXT * | O(") CHARACTER
    NULL if KO_SIFRA = BLANKS
    KO_NAZIV NEXT * | O(") CHARACTER
    NULL if KO_NAZIV = BLANKS
    KO_ID NEXT * | O(") CHARACTER
    NULL if KO_ID = BLANKS
    ID NEXT * | O(") CHARACTER
    NULL if ID = BLANKS
    IS_NULL1 NEXT * | O(") CHARACTER
    (FILLER FIELD)
    POVRSINA DERIVED * COLUMN OBJECT
    NULL if IS_NULL1 = 0X45(character 'E')
    *** Fields in POVRSINA
    SDO_GTYPE NEXT * | O(") CHARACTER
    SDO_SRID NEXT * | O(") CHARACTER
    NULL if POVRSINA.SDO_SRID = BLANKS
    SDO_POINT DERIVED * COLUMN OBJECT
    NULL if IS_NULL1 = 0X43(character 'C')
    *** Fields in POVRSINA.SDO_POINT
    X NEXT * | O(") CHARACTER
    Y NEXT * | O(") CHARACTER
    Z NEXT * | O(") CHARACTER
    NULL if POVRSINA.SDO_POINT.Z = BLANKS
    *** End of fields in POVRSINA.SDO_POINT
    SDO_ELEM_INFO DERIVED * ; VARRAY
    NULL if IS_NULL1 = 0X50(character 'P')
    *** Fields in POVRSINA.SDO_ELEM_INFO
    SDO_ORDINATES FIRST * | O(") CHARACTER
    *** End of fields in POVRSINA.SDO_ELEM_INFO
    SDO_ORDINATES DERIVED * : VARRAY
    NULL if IS_NULL1 = 0X50(character 'P')
    *** Fields in POVRSINA.SDO_ORDINATES
    SDO_ORDINATES FIRST * | O(") CHARACTER
    *** End of fields in POVRSINA.SDO_ORDINATES
    *** End of fields in POVRSINA
    Record 1: Rejected - Error on table KAT_OPCINA.
    ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
    ORA-13365: layer SRID does not match geometry SRID
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 623
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 227
    Record 2: Rejected - Error on table KAT_OPCINA.
    ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
    ORA-13365: layer SRID does not match geometry SRID
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 623
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 227
    Record 33: Rejected - Error on table KAT_OPCINA.
    ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
    ORA-13365: layer SRID does not match geometry SRID
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 623
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 227
    SQL*Loader-510: Physical record in data file (C:\TEMP\load_kat_opcina.ctl) is longer than the maximum(65536)
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    Specify SKIP=33 when continuing the load.
    Table KAT_OPCINA:
    0 Rows successfully loaded.
    33 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array: 215168 bytes(64 rows)
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records read: 33
    Total logical records rejected: 33
    Total logical records discarded: 0
    Run began on Wed May 31 14:20:28 2006
    Run ended on Wed May 31 14:20:32 2006
    Elapsed time was: 00:00:04.51
    CPU time was: 00:00:00.26
    The error messages are all the same for record numbers 3-32.
    So, I'd like to know what I am doing wrong such that the table cannot be filled with data using SQL*Loader.
    I'd also like to know if there's another way of loading data into the table from a .ctl file (maybe using some other tool).
    appreciate any help
    thanks

    Hi,
    You receive:
    ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
    ORA-13365: layer SRID does not match geometry SRID
    Have you created a spatial index for geometry column POVRSINA? I guess yes, and you have created it with a non-NULL SRID value. So, ORA-13365 means that you are trying to insert spatial data with an SRID that is not the same as the SRID defined for the spatial index.
    Check the index/layer SRID and your data SRID; they must be the same. Or, you can disable the spatial index.
    Andrejus
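    A quick way to see the mismatch, and one possible workaround (the index name is an assumption):
    -- SRID registered for the layer (what the spatial index was built against)
    SELECT srid FROM user_sdo_geom_metadata
    WHERE  table_name = 'KAT_OPCINA' AND column_name = 'POVRSINA';
    -- one option: drop the spatial index, load, then recreate it
    DROP INDEX kat_opcina_sidx;
    -- ... run SQL*Loader here ...
    CREATE INDEX kat_opcina_sidx ON kat_opcina (povrsina)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    Either make the SRID in the loaded geometries match the registered value (for example by loading sdo_srid as a constant that matches the metadata instead of leaving it NULL), or drop/disable the index for the load and recreate it afterwards.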

  • Skipping fields in SQL*LOADER data file

    I have a data file that has more fields than the target table does. How can I write a SQL*LOADER control file to skip some fields in the middle of the text line?

    If you don't want to define input fields by position, the simplest way I think is to use FILLER fields.
    Quoted from SQL*Loader doc:
    "Specifying Filler Fields
    Filler fields have names but they are not loaded into the table. However, filler fields can be used as arguments to init_specs (for example, NULLIF and DEFAULTIF) as well as to directives (for example, SID, OID, REF, BFILE). Also, filler fields can occur anyplace in the data file. They can be inside of the field list for an object or inside the definition of a VARRAY.
    See SQL*Loader DDL Behavior and Restrictions for more information on filler fields and their use.
    A sample filler field specification looks as follows:
    field_1_count FILLER char,
    Ex:
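    A made-up control file illustrating the idea (table name, field names and delimiter are assumptions); two fields that exist in the data file are simply not loaded:
    LOAD DATA
    INFILE 'input.dat'
    APPEND
    INTO TABLE target_tab
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (
      col_1     CHAR,
      unused_1  FILLER CHAR,   -- present in the file, skipped
      col_2     CHAR,
      unused_2  FILLER CHAR,   -- another unwanted field in the middle of the line
      col_3     CHAR
    )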
    Regards,
    Zoltan

  • How can I load a .xlsx File into a SQL Server Table using a Foreach Loop Container in SSIS?

    I know I've REALLY struggled with this before. I just don't understand why this has to be soooooo difficult.
    I can very easily do a straight Data Pump of a .xlsX File into a SQL Server Table using a normal Excel Connection and a normal Excel Source...simply converting Unicode to DT_STR and then using an OLE DB Destination of the SQL Server Table.
    If I want to make the SSIS Package a little more flexible by allowing multiple .xlsX spreadsheets to be pumped in by using a Foreach Loop Container, the whole SSIS Package seems to go to hell in a hand basket. I simply do the following...
    Put the Data Flow Task within the Foreach Loop Container
    Add the Variable Mapping Variable User::FilePath that I defined as a Variable and a string within the Foreach Loop Container
    I change the Excel Connection and its Expression to be ExcelFilePath ==> @[User::FilePath]
    I then try and change the Excel Source and its Data Access Mode to Table Name or view name variable and provide the Variable Name User::FilePath
    And that's when I run into trouble...
    Exception from HRESULT: 0xC02020E8
    Error at Data Flow Task [Excel Source [56]]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
    Error at Data Flow Task [Excel Source [56]]: Opening a rowset for "...(the EXACT Path and .xlsx File Name)...". Check that the object exists in the database. (And I know it's there!!!)
    I don't understand why adding a Foreach Loop Container, to try and make this as efficient as possible, has caused such an error, unless I'm overlooking something. I have even tried delaying my validations and that doesn't seem to help.
    I have looked hard on Google and even YouTube to try and find a solution for this, but for the life of me I cannot seem to find anything on pumping a .xlsX file into SQL Server using a Foreach Loop Container.
    Can ANYONE please help me out here? I'm at the end of my rope trying to get this to work. I think the last time I was in this quandary, trying to pump a .xlsX File into a SQL Server Table using a Foreach Loop Container in SSIS, I actually wrote a C# Script
    to write the contents of the .xlsX File into a .csv File and then Actually used the .csv File to pump the data into a SQL Server Table.
    Thanks for your review and am hoping and praying for a reply and solution.

    Hi ITBobbyP,
    If I understand correctly, you want to load data from multiple sheets in an .xlsx file into a SQL Server table.
    If in this scenario, please refer to the following tips:
    The Foreach Loop container should be configured as shown below:
    Enumerator: Foreach ADO.NET Schema Rowset Enumerator
    Connection String: The OLE DB Connection String for the excel file.
    Schema: Tables.
    In the Variable Mapping, map the variable to Sheet_Name, and change the Index from 0 to 2.
    The connection string for Excel Connection Manager is the original one, we needn’t make any change.
    Change Table Name or View name to the variable Sheet_Name.
    If you want to load data from multiple sheets in multiple .xlsx files into a SQL Server table, please refer to following thread:
    http://stackoverflow.com/questions/7411741/how-to-loop-through-excel-files-and-load-them-into-a-database-using-ssis-package
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • How to load a default value in to a column when using sql loader

    I'm trying to load from a flat file using SQL*Loader.
    For one column I need to use a default value.
    How do I go about this?

    Hi!
    try this code --
    LOAD DATA
       INFILE 'sample.dat'
       REPLACE
       INTO TABLE emp
       (
       empno    POSITION(01:04) INTEGER EXTERNAL NULLIF empno=BLANKS,
       ename    POSITION(06:15) CHAR,
       job      POSITION(17:25) CHAR,
       mgr      POSITION(27:30) INTEGER EXTERNAL NULLIF mgr=BLANKS,
       sal      POSITION(32:39) DECIMAL EXTERNAL NULLIF sal=BLANKS,
       comm     POSITION(41:48) DECIMAL EXTERNAL DEFAULTIF comm='100',
       deptno   POSITION(50:51) INTEGER EXTERNAL NULLIF deptno=BLANKS,
       hiredate SYSDATE
       )
    Hope this will solve your purpose.
    Regards.
    Satyaki De.
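    More directly to the original question, the two usual ways to give a column a default during the load are a CONSTANT or an SQL expression on the field; a short sketch with made-up names:
    LOAD DATA
    INFILE 'input.dat'
    APPEND
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (
      id         INTEGER EXTERNAL,
      status     CHAR "NVL(:status, 'ACTIVE')",   -- default only when the field is empty
      source     CONSTANT 'FLATFILE',             -- same literal for every row, nothing read from the file
      load_date  SYSDATE
    )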

  • How can we tell if SQL*Loader is working on a TABLE?

    We have a process that requires comparing batches with LDAP information. Instead of using an LDAP lookup tool, we get a nightly directory file, and import the two COLUMNs we want via SQL*Loader (REPLACE) into an IOT. Out of three cases, two just check the first COLUMN, and the third needs the second COLUMN as well.
    We did not think of using External TABLEs, because we cannot store files on the DB server itself.
    The question arises, what to do while the file is being imported. The file is just under 300M, so it takes a minute or so to replace all the data. We found SQL*Loader waits until a transaction is finished before starting, but a query against the TABLE only waits while it is actually importing the data. At the beginning of SQL*Loader's process, however, a query against the TABLE returns no rows.
    The solution we are trying right now is, to have the process that starts SQL*Loader flip a flag in another TABLE denoting that it is unavailable. When it is done, it flips it back, and notes the date. Then, the process that queries the information, exits if the flag is currently 'N'.
    The problem is, what if SQL*Loader starts in between the check of the flag and the query against the TABLE? How do we guarantee that it is still not being imported?
    I can think of three solutions:
    1) LOCK the ldap information TABLE before checking the flag.
    2) LOCK the record that the process starting SQL*Loader flips.
    3) Add a clause to the query against the TABLE that checks that there are records in the TABLE (AND EXISTS (SELECT * FROM ldap_information)).
    The problem with 3) is that the process has already tagged the batches (via a COLUMN). It could technically reset them afterwards, but that seems a bit backwards.

    Just out of curiosity, are you aware that Oracle supplies a DBMS_LDAP package for pulling information from LDAP sources? It would obviously be relatively easy to have a single transaction that deletes the existing data, loads the new data via DBMS_LDAP, and commits, which would get around the problem you're having with SQL*Loader truncating the table.
    You could also have SQL*Loader load the data into a staging table and then have a second process either MERGE the changes from the staging table into the real table (again in a transactionally consistent manner) or just delete and insert the data.
    Justin
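    A minimal sketch of the staging-table variant Justin describes (table and column names are assumptions):
    -- SQL*Loader REPLACEs the contents of LDAP_STAGE; readers keep querying LDAP_INFORMATION.
    -- One transaction then swaps the data, so the table is never seen empty:
    BEGIN
       DELETE FROM ldap_information;
       INSERT INTO ldap_information (dn, mail)
          SELECT dn, mail FROM ldap_stage;
       COMMIT;
    END;
    /
    Until the COMMIT, other sessions still see the previous night's rows, which removes the need for the availability flag.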

  • Loading two tables at same time with SQL Loader

    I have two tables I would like to populate from a file C:\my_data_file.txt.
    I am loading many of the columns into both tables, but there are a handful of columns I do not want. The first column I do not want for either table. My problem is how I can direct SQL*Loader to go back to the first column and skip over it. I had tried using POSITION(1) and FILLER for the first column while loading the second table, but I got the following error message:
    SQL*Loader-350: Syntax error at line 65
    Expecting "," or ")" found keyword Filler
    col_a POSITION(1) FILLER INTEGER EXTERNAL
    My control file looks like the following:
    LOAD DATA
    INFILE 'C:\my_data_file.txt'
    BADFILE 'C:\my_data_file.txt'
    DISCARDFILE 'C:\my_data_file.txt'
    TRUNCATE INTO TABLE table_one
    WHEN (specific conditions)
    FIELDS TERMINATED BY ' '
    TRAILING NULLCOLS
    (
    col_a FILLER INTEGER EXTERNAL,
    col_b INTEGER EXTERNAL,
    col_g FILLER CHAR,
    col_h CHAR,
    col_date DATE "yyyy-mm-dd"
    )
    INTO TABLE table_two
    WHEN (specific conditions)
    FIELDS TERMINATED BY ' '
    TRAILING NULLCOLS
    (
    col_a POSITION(1) FILLER INTEGER EXTERNAL,
    col_b INTEGER EXTERNAL,
    col_g FILLER CHAR,
    col_h CHAR,
    col_date DATE "yyyy-mm-dd"
    )

    Try adapting this for your scenario.
    tables for the test
    create table test1 ( fld1 varchar2(20), fld2 integer, fld3 varchar2(20) );
    create table test2 ( fld1 varchar2(20), fld2 integer, fld3 varchar2(20) );
    control file
    LOAD DATA
    INFILE "test.txt"
    INTO TABLE user.test1 TRUNCATE
    WHEN RECID = '1'
    FIELDS TERMINATED BY ' '
    (
    recid filler integer external,
    fld1 char,
    fld2 integer external,
    fld3 char
    )
    INTO TABLE user.test2 TRUNCATE
    WHEN RECID <> '1'
    FIELDS TERMINATED BY ' '
    (
    recid filler position(1) integer external,
    fld1 char,
    fld2 integer external,
    fld3 char
    )
    data for loading [test.txt]
    1 AAAAA 11111 IIIII
    2 BBBBB 22222 JJJJJ
    1 CCCCC 33333 KKKKK
    2 DDDDD 44444 LLLLL
    1 EEEEE 55555 MMMMM
    2 FFFFF 66666 NNNNN
    1 GGGGG 77777 OOOOO
    2 HHHHH 88888 PPPPP
    HTH
    RK

  • How can I load my data faster?  Is there a SQL solution instead of PL/SQL?

    11.2.0.2
    Solaris 10 sparc
    I need to backfill invoices from a customer. The raw data has 3.1 million records. I have used PL/SQL to load these invoices into our system (dev); however, our issue is the amount of time it's taking to run the load - effectively around 4 hours. (The raw data has been loaded into a staging table.)
    My research keeps coming back to one concept: sql is faster than pl/sql. Where I'm stuck is the need to programmatically load the data. The invoice table has a sequence on it (primary key = invoice_id)...the invoice_header and invoice_address tables use the invoice_id as a foreign key. So my script takes advantage of knowing the primary key and uses that on the subsequent inserts to the subordinate invoice_header and invoice_address tables, respectively.
    My script is below. What I'm asking is if there are other ideas on the quickest way to load this data...what am I not considering? I have to load the data in dev, qa, then production so the sequences and such change between the environments. I've dummied down the code to protect the customer; syntax and correctness of the code posted here (on the forum) is moot...it's only posted to give the framework for what I currently have.
    Any advice would be greatly appreciated; how can I load the data faster knowing that I need to know sequence values for inserts into other tables?
    DECLARE
       v_inv_id        invoice.invoice_id%TYPE;
       v_inv_addr_id    invoice_address.invoice_address_id%TYPE;
       errString        invoice_errors.sqlerrmsg%TYPE;
       v_guid          VARCHAR2 (128);
       v_str           VARCHAR2 (256);
       v_err_loc       NUMBER;
       v_count         NUMBER := 0;
       l_start_time    NUMBER;
       TYPE rec IS RECORD (
          BILLING_TYPE             VARCHAR2 (256),
          CURRENCY                 VARCHAR2 (256),
          BILLING_DOCUMENT         VARCHAR2 (256),
          DROP_SHIP_IND            VARCHAR2 (256),
          TO_PO_NUMBER        VARCHAR2 (256),
          TO_PURCHASE_ORDER   VARCHAR2 (256),
          DUE_DATE                 DATE,
          BILL_DATE                DATE,
          TAX_AMT                  VARCHAR2 (256),
          PAYER_CUSTOMER           VARCHAR2 (256),
          TO_ACCT_NO          VARCHAR2 (256),
          BILL_TO_ACCT_NO          VARCHAR2 (256),
          NET_AMOUNT               VARCHAR2 (256),
          NET_AMOUNT_CURRENCY      VARCHAR2 (256),
          ORDER_DT             DATE,
          TO_CUSTOMER         VARCHAR2 (256),
          TO_NAME             VARCHAR2 (256),
          FRANCHISES       VARCHAR2 (4000),
          UPDT_DT                  DATE
       );
       TYPE tab IS TABLE OF rec
                      INDEX BY BINARY_INTEGER;
       pltab           tab;
       CURSOR c
       IS
          SELECT   billing_type,
                   currency,
                   billing_document,
                   drop_ship_ind,
                   to_po_number,
                   to_purchase_order,
                   due_date,
                   bill_date,
                   tax_amt,
                   payer_customer,
                   to_acct_no,
                   bill_to_acct_no,
                   net_amount,
                   net_amount_currency,
                   order_dt,
                   to_customer,
                   to_name,
                   franchises,
                   updt_dt
            FROM   BACKFILL_INVOICES;
    BEGIN
       l_start_time := DBMS_UTILITY.get_time;
       OPEN c;
       LOOP
          FETCH c
          BULK COLLECT INTO pltab
          LIMIT 1000;
          v_err_loc := 1;
          FOR i IN 1 .. pltab.COUNT
          LOOP
             BEGIN
                v_inv_id :=  SEQ_INVOICE_ID.NEXTVAL;
                v_guid := 'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff');
                v_str := str_parser (pltab (i).FRANCHISES); --function to string parse  - this could be done in advance, yes.
                v_err_loc := 2;
                v_count := v_count + 1;
                INSERT INTO    invoice nologging
                     VALUES   (v_inv_id,
                               pltab (i).BILL_DATE,
                               v_guid,
                               '111111',
                               'NONE',
                               TO_TIMESTAMP (pltab (i).BILL_DATE),
                               TO_TIMESTAMP (pltab (i).UPDT_DT),
                               'READ',
                               'PAPER',
                               pltab (i).payer_customer,
                               v_str,
                               '111111');
                v_err_loc := 3;
                INSERT INTO    invoice_header nologging
                     VALUES   (v_inv_id,
                               TRIM (LEADING 0 FROM pltab (i).billing_document), --invoice_num
                               NULL,
                               pltab (i).BILL_DATE,                 --invoice_date
                               pltab (i).TO_PO_NUMBER,
                               NULL,
                               pltab (i).net_amount,
                               NULL,
                               pltab (i).tax_amt,
                               NULL,
                               NULL,
                               pltab (i).due_date,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               TO_TIMESTAMP (SYSDATE),
                               TO_TIMESTAMP (SYSDATE),
                               PLTAB (I).NET_AMOUNT_CURRENCY,
                               (SELECT   i.bc_value
                                  FROM   invsvc_owner.billing_codes i
                                 WHERE   i.bc_name = PLTAB (I).BILLING_TYPE),
                               PLTAB (I).BILL_DATE);
                v_err_loc := 4;
                INSERT INTO    invoice_address nologging
                     VALUES   (invsvc_owner.SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH INITIAL',
                               pltab (i).BILL_DATE,
                               NULL,
                               pltab (i).to_acct_no,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 5;
                INSERT INTO    invoice_address nologging
                     VALUES   ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH',
                               pltab (i).BILL_DATE,
                               NULL,
                               pltab (i).TO_ACCT_NO,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 6;
                INSERT INTO    invoice_address nologging
                     VALUES   ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH2',
                               pltab (i).BILL_DATE,
                               NULL,
                               pltab (i).TO_CUSTOMER,
                               pltab (i).to_name,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 7;
                INSERT INTO    invoice_address nologging
                     VALUES   ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
                               v_inv_id,
                               'BLAH3',
                               pltab (i).BILL_DATE,
                               NULL,
                               'SOME PROPRIETARY DATA',
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               NULL,
                               SYSTIMESTAMP,
                               NULL);
                v_err_loc := 8;
                INSERT
                  INTO    invoice_event nologging (id,
                                                             eid,
                                                             root_eid,
                                                             invoice_number,
                                                             event_type,
                                                             event_email_address,
                                                             event_ts)
                VALUES   ( SEQ_INVOICE_EVENT_ID.NEXTVAL,
                          '111111',
                          '222222',
                          TRIM (LEADING 0 FROM pltab (i).billing_document),
                          'READ',
                          'some_user@some_company.com',
                          SYSTIMESTAMP);
                v_err_loc := 9;
                INSERT INTO   backfill_invoice_mapping
                     VALUES   (v_inv_id,
                               v_guid,
                               pltab (i).billing_document,
                               pltab (i).payer_customer,
                               pltab (i).net_amount);
                IF v_count = 10000
                THEN
                   COMMIT;              
                END IF;
             EXCEPTION
                WHEN OTHERS
                THEN
                   errString := SQLERRM;
                   INSERT INTO   backfill_invoice_errors
                        VALUES   (
                                    pltab (i).billing_document,
                                    pltab (i).payer_customer,
                                     errString || ' ' || v_err_loc);
                    COMMIT;
             END;
          END LOOP;
          v_err_loc := 10;
          INSERT INTO   backfill_invoice_timing
               VALUES   (
                           ROUND ( (DBMS_UTILITY.get_time - l_start_time) / 100,
                                  2)
                           || ' seconds.',
                           (SELECT   COUNT (1)
                              FROM   backfill_invoice_mapping),
                           (SELECT   COUNT (1)
                              FROM   backfill_invoice_errors),
                                SYSDATE);
           COMMIT;
          EXIT WHEN c%NOTFOUND;
       END LOOP;
       COMMIT;
    EXCEPTION
       WHEN OTHERS
       THEN
          errString := SQLERRM;
          INSERT INTO   backfill_invoice_errors
               VALUES   (NULL, NULL, errString || ' ' || v_err_loc);
          COMMIT;
    END;

    Hello
    You could use insert all in your case and make use of sequence.NEXTVAL and sequence.CURRVAL like so (excuse any typos - I can't test without table definitions). I've done the first 2 tables, so it's just a matter of adding the rest in...
    INSERT ALL
         INTO      invoice nologging
                    VALUES   (     SEQ_INVOICE_ID.NEXTVAL,
                                   BILL_DATE,
                                    my_guid,
                                    '111111',
                                    'NONE',
                                    CAST(BILL_DATE AS TIMESTAMP),
                                    CAST(UPDT_DT AS TIMESTAMP),
                                    'READ',
                                    'PAPER',
                                    payer_customer,
                                         parsed_franchises,
                                         '111111')
         INTO      invoice_header
              VALUES   (      SEQ_INVOICE_ID.CURRVAL,
                        TRIM (LEADING 0 FROM billing_document), --invoice_num
                        NULL,
                        BILL_DATE,                 --invoice_date
                        TO_PO_NUMBER,
                        NULL,
                        net_amount,
                        NULL,
                        tax_amt,
                        NULL,
                        NULL,
                        due_date,
                        NULL,
                        NULL,
                        NULL,
                        NULL,
                        NULL,
                        SYSTIMESTAMP,
                        SYSTIMESTAMP,
                        NET_AMOUNT_CURRENCY,
                        bc_value,
                        BILL_DATE)
         SELECT 
         src.billing_type,
              src.currency,
              src.billing_document,
              src.drop_ship_ind,
              src.to_po_number,
              src.to_purchase_order,
              src.due_date,
              src.bill_date,
              src.tax_amt,
              src.payer_customer,
              src.to_acct_no,
              src.bill_to_acct_no,
              src.net_amount,
              src.net_amount_currency,
              src.order_dt,
              src.to_customer,
              src.to_name,
              src.franchises,
              src.updt_dt,
              str_parser (src.FRANCHISES) parsed_franchises,
              'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff') my_guid,
              i.bc_value
            FROM        BACKFILL_INVOICES src,
                 invsvc_owner.billing_codes i
          WHERE   i.bc_name = src.BILLING_TYPE;
    Some things to note:
    1. Don't commit in a loop - you only add to the run time and load on the box ultimately reducing scalability and removing transactional integrity. Commit once at the end of the job.
    2. Make sure you specify the list of columns you are inserting into as well as the values or columns you are selecting. This is good practice as it protects your code from compilation issues in the event of new columns being added to tables. Also it makes it very clear what you are inserting where.
    3. If you use WHEN OTHERS THEN... to log something, make sure you either rollback or raise the exception. What you have done in your code is say - I don't care what the problem is, just commit whatever has been done. This is not good practice.
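    For point 3, a minimal pattern (sketch only):
    BEGIN
       -- ... the load / insert all goes here ...
       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;   -- undo the partial work
          RAISE;      -- re-raise so the caller sees the real error
    END;
    /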
    HTH
    David
    Edited by: Bravid on Oct 13, 2011 4:35 PM
