PL/SQL Bulk Loading

Hello,
I have a question regarding bulk loading. I have done a lot of bulk loading.
My requirement is to call a function that does some DML and returns a reference key, so that I can insert it into a fact table.
I can't call a function that does DML from a SELECT statement (it raises an error). The other way is to use an autonomous transaction, which I tried and it works, but performance is very slow.
How can I call this function inside the bulk loading process?
Help!!
xx_f is the function that uses an autonomous transaction.
See my sample code:
declare
   cursor c1 is select a, b, c from xx;
   type l_a is table of xx.a%type;
   type l_b is table of xx.b%type;
   type l_c is table of xx.c%type;
   v_a l_a;
   v_b l_b;
   v_c l_c;
begin
   open c1;
   loop
      fetch c1 bulk collect into v_a, v_b, v_c limit 1000;
      exit when v_a.count = 0;  -- exit on an empty fetch so the last partial batch is still processed
      forall i in 1..v_a.count
         insert into xxyy (a, b, c)
         values (xx_f(v_a(i)), xx_f(v_b(i)), xx_f(v_c(i)));
      commit;
   end loop;
   close c1;
end;
I just want to call the xx_f function without an autonomous transaction,
but with bulk loading. Please let me know if you need more details.
Thanks
yreddyr

Can you show the code for xx_f? Does it do DML, or just transformations on the columns?
Depending on what it does, an alternative could be something like:
DECLARE
   CURSOR c1 IS
      SELECT xx_f(a), xx_f(b), xx_f(c) FROM xx;
   TYPE l_a IS TABLE OF whatever xx_f returns;
   TYPE l_b IS TABLE OF whatever xx_f returns;
   TYPE l_c IS TABLE OF whatever xx_f returns;
   v_a l_a;
   v_b l_b;
   v_c l_c;
BEGIN
   OPEN c1;
   LOOP
      FETCH c1 BULK COLLECT INTO v_a, v_b, v_c LIMIT 1000;
      BEGIN
         FORALL i IN 1..v_a.COUNT
            INSERT INTO xxyy (a, b, c)
            VALUES (v_a(i), v_b(i), v_c(i));
      END;
      EXIT WHEN c1%NOTFOUND;
   END LOOP;
   CLOSE c1;
END;
John

Similar Messages

  • Bulk loading BLOBs using PL/SQL - is it possible?

    Hi -
    Does anyone have a good reference article or example of how I can bulk load BLOBs (videos, images, audio, office docs/pdf) into the database using PL/SQL?
    Every example I've ever seen in PL/SQL for loading BLOBs does a commit; after each file loaded ... which doesn't seem very scalable.
    Can we pass in an array of BLOBs from the application, into PL/SQL and loop through that array and then issue a commit after the loop terminates?
    Any advice or help is appreciated. Thanks
    LJ

    It is easy enough to modify the example to commit every N files. If you are loading large amounts of media, I think that you will find that the time to load the media is far greater than the time spent in SQL statements doing inserts or retrieves. Thus, I would not expect to see any significant benefit to changing the example to use PL/SQL collection types in order to do bulk row operations.
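    For illustration, a minimal sketch of that kind of loop committing every N files instead of after each one (table, column, and directory object names here are just placeholders):
    DECLARE
       CURSOR c_files IS SELECT file_id, file_name FROM doc_files_to_load;
       l_bfile        BFILE;
       l_blob         BLOB;
       l_count        PLS_INTEGER := 0;
       c_commit_every CONSTANT PLS_INTEGER := 100;
    BEGIN
       FOR r IN c_files LOOP
          INSERT INTO docs (doc_id, content)
          VALUES (r.file_id, EMPTY_BLOB())
          RETURNING content INTO l_blob;
          l_bfile := BFILENAME('DOC_DIR', r.file_name);
          DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
          DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
          DBMS_LOB.CLOSE(l_bfile);
          l_count := l_count + 1;
          IF MOD(l_count, c_commit_every) = 0 THEN
             COMMIT;  -- commit every N files instead of after each one
          END IF;
       END LOOP;
       COMMIT;  -- pick up the final partial batch
    END;
    /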
    If your goal is high performance bulk load of binary content then I would suggest that you look to use Sqlldr. A PL/SQL program loading from BFILEs is limited to loading files that are accessible from the database server file system. Sqlldr can do this but it can also load data from a remote client. Sqlldr has parameters to control batching of operations.
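    For reference, the batching knobs are ordinary SQL*Loader command-line parameters, along the lines of (file names are placeholders):
    sqlldr userid=scott/tiger control=media.ctl data=media.dat log=media.log rows=100 bindsize=20971520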
    See section 7.3 of the Oracle Multimedia DICOM Developer's Guide for the example Loading DICOM Content Using the SQL*Loader Utility. You will need to adapt this example to the other Multimedia objects (ORDImage, ORDAudio .. etc) but the basic concepts are the same.
    Once the binary content is loaded into the database, you will need to write a program to loop over the new content and initialize the Multimedia objects (extract attributes). The example in 7.3 contains a sample program that does this for the ORDDicom object.

  • Bulk Loading using remote sql statement execution

    Well, I have a different scenario. I want to bulk load tables the way we do in MySQL with the LOAD DATA LOCAL INFILE command.
    I have a file populated with data; what SQL statement would bulk load the data into a specified table using that file?
    Adnan Memon

    In Oracle, you would either use the SQL*Loader utility to load data from a flat file or you would create an external table (9i and later) that loads the flat file.
    A quick example of the external table approach
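    For instance, something like this, assuming a comma-delimited file emp.dat and an existing directory object DATA_DIR (all names are just placeholders):
    CREATE TABLE emp_ext (
       empno NUMBER,
       ename VARCHAR2(30),
       sal   NUMBER
    )
    ORGANIZATION EXTERNAL (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY data_dir
       ACCESS PARAMETERS (
          RECORDS DELIMITED BY NEWLINE
          FIELDS TERMINATED BY ','
          MISSING FIELD VALUES ARE NULL
       )
       LOCATION ('emp.dat')
    )
    REJECT LIMIT UNLIMITED;
    -- then simply:
    INSERT INTO emp SELECT * FROM emp_ext;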
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • SSRS 2005 report: Cannot bulk load Operating system error code 5(Access is denied.)

    I built a SSRS 2005 report, which calls a stored proc on SQL Server 2005. The proc contains following code:
    CREATE TABLE #promo (promo VARCHAR(1000))
    BULK INSERT #promo
    FROM '\\aseposretail\c$\nz\promo_names.txt'
    WITH (
        --FIELDTERMINATOR = '',
        ROWTERMINATOR = '\n'
    )
    SELECT * FROM #promo
    It's ok when I manually execute the proc in SSMS.
    When I try to run the report from BIDS I got following error:
    Cannot bulk load because the file "\\aseposretail\c$\nz\promo_names.txt" could not be opened. Operating system error code 5(Access is denied.).
    Note: I have googled a bit and seen many questions on this, but they are not relevant because I CAN run the code with no problem in SSMS. It's SSRS that has the issue. I know little about the security side of SSRS.

    I'm having the same type of issue.  I can bulk load the same file into the same table on the same server using the same login on one workstation, but not on another.  I get this error:
    Msg 4861, Level 16, State 1, Line 1
    Cannot bulk load because the file "\\xxx\abc.txt" could not be opened. Operating system error code 5(Access is denied.).
    I've checked SQL client versions and they are the same, and I've also set the client connection to TCP/IP only in SQL Server Configuration Manager. Still, this one workstation gets the error. Since the same login is being used on both workstations and it works on one but not the other, the issue is not a permissions issue. I can also have another user log in to the bad workstation and have the bulk load fail, but when they log in to their regular workstation it works fine. Any ideas on what the client configuration issue is? These are the version numbers for Management Studio:
    Microsoft SQL Server Management Studio 9.00.3042.00
    Microsoft Analysis Services Client Tools 2005.090.3042.00
    Microsoft Data Access Components (MDAC) 2000.085.1132.00 (xpsp.080413-0852)
    Microsoft MSXML 2.6 3.0 5.0 6.0
    Microsoft Internet Explorer 6.0.2900.5512
    Microsoft .NET Framework 2.0.50727.1433
    Operating System 5.1.2600
    Thanks,
    MWise

  • Bulk Load option doesn't work

    Hi Experts,
    I am trying to load data to HFM using the Bulk Load option but it doesn't work. When I change the option to SQL Insert, the load is successful. The logs say that the temp file is missing, but when I go to the specified location I see both the control file and the tmp file. What am I missing to get bulk load working? Here's the log entry.
    2009-08-19-18:48:29
    User ID...........     kannan
    Location..........     KTEST
    Source File.......     \\Hyuisprd\Applications\FDM\CRHDATALD1\Inbox\OMG\HFM July2009.txt
    Processing Codes:
    BLANK............. Line is blank or empty.
    ESD............... Excluded String Detected, SKIP Field value was found.
    NN................ Non-Numeric, Amount field contains non numeric characters.
    RFM............... Required Field Missing.
    TC................ Type Conversion, Amount field could be converted to a number.
    ZP................ Zero Suppress, Amount field contains a 0 value and zero suppress is ON.
    Create Output File Start: [2009-08-19-18:48:29]
    [TC] - [Amount=NN]     Batch Month File Created: 07/2009
    [TC] - [Amount=NN]     Date File Created: 8/6/2009
    [TC] - [Amount=NN]     Time File Created: 08:19:06
    [Blank] -      
    Excluded Record Count.............. 3
    Blank Record Count................. 1
    Total Records Bypassed............. 4
    Valid Records...................... 106093
    Total Records Processed............ 106097
    Begin Oracle (SQL-Loader) Process (106093): [2009-08-19-18:48:41]
    [RDMS Bulk Load Error Begin]
         Message:      (53) - File not found
         See Bulk Load File:      C:\DOCUME~1\fdmuser\LOCALS~1\Temp\tWkannan30327607466.tmp
    [RDMS Bulk Load Error End]
    Thanks
    Kannan.

    Hi Experts,
    I am facing a data import error while importing data from a .csv file into an FDM-HFM application.
    2011-08-29 16:19:56
    User ID...........     admin
    Location..........     ALBA
    Source File.......     C:\u10\epm\DEV\epm_home\EPMSystem11R1\products\FinancialDataQuality\FDMApplication\BMHCFDMHFM\Inbox\ALBA\BMHC_Alba_Dec_2011.csv
    Processing Codes:
    BLANK............. Line is blank or empty.
    ESD............... Excluded String Detected, SKIP Field value was found.
    NN................ Non-Numeric, Amount field contains non numeric characters.
    RFM............... Required Field Missing.
    TC................ Type Conversion, Amount field could be converted to a number.
    ZP................ Zero Suppress, Amount field contains a 0 value and zero suppress is ON.
    Create Output File Start: [2011-08-29 16:19:56]
    [ESD] ( ) Inter Co,Cash and bank balances,A113000,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],1
    [ESD] ( ) Inter Co,"Trade receivable, prepayments and other assets",HFM128101,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],35
    [ESD] ( ) Inter Co,Inventories ,HFM170003,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],69
    [ESD] ( ) Inter Co,Financial assets carried at fair value through P&L,HFM241001,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],103
    [Blank] -      
    Excluded Record Count..............4
    Blank Record Count.................1
    Total Records Bypassed.............5
    Valid Records......................0
    Total Records Processed............5
    Begin SQL Insert Load Process (0): [2011-08-29 16:19:56]
    Processing Complete... [2011-08-29 16:19:56]
    Please help me solve the issue.
    Regards,
    Sudhir Sinha

  • Bulk Insert Task Cannot bulk load because the file could not be opened.operating system error error code 3(The system cannot find the path specified.)

    I am getting the following error after I changed the path in the config file from
    \\vs01\d$\\Deployment\Files\temp.txt
    to
    C:\Deployment\Files\temp.txt
    [Bulk Insert Task] Error: An error occurred with the following error message: "Cannot bulk load because the file "C:\Deployment\Files\temp.txt" could not be opened. Operating system error code 3(The system cannot find the path specified.).". 

    I think I know what's going on. The Bulk Insert task works by executing a SQL command (BULK INSERT) internally on the target SQL Server to load the file. This means that the service account of the target SQL Server should have permissions on the file you are trying to load. It also means that you need to use a UNC path to specify the file (if the target server is on a different machine).
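    For example, the statement that ends up running on the target server has to reference a share rather than a local drive letter, roughly like this (share, folder, and table names are placeholders):
    BULK INSERT dbo.StagingTable
    FROM '\\fileserver\deployment\Files\temp.txt'
    WITH (ROWTERMINATOR = '\n');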
    Also from BOL (see section Usage Considerations - last bullet point)
    http://msdn.microsoft.com/en-us/library/ms141239.aspx
    * Only members of the sysadmin fixed server role can run a package that contains a Bulk Insert task.
    Make sure you take care of this as well.
    HTH
    ~Mukti

  • CBO madness after bulk loading

    This is an extension of my other recent posts, but I felt it deserved its own space.
    I have a table of telephone call records, one row for each telephone call made or received by a customer. Our production table has a 10-field PK that I want to destroy. In my development version, the PK for this table is a compound key on LOC, CUST_NO, YEAR, MONTH, and SEQ_NO. LOC is a char(3), the rest are numbers.
    After a bulk load into a new partition of this table, a query with these 5 fields in the where clause chooses a second index. That second index includes LOC, YEAR, MONTH, and two other fields not in the PK nor in the query. The production instance does the same thing, and I was certain that having the 5-field PK would be the magic bullet.
    Oracle SQL Developer's autotrace shows a "Filter Predicates" step on CUST_NO and SEQ_NO, and then the index range scan on the other 3 fields in the second index. Still noteworthy is that a query on just LOC, CUST_NO, YEAR and MONTH does use the PK.
    Here are the steps I've taken to test this:
    1. Truncate the partition in question
    2. Drop old PK constraint/index
    3. Create new PK constraint/index
    4. Gather table stats with cascade=>TRUE
    5. Bulk load data (in this case, 1.96 million rows) into empty partition
    6. autotrace select query
    7. Write to dizwell in tears
    This table also has two other partitions for the past two cycles, each with around 30 million rows.
    Yes, gathering table stats again makes things behave as expected, but that takes a fair bit of time. For the meantime we've put an index hint in the application query that was suffering the most.

    "First, the CBO doesn't actually choose a full table scan, it chooses to use a second index."
    Depending on the query, of course. If the CBO thinks a partition is empty, I would suspect that it would find it most efficient to scan the smallest index, and the second index, with fewer columns, would be expected to be smaller. If it thinks they are equally costly, I believe it will use the one that was created first, though I wouldn't want to depend on that sort of behavior.
    "I've lowered the sample percentage to 10% and set CASCADE to FALSE and it still takes 45 minutes in production. The staging table was something I was considering. Are statistics included in partition exchange? I've asked that question before but never saw an answer."
    Yes, partition-level statistics will be included. Table-level statistics will be automatically adjusted. From the SQL Reference:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#i2131250
    "All statistics of the table and partition are exchanged, including table, column, index statistics, and histograms. Oracle Database recalculates the aggregate statistics of the table receiving the new partition."
    You could also just explicitly set table-level statistics, assuming you don't need too many histograms, possibly gathering statistics for real later on.
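    For example, something along these lines either sets rough table-level statistics by hand or gathers only the freshly loaded partition (table and partition names are placeholders):
    BEGIN
       -- Option 1: set approximate table-level statistics explicitly
       DBMS_STATS.SET_TABLE_STATS(
          ownname => user,
          tabname => 'CALL_RECORDS',
          numrows => 60000000,
          numblks => 800000);
       -- Option 2: gather statistics for just the new partition
       DBMS_STATS.GATHER_TABLE_STATS(
          ownname     => user,
          tabname     => 'CALL_RECORDS',
          partname    => 'P_CURRENT_CYCLE',
          granularity => 'PARTITION',
          cascade     => FALSE);
    END;
    /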
    Justin

  • How to improve performance for Azure Table Storage bulk loads

    Hello all,
    Would appreciate your help as we are facing a challenge.
    We are trying to bulk load Azure Table Storage. We have a file that contains nearly 2 million rows.
    We would need to reach a point where we can bulk load 100,000-150,000 entries per minute. Currently it takes more than 10 hours to process the file.
    We have tried Parallel.Foreach but it doesn't help. Today I discovered Partitioning in PLINQ. Would that be the way to go??
    Any ideas? I have spent nearly two days in trying to optimize it using PLINQ, but still I am not sure what is the best thing to do.
    Kindly, note that we shouldn't be using SQL/Azure SQL for this.
    I would really appreciate your help.
    Thanks

    I'd think you're just pooling the parallel connections to Azure, if you do it on one system.  You'd also have a bottleneck of round trip time from you, through the internet to Azure and back again.
    You could speed it up by moving the data file to the cloud and process it with a Cloud worker role.  That way you'd be in the datacenter (which is a much faster, more optimized network.)
    Or, if that's not fast enough - if you can split the data so multiple WorkerRoles could each process part of the file, you can use the VM's scale to put enough machines to it that it gets done quickly.
    Darin R.

  • Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 3 (NumberOfMultipleMatches).

    Hi,
    I have a file where fields are wrapped with ".
    =========== file sample
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    ==========
    I am having a .net method to remove the wrap characters and write out a file without wrap characters.
    ======================
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    ======================
    the .net code is here.
    ========================================
    public static string RemoveCharacter(string sFileName, char cRemoveChar)
    {
        object objLock = new object();
        // Build the output file name once so the name written is the same name returned to the caller.
        string sOutFileName = sFileName.Substring(0, sFileName.LastIndexOf('\\')) + "\\" + Guid.NewGuid().ToString();
        FileStream objInputFile = null, objOutFile = null;
        lock (objLock)
        {
            try
            {
                objInputFile = new FileStream(sFileName, FileMode.Open);
                objOutFile = new FileStream(sOutFileName, FileMode.Create);
                int nByteRead;
                // Copy byte by byte, dropping every occurrence of the wrap character.
                while ((nByteRead = objInputFile.ReadByte()) != -1)
                {
                    if (nByteRead != (int)cRemoveChar)
                        objOutFile.WriteByte((byte)nByteRead);
                }
            }
            finally
            {
                if (objInputFile != null) objInputFile.Close();
                if (objOutFile != null) objOutFile.Close();
            }
        }
        return sOutFileName;
    }
    ==================================
    however when I run the bulk load utility I get the error 
    =======================================
    Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 3 (NumberOfMultipleMatches).
    ==========================================
    the bulk insert statement is as follows
    =========================================
     BULK INSERT Temp
     FROM '<file name>'
     WITH (
        FIELDTERMINATOR = ',',
        KEEPNULLS
     )
    Does anybody know what is happening and what needs to be done ?
    PLEASE HELP
    Thanks in advance 
    Vikram

    To load that file with BULK INSERT, use this format file:
    9.0
    4
    1 SQLCHAR 0 0 "\""      0 ""    ""
    2 SQLCHAR 0 0 "\",\""   1 col1  Latin1_General_CI_AS
    3 SQLCHAR 0 0 "\",\""   2 col2  Latin1_General_CI_AS
    4 SQLCHAR 0 0 "\"\r\n"  3 col3  Latin1_General_CI_AS
    Note that the format file defines four fields while the file only seems to have three. The format file defines an empty field before the first quote.
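    Saved as, say, promo.fmt (the name and path are placeholders), the format file would then be referenced like this:
    BULK INSERT Temp
    FROM '<file name>'
    WITH (FORMATFILE = 'C:\load\promo.fmt', KEEPNULLS);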
    Or, since you already have a .NET program, use a stored procedure with a table-valued parameter instead. I have an example of how to do this here:
    http://www.sommarskog.se/arrays-in-sql-2008.html
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Bulk Load question for an insert statement.

    I'm looking to put the following statement into a FORALL statement using BULK COLLECT and I need some guidance.
    Am I going to be putting the SELECT statement into a cursor and then loading the cursor values into a variable of a defined nested table type?
    INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
    SELECT aor.associate_office_record_id ,
    sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
    FROM ASSOCIATE_OFFICE_RECORDS aor
    WHERE aor.OFFICE_ID = v_office_id
    AND (
    (aor.lt_assoc_stage_result_id in (4,8)
    AND v_officeWeekType.start_date >= trunc(aor.schedule_start_date)
    OR aor.lt_assoc_stage_result_id in (1, 2)
    ));

    I see people are reading this so for the insanely curious here's how I did it.
    TYPE AOR_REC IS RECORD(
       associate_office_record_id dbms_sql.number_table,
       week_id                    dbms_sql.number_table);  -- RJS. *** Setting up Type for use with Bulk Collect FORALL statements.
    v_a_rec AOR_REC;  -- RJS. *** Defining a variable of the defined Type to use with Bulk Collect FORALL statements.
    CURSOR cur_aor_ids IS  -- RJS. *** Cursor for BULK COLLECT.
       SELECT aor.associate_office_record_id associate_office_record_id,
              sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
       FROM   ASSOCIATE_OFFICE_RECORDS aor
       WHERE  aor.OFFICE_ID = v_office_id
       AND    ( (aor.lt_assoc_stage_result_id IN (4, 8)
                 AND v_officeWeekType.start_date >= TRUNC(aor.schedule_start_date))
                OR aor.lt_assoc_stage_result_id IN (1, 2) )
       FOR UPDATE NOWAIT;
    BEGIN
       OPEN cur_aor_ids;
       LOOP
          FETCH cur_aor_ids BULK COLLECT INTO
             v_a_rec.associate_office_record_id, v_a_rec.week_id;  -- RJS. *** Bulk load the cursor data into a buffer so the insert is done all at once.
          FORALL i IN 1..v_a_rec.associate_office_record_id.COUNT SAVE EXCEPTIONS
             INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
                (associate_office_record_id, week_id)
             VALUES
                (v_a_rec.associate_office_record_id(i), v_a_rec.week_id(i));  -- RJS. *** Single FORALL BULK INSERT statement.
          EXIT WHEN cur_aor_ids%NOTFOUND;
       END LOOP;
       CLOSE cur_aor_ids;
    EXCEPTION
       WHEN OTHERS THEN
          dbms_output.put_line('ERROR ENCOUNTERED IS SQLCODE = ' || SQLCODE || ' AND SQLERRM = ' || SQLERRM);
          dbms_output.put_line('Number of INSERT statements that failed: ' || SQL%BULK_EXCEPTIONS.COUNT);
    END;
    Easy right?

  • OIM Bulk Load: Insufficient privileges

    Hi All,
    I'm trying to use the OIM Bulk Load Utility and I keep getting this error message:
    Exception in thread "main" java.sql.SQLException: ORA-01031: insufficient privileges
    ORA-06512: at "OIMUSER.OIM_BLKLD_SP_CREATE_LOG", line 39
    ORA-06512: at "OIMUSER.OIM_BLKLD_PKG_USR", line 281
    I've followed the instructions and gone over everything a few times. The utility tests the connection to the database OK.
    I don't know much about oracle db's so I am not sure how to do even basic troubleshooting. Could I just give my OIMUSER full permissions? Shouldn't it have full permission as it is?
    I did have to create a tablespace for this utility; maybe the OIMUSER needs to be given access to this? I have no idea....
    Any help would be greatly appreciated!
    Alex

    I got the same error; at that time the DB OIM user had the following permissions:
    CREATE TABLE
    CREATE VIEW
    QUERY REWRITE
    UNLIMITED TABLESPACE
    EXECUTE ON SYS.DBMS_SHARED_POOL
    EXECUTE ON SYS.DBMS_SYSTEM
    SELECT ON SYS.DBA_2PC_PENDING
    SELECT ON SYS.DBA_PENDING_TRANSACTIONS
    SELECT ON SYS.PENDING_TRANS$
    SELECT ON SYS.V$XATRANS$
    CONNECT
    RESOURCE
    Later the DBA granted the following additional privileges and it worked like a charm:
    CREATE ANY INDEX  
    CREATE ANY SYNONYM  
    CREATE ANY TRIGGER  
    CREATE ANY TYPE  
    CREATE DATABASE LINK  
    CREATE JOB  
    CREATE LIBRARY  
    CREATE MATERIALIZED VIEW  
    CREATE PROCEDURE  
    CREATE SEQUENCE  
    CREATE TABLE  
    CREATE TRIGGER  
    CREATE VIEW  
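    In other words, roughly the following run as a DBA (the grantee name is a placeholder for your OIM database user):
    GRANT CREATE ANY INDEX, CREATE ANY SYNONYM, CREATE ANY TRIGGER,
          CREATE ANY TYPE, CREATE DATABASE LINK, CREATE JOB,
          CREATE LIBRARY, CREATE MATERIALIZED VIEW, CREATE PROCEDURE,
          CREATE SEQUENCE, CREATE TABLE, CREATE TRIGGER, CREATE VIEW
       TO oimuser;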

  • Unable to perform bulk load in BODS 3.2

    Hi
    We have upgraded our Development server from BODS 3.0 to BODS 3.2. There is a dataflow wherein the job uses the Bulk Load option. The job gives warnings at that dataflow and all the data is shown as warnings in the log; no data is loaded into the target table. We have also recently migrated from SQL Server 2005 to SQL Server 2008. Can someone let me know why the Bulk Load option is not working in BODS 3.2?
    Kind Regards,
    Mahesh

    Hi,
    I want to upgrade SQL Server 2005 to SQL Server 2008 with BODS 4.0.
    I would like to know the recommendations for doing it.
    - How do I use SQL Server 2008 with BODS?
    - What is the performance like on SQL Server 2008?
    - What are the things to evaluate?
    - Is it necessary to migrate with backup/restore mode?
    - What are the steps of the migration?
    - Can we merge the disabled in BODS?

  • Sqlldr bulk loading

    I'm having a strange problem bulk loading about 15k+ rows into an existing spatial table.
    I use a perl script to 'watch' a directory into which the USGS pushes a file. The first line in the file gets parsed and entered into a table (no problem here). For the rest of the file I generate a sqlldr control file that appends the data into my table. After the insert I do a transform of the lat/long columns and make an sdo_point like so:
    $upstmt = $dbh->prepare("update shake_xyz set shape = dd832utm(lon,lat) where id = ?" );
    $upstmt->bind_param(1,$id);
    $upstmt->execute() or die "cant update! $DBI::errstr\n";
    If a spatial index does not exist on the table, there's no problem.
    If one does and I don't use a direct path load, there's no problem (except that this takes way longer than I'd like).
    When I try to do a direct path load with a spatial index, I get the following rather distressing message:
    SQL*Loader-926: OCI error while uldlfca:OCIDirPathColArrayLoadStream for table SHAKE_XYZ
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (ptc_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (ptc_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (pts_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (pts_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (sbpt_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (sbpt_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (pts_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (sbpt_hp)
    ORA-24338: statement handle not executed
    I've truncated the message; it keeps repeating for another 50+ lines.
    here is what my input file looks like:
    elvis[kxv4]{8}%     head grid-9.xyz     
    51111345 5.5 39.81 -120.64 AUG 10 2001 20:19:27 GMT -121.883 38.9833 -119.383 40.65 (Process time: Thu Sep 26 09:24:45 2002)
    -121.8833 40.6500 0.9135 0.4790 3.0900 1.3490 0.5076 0.1056
    -121.8666 40.6500 0.7209 0.3115 2.8600 1.0659 0.3301 0.0688
    -121.8500 40.6500 0.7195 0.3110 2.8600 1.0650 0.3297 0.0689
    -121.8333 40.6500 0.9087 0.4771 3.0800 1.3465 0.5059 0.1059
    -121.8166 40.6500 0.7170 0.3102 2.8600 1.0638 0.3290 0.0690
    -121.8000 40.6500 0.7161 0.3100 2.8600 1.0640 0.3287 0.0690
    -121.7833 40.6500 0.7156 0.3098 2.8600 1.0649 0.3287 0.0690
    -121.7666 40.6500 0.7156 0.3099 2.8600 1.0666 0.3288 0.0690
    -121.7500 40.6500 0.7162 0.3103 2.8600 1.0693 0.3292 0.0690
    here is what my control file looks like:
    elvis[kxv4]{10}% head -20 51111345grid-9.xyz
    LOAD DATA
    INFILE *
    INTO TABLE shake_xyz
    APPEND
    FIELDS TERMINATED BY ',' TRAILING NULLCOLS
    ( ID,
    LON,
    LAT,
    PGA,
    PGV,
    MMI,
    PSA1 NULLIF PSA1=BLANKS,
    PSA2 NULLIF PSA2=BLANKS,
    PSA3 NULLIF PSA3=BLANKS,
    OBJECTID SEQUENCE(MAX,1)
    )
    BEGINDATA
    51111345,-121.8833,40.6500,0.9135,0.4790,3.0900,1.3490,0.5076,0.1056
    51111345,-121.8666,40.6500,0.7209,0.3115,2.8600,1.0659,0.3301,0.0688
    for earthquakes with magnitude less than 5 the PSA% columns are blank ...
    i'm woefully ignorant of sqlldr and how it works with indexes
    any advice appreciated
    thanks
    --kassim

    thanks dan,
    thats a great idea (im using 9.2.0.4)
    each event is assigned a unique id by usgs. But this number does not look like a sequence (higher numbers do not correspond to more recent events), but i also assign a sequence number to the event_id which i keep track of in an event table which has only one entry for each event instead of the 15k entries for the shake table.
    time to read up on partitions...
    thanks again; you rock!
    --kassim

  • Bulk Loading with Availability Group

    We have a critical database that is bulk loaded every 30 minutes.  The database is currently in simple recovery model for the obvious reason of keeping the transaction log manageable.  I would like to add it to our SQL Server 2012 availability
    group so the database is always available.  It has to be set to full for the recovery model.  Is there a good way to keep the transaction log from getting unwieldy without doing backups as often as the loads take place?  The database is a little
    over a GB and will only grow about 1-2% a month.
    Thor

    If the database is small, plan a daily full backup during non-load hours and schedule log backups frequently in order to keep log usage down.
    If the database is huge, plan daily differentials plus a weekly full backup and frequent log backups (you need to choose the timings).
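    For instance, a frequent log backup along these lines, run from a SQL Server Agent job every few minutes (database and share names are placeholders):
    DECLARE @file nvarchar(260) =
        N'\\backupserver\sqlbackups\CriticalDB_log_'
        + CONVERT(nvarchar(8), GETDATE(), 112)
        + N'_' + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '') + N'.trn';
    BACKUP LOG CriticalDB TO DISK = @file WITH COMPRESSION;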
    Thanks, Rama Udaya.K (http://rama38udaya.wordpress.com)

  • Bulk Loader Program to load large xml document

    I am looking for a bulk loader database program that will load a very large XML document. The simple bulk loader application available on the Oracle site will not load this document due to its size, which is approximately 20 MB. Please advise ASAP. Thank you.

    From the above document:
    Storing XML Data Across Tables
    Question
    Can XML- SQL Utility store XML data across tables?
    Answer
    Currently XML-SQL Utility (XSU) can only store to a single table. It maps a canonical representation of an XML document into any table/view. But of course there is a way to store XML with the XSU across table. One can do this using XSLT to transform any document into multiple documents and insert them separately. Another way is to define views over multiple tables (object views if needed) and then do the inserts ... into the view. If the view is inherently non-updatable (because of complex joins, ...), then one can use INSTEAD-OF triggers over the views to do the inserts.
    -- I've tried this, works fine.
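    To illustrate the view-plus-INSTEAD-OF-trigger idea (all table, view, and column names below are made up):
    CREATE OR REPLACE VIEW order_full_v AS
       SELECT o.order_id, o.customer_id, i.line_no, i.product_id, i.qty
       FROM   orders o JOIN order_items i ON i.order_id = o.order_id;
    CREATE OR REPLACE TRIGGER order_full_v_ins
       INSTEAD OF INSERT ON order_full_v
       FOR EACH ROW
    BEGIN
       BEGIN
          -- parent row: insert it once, ignore repeats of the same header values
          INSERT INTO orders (order_id, customer_id)
          VALUES (:NEW.order_id, :NEW.customer_id);
       EXCEPTION
          WHEN DUP_VAL_ON_INDEX THEN NULL;
       END;
       INSERT INTO order_items (order_id, line_no, product_id, qty)
       VALUES (:NEW.order_id, :NEW.line_no, :NEW.product_id, :NEW.qty);
    END;
    /
    XSU (or any other loader) can then insert into order_full_v and the trigger distributes each row across the two tables.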
