Bulk Insert - Commit Frequency

Hi,
I am working on OWB 9.2.
I have a mapping that inserts more than 90,000 records into a target table.
When I execute the SQL query generated by OWB directly in the database, it returns the result in 6 minutes.
But when I execute the OWB mapping in 'Set based fail over to row based' mode with Bulk Size = 50, Commit Frequency = 1000 and Max Errors = 50, it takes almost 40 minutes. While the mapping was running I checked the record count in the target table: it was increasing by only 50 rows at a time, which is why it takes so long to finish.
What changes do I need to make so that these records are inserted as fast as possible?

If it is inserting 50 rows at a time, it means the set-based step is failing and the mapping is switching to row-based mode.
If it always fails in set-based mode, it is better to run the mapping in row-based mode from the start, so that no time is spent on the set-based attempt.
In row-based mode you can increase the bulk size (e.g. 5000 or 10000, or more) for more efficient execution. Keep the commit frequency the same as the bulk size.

Similar Messages

  • [Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line

    Introduction
    Some people want to know whether we can have multiple field terminators in BULK INSERT or BCP commands, and how to implement them.
    Solution
    For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, and the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after it is treated as belonging to the next field or record. I have run a test: if you want to use BULK INSERT or BCP commands with multiple-character field terminators, you can refer to the following commands.
    At the Windows command line:
    bcp <Databasename.schema.tablename> out "<path>" -c -t <field_terminator> -r <row_terminator> -T
    For example, you can export data from the Department table with the bcp command and use the comma and colon (,:) together as one field terminator:
    bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
    The resulting text file is shown in a screenshot in the original post.
    However, if you try to pass multiple -t field terminators to bcp, as in the following command, only the last terminator defined is used:
    bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
    The resulting text file is shown in a screenshot in the original post.
    Note that consecutive field terminators imply empty fields. For example, with the comma-separated format below,
    column1,,column2,,,column3
    it may look as if only 3 fields are separated (column1, column2 and column3). In fact, after testing, there are 6 fields here: every occurrence of the field terminator (the comma in this case) ends a field, so the empty strings between consecutive commas count as fields too.
    Meanwhile, when using BULK INSERT to import the data file into a SQL table, you can only specify a single terminator (which may consist of multiple characters) in the BULK INSERT statement.
    USE <testdatabase>;
    GO
    BULK INSERT <your table> FROM '<path>'
    WITH (
        DATAFILETYPE = 'char | native | widechar | widenative',
        FIELDTERMINATOR = '<field_terminator>'
    );
    GO
    For example, to use BULK INSERT to import the data of the C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',:'
    );
    The new table then contains the imported rows (see the screenshot in the original post).
    We cannot declare multiple separate field terminators (, and :) in the statement, as in the following format; a duplicate-option error will occur.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',',
        FIELDTERMINATOR = ':'
    );
    However, if you want to use a data file that has fewer or more fields than the target table, you can handle it by setting the extra field length to 0 for the missing fields, or by omitting or skipping the surplus fields during the bulk copy procedure, as sketched below.
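    As an illustration of the field-skipping technique just mentioned, here is a hedged sketch of a non-XML bcp format file (the column names, lengths and the 10.0 version header are illustrative, not from the original post). Setting a field's server column order to 0 tells bcp/BULK INSERT to read that field from the data file but not load it into any table column:
    10.0
    4
    1   SQLCHAR   0   100   ","      1   Column1   SQL_Latin1_General_CP1_CI_AS
    2   SQLCHAR   0   100   ","      2   Column2   SQL_Latin1_General_CP1_CI_AS
    3   SQLCHAR   0   100   ","      0   Skipped   SQL_Latin1_General_CP1_CI_AS
    4   SQLCHAR   0   100   "\r\n"   3   Column3   SQL_Latin1_General_CP1_CI_AS
    Such a format file is referenced with FORMATFILE = '<path>' in the BULK INSERT WITH clause, or with -f <path> on the bcp command line.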
    More Information
    For more information about field terminators, you can review the following articles.
    http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
    http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
    http://technet.microsoft.com/en-us/library/ms191485.aspx
    http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
    http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
    Applies to
    SQL Server 2012
    SQL Server 2008R2
    SQL Server 2005
    SQL Server 2000


  • BULK INSERT into View w/ Instead Of Trigger - DML ERROR LOGGING Issue

    Oracle 10.2.0.4
    I cannot figure out why I cannot get bulk insert errors to aggregate and allow the insert to continue when bulk inserting into a view with an INSTEAD OF trigger. Whether I use the LOG ERRORS clause or SQL%BULK_EXCEPTIONS, the insert works until it hits the first exception and then exits.
    Here's what I'm doing:
    1. I'm bulk inserting into a view with an INSTEAD OF trigger on it that performs the actual updating of the underlying table. This table is a child table with a foreign key constraint to a reference table containing the primary key. In the INSTEAD OF trigger, it attempts to insert a record into the child table and I get the following exception: 5:37:55 ORA-02291: integrity constraint (FK_TEST_TABLE) violated - parent key not found, which is expected, but the error should be logged in the error table and the rest of the inserts should complete. Instead the bulk insert exits.
    2. If I change this to bulk insert into the underlying table directly, it works, all errors get put into the error logging table and the insert completes all non-exception records.
    Here's the "test" procedure I created to test my scenario:
    View: V_TEST_TABLE
    Underlying Table: TEST_TABLE
    PROCEDURE BulkTest
    IS
        TYPE remDataType IS TABLE OF v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
        varRemData remDataType;
    BEGIN
        SELECT /*+ DRIVING_SITE(r) */ *
        BULK COLLECT INTO varRemData
        FROM TEST_TABLE@REMOTE_LINK r
        WHERE effectiveday < TO_DATE('06/16/2012 04', 'mm/dd/yyyy hh24')
          AND terminationday > TO_DATE('06/14/2012 04', 'mm/dd/yyyy hh24');

        BEGIN
            FORALL idx IN varRemData.FIRST .. varRemData.LAST
                INSERT INTO v_TEST_TABLE VALUES varRemData(idx)
                    LOG ERRORS INTO dbcompare.ERR$_TEST_TABLE ('INSERT') REJECT LIMIT UNLIMITED;
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.put_line('ErrorCode: ' || SQLCODE);
        END;
        COMMIT;
    END;
    I've reviewed Oracle's documentation on both DML logging tools and neither has any restrictions (at least that I can see) that would prevent this from working correctly.
    Any help would be appreciated....
    Thanks,
    Steve

    Thanks - obviously this is my first post, and I'm desperate to figure out why this won't work.
    The code I sent is only a test proc to troubleshoot the issue; the WHEN OTHERS handler with the debug statement is only there to capture the insert failing without aggregating the errors, and it won't be in the real proc.
    Thanks,
    Steve
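    For reference, here is a minimal sketch of the SQL%BULK_EXCEPTIONS variant mentioned in the question - the inner block of the posted test procedure rewritten with SAVE EXCEPTIONS (this only illustrates that approach, it is not a confirmed fix for the trigger behaviour described above):
    DECLARE
        bulk_errors EXCEPTION;
        PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
    BEGIN
        FORALL idx IN varRemData.FIRST .. varRemData.LAST SAVE EXCEPTIONS
            INSERT INTO v_TEST_TABLE VALUES varRemData(idx);
    EXCEPTION
        WHEN bulk_errors THEN
            FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
                DBMS_OUTPUT.put_line('Row ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                                     ' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
            END LOOP;
    END;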

  • How to use BULK INSERT for a data from a cursor?

    Oracle 10G enterprise edition.
    I tried to bulk insert data returned from a cursor, and it raises an error:
    PLS-00302: component 'LAST' must be declared
    I need some help using the bulk INSERT here. Can anyone help me spot the mistake I have made?
    CREATE OR REPLACE PROCEDURE HOT_ADMIN.get_search_keyword_stats_prc
    IS
    CURSOR c_get_scenarios
    IS
    SELECT a.*,ROWNUM rnum
    FROM (
    SELECT TRUNC(r.search_date) sdate,
    r.search_hits hits,
    r.search_type stype,
    r.search_qualification qual,
    r.search_location loc,
    r.search_town stown,
    r.search_postcode pcode,
    r.search_college college,
    r.search_colname colname,
    r.search_text text,
    r.affiliate_id affiliate,
    r.search_study_mode smode,
    r.location_hint hint,
    r.search_posttown ptown,
    COUNT(1) cnt
    FROM w_search_headers r
    WHERE search_text IS NOT NULL
    AND NVL(search_type,' ') <> 'C'
    AND TRUNC(search_date)= TO_DATE(TO_CHAR(SYSDATE-1,'DD-MON-RRRR'))
    GROUP BY TRUNC(r.search_date),
    r.search_hits,
    r.search_type,
    r.search_qualification,
    r.search_location,
    r.search_town,
    r.search_postcode,
    r.search_college,
    r.search_colname,
    r.search_text,
    r.affiliate_id,
    r.search_study_mode,
    r.location_hint,
    r.search_posttown
    ORDER BY cnt desc
    ) a
    WHERE ROWNUM <=1000;
    lc_get_data c_get_scenarios%ROWTYPE;
    BEGIN
    OPEN c_get_scenarios;
    FETCH c_get_scenarios into lc_get_data;
    CLOSE c_get_scenarios;
    FORALL i IN 1..lc_get_data.last
    INSERT INTO W_SEARCH_SCENARIO_STATS VALUES ( i.sdate,
    i.hits,
    i.stype,
    i.qual,
    i.loc,
    i.stown,
    i.pcode,
    i.college,
    i.colname,
    i.text,
    i.affiliate,
    i.smode,
    i.hint,
    i.ptown,
    i.cnt
    COMMIT;
    END;

    This isn't what you asked, but I've generally found it helpful to list the columns in an INSERT statement before the values. It is of course optional, but useful for reference when looking at the statement later
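    For reference, here is a hedged sketch (untested against the poster's schema) of one way the procedure could be restructured so that FORALL has a collection to iterate over. PLS-00302 is raised because lc_get_data is a single %ROWTYPE record, so .LAST does not exist on it; fetching with BULK COLLECT into a collection fixes that. The sketch assumes W_SEARCH_SCENARIO_STATS has exactly the fifteen columns selected below, in the same order, so whole records can be inserted:
    CREATE OR REPLACE PROCEDURE HOT_ADMIN.get_search_keyword_stats_prc
    IS
        -- a collection of target-table rows instead of a single record
        TYPE t_stats IS TABLE OF W_SEARCH_SCENARIO_STATS%ROWTYPE;
        l_stats t_stats;
    BEGIN
        SELECT sdate, hits, stype, qual, loc, stown, pcode, college,
               colname, text, affiliate, smode, hint, ptown, cnt
        BULK COLLECT INTO l_stats
        FROM (
            SELECT TRUNC(r.search_date) sdate,
                   r.search_hits hits,
                   r.search_type stype,
                   r.search_qualification qual,
                   r.search_location loc,
                   r.search_town stown,
                   r.search_postcode pcode,
                   r.search_college college,
                   r.search_colname colname,
                   r.search_text text,
                   r.affiliate_id affiliate,
                   r.search_study_mode smode,
                   r.location_hint hint,
                   r.search_posttown ptown,
                   COUNT(1) cnt
            FROM w_search_headers r
            WHERE r.search_text IS NOT NULL
              AND NVL(r.search_type, ' ') <> 'C'
              AND TRUNC(r.search_date) = TO_DATE(TO_CHAR(SYSDATE - 1, 'DD-MON-RRRR'))
            GROUP BY TRUNC(r.search_date), r.search_hits, r.search_type,
                     r.search_qualification, r.search_location, r.search_town,
                     r.search_postcode, r.search_college, r.search_colname,
                     r.search_text, r.affiliate_id, r.search_study_mode,
                     r.location_hint, r.search_posttown
            ORDER BY cnt DESC
        )
        WHERE ROWNUM <= 1000;

        FORALL i IN 1 .. l_stats.COUNT
            INSERT INTO W_SEARCH_SCENARIO_STATS VALUES l_stats(i);
        COMMIT;
    END;
    /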

  • SQL Server 2012 Express bulk Insert flat file 1million rows with "" as delimeter

    Hi,
    I wanted to see if anyone can help me out. I am on SQL Server 2012 Express. I cannot use OPENROWSET because my system is x64 and my Microsoft Office suite is x86 (Microsoft.Jet.OLEDB.4.0).
    I tried the Import wizard and it is not working either.
    The only thing that lets me import this large file is:
    CREATE TABLE #LOADLARGEFLATFILE
    (
        Column1 varchar(100),
        Column2 varchar(100),
        Column3 varchar(100),
        Column4 nvarchar(max)
    );

    BULK INSERT #LOADLARGEFLATFILE
    FROM 'C:\FolderBigFile\LARGEFLATFILE.txt'
    WITH (
        FIRSTROW = 2,
        FIELDTERMINATOR = '\t',
        ROWTERMINATOR = '\n'
    );
    The problem with CREATE TABLE and BULK INSERT is that my flat file comes with text qualifiers - "". Is there a way to prevent the quotes "" from loading in the bulk insert? Below is the data. 
    Column1	Column2	Column3	Column4
    "Socket Adapter"	8456AB	$4.25	"Item - Square Drive Socket Adapter | For "
    "Butt Splice"	9586CB	$14.51	"Item - Butt Splice"
    "Bleach"	6589TE	$27.30	"Item - Bleach | Size - 96 oz. | Container Type"
    Ed,
    Edwin Lopera

    Hi lgnusLumen,
    According to your description, you use BULK INSERT to import data from a data file to the SQL table. However, to be usable as a data file for bulk import, a CSV file must comply with the following restrictions:
    1. Data fields never contain the field terminator.
    2. Either none or all of the values in a data field are enclosed in quotation marks ("").
    In your data file the quotes aren't applied consistently. If you want to prevent the quotes "" from being loaded by the bulk insert, I recommend you use the SQL Server Import and Export Wizard, which is available in the Express edition; it allows you to strip the double quotes from columns (see the screenshot in the original post).
    In other SQL Server editions, we can use SQL Server Integration Services (SSIS) to import data from a flat file (.csv) while removing the double quotes. For more information, you can review the following article.
    http://www.mssqltips.com/sqlservertip/1316/strip-double-quotes-from-an-import-file-in-integration-services-ssis/
    In addition, you can create a function to convert a CSV to a usable format for Bulk Insert. It will replace all field-delimiting commas with a new delimiter. You can then use the new field delimiter instead of a comma. For more information, see:
    http://stackoverflow.com/questions/782353/sql-server-bulk-insert-of-csv-file-with-inconsistent-quotes
    Regards,
    Sofiya Li
    TechNet Community Support
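    Another workaround sometimes used (a hedged sketch against the temp table defined earlier in this thread, not something from the reply above): load the file as-is and strip the double quotes afterwards, since only the quoted columns need cleaning:
    -- strip the text qualifiers after the BULK INSERT into the staging table
    UPDATE #LOADLARGEFLATFILE
    SET Column1 = REPLACE(Column1, '"', ''),
        Column4 = REPLACE(Column4, '"', '');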

  • First Row Record is not inserted from CSV file while bulk insert in sql server

    Hi Everyone,
    I have a csv file that needs to be inserted into SQL Server. The csv file format will be like below:
    1,Mr,"x,y",4
    2,Mr,"a,b",5
    3,Ms,"v,b",6
    During the bulk insert it considers the 2nd column as two values (comma separated) and makes two entries, so I used FieldTerminator.xml.
    Now the fields are entered into the columns correctly. But the problem now is that the first row of the csv file is not read into SQL Server. When I remove the terminator I get all the records, but I must use the above terminator; when I use it, I do not get the first row record.
    Please suggest a solution.
    Thanks,
    Selvam

    Hi,
    I have a csv file (comma (,) delimited) like this which is to be inserted into SQL Server. The format of the file when opened in Notepad is like below:
    Id,FirstName,LastName,FullName,Gender
    1,xx,yy,"xx,yy",M
    2,zz,cc,"zz,cc",F
    3,aa,vv,"aa,vv",F
    Below is the bulk insert query used to insert the above records:
    EXEC('BULK INSERT EmployeeData FROM ''' + @FilePath + ''' WITH
    (FORMATFILE = ''d:\FieldTerminator.xml'',
    ROWTERMINATOR = ''\n'',
    FIRSTROW = 2)')
    Here, I have used a format file for the "FullName" field, which has a comma (,) within it. The format file itself is shown in the original post.
    The problem is that it skips the first record (1,xx,yy,"xx,yy",M) when I use the format file. When I remove the format file from the query it takes all the records, but then the "FullName" field causes a problem because of the comma (,) within the field, so I must use the format file to handle this. Please suggest why the first record is always skipped when I use the above format file.
    If I give FIRSTROW=1 in the bulk insert, it shows the error "String or binary data would be truncated. The statement has been terminated." I have checked the datatype lengths.
    Please let me know the solution.
    Regards,
    Selvam. M

  • BCP-style bulk insert from remote C++ ODBC Native client application

    I am trying to find documentation or sample code for performing bulk inserts into SQL Server 2012 from a remote client using the ODBC Native Client driver on Linux. We currently perform INSERT statements on blocks of data, wrapping them in BEGIN/COMMIT, and achieve roughly half the throughput of bcp reading from a delimited text file. While there are many web pages talking about bulk inserts via the native driver, this page (http://technet.microsoft.com/en-us/library/ms130792.aspx) seems closest to what I'm after, but it doesn't go into any detail or give API calls. The referenced header file is just a bunch of options and constants, so presumably one gains access to the bulk functions via the standard ODBC mechanism; the question is how.
    For clarity, I am NOT interested in:
    BULK INSERT: because it requires a server-side data file or a UNC path with appropriate permissions (doesn't work from Linux)
    INSERT ... SELECT * FROM OPENROWSET(BULK ...): same problem as above
    IRowsetFastLoad: OLE DB, but I need ODBC on Linux.
    Basically, I want to emulate BCP. I don't want to *run* BCP because it requires landing the data to disk.
    Thanks
    john
    John Lilley Chief Architect RedPoint Global Inc.

    Other than block inserts within BEGIN/COMMIT transaction blocks or running bcp, is there anything else that can be done on Linux?
    No other option from Linux that I am aware of. The SQL Server Native Client ODBC driver also supports table-valued parameters, which can be used to stream data, but the Linux ODBC driver API doesn't have a way to do that either. That said, I would still expect file-based BCP to significantly outperform inserts with large batches. I've seen a rate of 100K rows/sec with this technique, including the file-creation overhead, but much depends on the particulars of your use case.
    Consider voting for this on Connect.  BCP is on the roadmap but no date yet: 
    https://connect.microsoft.com/SQLServer/SearchResults.aspx?SearchQuery=linux+odbc+bcp
    Also, I filed a Connect item for TVP support:
    https://connect.microsoft.com/SQLServer/feedback/details/874616/add-tvp-support-to-sql-server-odbc-driver-for-linux
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
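    For readers unfamiliar with table-valued parameters, here is a hedged sketch of what the server-side objects look like (the type, procedure and table names are illustrative, not from this thread); the limitation discussed above is that the Linux ODBC driver could not bind the @rows parameter from the client:
    CREATE TYPE dbo.LoadRowType AS TABLE (col1 int, col2 varchar(100));
    GO
    CREATE PROCEDURE dbo.LoadRows @rows dbo.LoadRowType READONLY
    AS
        INSERT INTO dbo.TargetTable (col1, col2)
        SELECT col1, col2 FROM @rows;
    GO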

  • Simple insert or bulk insert

    Hi,
    I have a problem regarding bulk insert.
    I have 4 tables, each with 16 partitions, and I am inserting nearly 200,000 records into each table.
    Previously I tried a bulk insert:
    OPEN insert_tab1;
    LOOP
        FETCH insert_tab1 BULK COLLECT INTO type_insert_tab1 LIMIT 40000;  -- the full fetch is near 200,000 records
        FORALL i IN 1 .. type_insert_tab1.COUNT
            INSERT INTO tab1 VALUES type_insert_tab1(i);
        COMMIT;
        EXIT WHEN insert_tab1%NOTFOUND;
    END LOOP;
    I did a similar insert for the three other tables, each with the commit statement, and into each table we insert approx. 200,000 records.
    But I got a 'snapshot too old' error. How can I modify this to reduce commits and buffer use, and to get less execution time?
    Or shall I instead just use a single insert:
    insert into tab1
    (col1,
    col7)
    select * from tab2;
    Thanks in advance

    "But I got snapshot too old error. How can I modify this to reduce commits and buffer use and less execution time?" - You can reduce the number of commits by taking the commit out of the loop.
    It might be worth looking at the execution plan of the cursor in case it is less efficient than it could be.
    @ user11087632:
    Direct path insert uses fewer system resources, not more.
    The target table would need to be defined as NOLOGGING to get the full benefit; also indexes will affect direct path performance, and any foreign key constraints or row-level triggers will make it silently revert to conventional insert. And of course you lose the logging.
    Edited by: William Robertson on Jun 4, 2009 11:23 PM
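    A hedged sketch of the two suggestions above, using the table and cursor names from the original post (adjust to your own schema). Note that the APPEND hint gives direct-path behaviour only for a plain INSERT ... SELECT, and the target needs NOLOGGING for the full benefit:
    -- 1) Keep the bulk loop, but commit once after it instead of per batch:
    OPEN insert_tab1;
    LOOP
        FETCH insert_tab1 BULK COLLECT INTO type_insert_tab1 LIMIT 40000;
        FORALL i IN 1 .. type_insert_tab1.COUNT
            INSERT INTO tab1 VALUES type_insert_tab1(i);
        EXIT WHEN insert_tab1%NOTFOUND;
    END LOOP;
    CLOSE insert_tab1;
    COMMIT;

    -- 2) Or replace the PL/SQL loop with a single direct-path insert:
    INSERT /*+ APPEND */ INTO tab1
    SELECT * FROM tab2;
    COMMIT;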

  • Cannot fetch a row from OLE DB provider "BULK" with bulk insert task

    Hi, folks:
    I created a simple SSIS package. On the Control Flow, I created a Bulk Insert Task with a destination connection to the local SQL Server and a csv file from a local folder, specifying the comma delimiter. Then I executed the task and got this long error message.
    [Bulk Insert Task] Error: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.".

    I got the same error with some additional error details (below). All I had to do to fix the problem was to set the Timeout property of the SQL Server Destination to 0.
    I was using the following components:
    SQL Server 2008
    SQL Server Integration Services 10.0
    Data Flow Task
    OLE DB Source – connecting to Oracle 11i
    SQL Server Destination – connecting to the local SQL Server 2008 instance
    Full Error Message:
    Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E14.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E14  Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".".
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E14  Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.".
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E14  Description: "The Bulk Insert operation of SQL Server Destination has timed out. Please consider increasing the value of Timeout property on the SQL Server Destination in the dataflow.".
    For SQL Server 2005 there is a hot fix available from Microsoft at http://support.microsoft.com/default.aspx/kb/937545

  • Sequence in Bulk Insert

    Hi All
    Just one doubt:
    can a sequence be used in a bulk insert (FORALL) query?

    Have you tried creating a test case? See below:
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL > CREATE table test_tab(id NUMBER, ename VARCHAR2(10));
    Table created.
    SQL > CREATE SEQUENCE test_seq;
    Sequence created.
    SQL > DECLARE
      2          TYPE e_list is TABLE OF test_tab.ename%TYPE INDEX BY PLS_INTEGER;
      3
      4          emps e_list;
      5  BEGIN
      6          emps(1) := 'Jeff';
      7          emps(2) := 'Jane';
      8          emps(3) := 'Joe';
      9
    10          FORALL i IN emps.FIRST .. emps.LAST
    11                  INSERT INTO test_tab(id, ename) VALUES(test_seq.NEXTVAL,emps(i));
    12
    13          COMMIT;
    14  END;
    15  /
    PL/SQL procedure successfully completed.
    SQL > SELECT * FROM test_tab;
                      ID ENAME
                       2 Jeff
                       3 Jane
                       4 Joe
    SQL > DROP SEQUENCE test_seq;
    Sequence dropped.
    SQL > DROP TABLE test_tab PURGE;
    Table dropped.

  • Query on Bulk Insert

    I am using bulk insert to load data into some tables. What limit should I specify for the commit in order to get the best possible performance?

    I don't think there is any fixed rule for identifying the limit number.
    As Morgan suggested, 2500 (or whatever number suits your purpose) is reasonable.
    Kindly check the following links ->
    http://www.psoug.org/reference/array_processing.html
    http://www.oracle.com/technology/oramag/oracle/08-mar/o28plsql.html
    http://www.dba-oracle.com/plsql/t_plsql_limit_clause.htm
    Regards.
    Satyaki De.
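    A minimal sketch of the BULK COLLECT ... LIMIT pattern the links above describe (the table names are placeholders, and 2500 is simply the value mentioned earlier in this thread). The LIMIT caps PL/SQL memory per batch, while committing once at the end keeps commit overhead low:
    DECLARE
        CURSOR c_src IS SELECT * FROM source_table;
        TYPE t_rows IS TABLE OF source_table%ROWTYPE;
        l_rows t_rows;
    BEGIN
        OPEN c_src;
        LOOP
            FETCH c_src BULK COLLECT INTO l_rows LIMIT 2500;
            FORALL i IN 1 .. l_rows.COUNT
                INSERT INTO target_table VALUES l_rows(i);
            EXIT WHEN c_src%NOTFOUND;
        END LOOP;
        CLOSE c_src;
        COMMIT;
    END;
    /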

  • FORALL bulk insert ..strange behaviour

    Hi all..
    I have the following problem..
    I use a FORALL bulk insert statement to insert a set of values using a collection that has only one row. The thing is, I get an 'ORA-01400: cannot insert NULL into <schema>.<table>.<column>' error message, yet the row has been inserted into the table!
    Any ideas why this is happening?

    Here is the sample code.
    The strange thing is that the cursor returns 1 row and the array also gets 1 row.
    FUNCTION MAIN() RETURN BOOLEAN IS
    -- This cursor retrieves all necessary values from CRD table to be inserted into PDCS_DEFERRED_RELATIONSHIP table
    CURSOR mycursor IS
    SELECT key1,
    key2,
    column1,
    date1,
    date2,
    txn_date
    FROM mytable pc
    WHERE
    -- create an array and a type for the scancrd cursor
    type t_arraysample IS TABLE OF mycursor%ROWTYPE;
    myarrayofvalues t_arraysample;
    TYPE t_target IS TABLE OF mytable%ROWTYPE;
    la_target t_target := t_target();
    BEGIN
    OPEN mycursor;
    FETCH mycursor BULK COLLECT
    INTO myarrayofvalues
    LIMIT 1000;
    myarrayofvalues.extend(1000);
    FOR x IN 1 .. myarrayofvalues.COUNT
    LOOP
    -- fetch variables into arrays
    gn_index := gn_index + 1;
    la_target(gn_index).key1 := myarrayofvalues(x).key1;
    la_target(gn_index).key2 := myarrayofvalues(x).key2;
    la_target(gn_index).column1 := myarrayofvalues(x).column1;
    la_target(gn_index).date1 := myarrayofvalues(x).date1;
         la_target(gn_index).date2 := myarrayofvalues(x).date2;
    la_target(gn_index).txn_date := myarrayofvalues(x).txn_date;
    END LOOP;
    -- call function to insert/update TABLE
    IF NOT MyFunction(la_target) THEN
    ROLLBACK;
    RAISE genericError;
         ELSE COMMIT;
    END IF;
    CLOSE mycursor;
    END IF;
    FUNCTION MyFunction(t_crd IN t_arraysample) return boolean;
    DECLARE
    BEGIN
    FORALL x IN la_target.FIRST..la_target.LAST
    INSERT INTO mytable
    VALUES la_target(x);
    END IF;

  • How can I debug a Bulk Insert error?

    I'm loading a bunch of files into SQL Server. All work fine, but one keeps erroring out on me. All files should be exactly the same in structure; they have different dates and other financial metrics, but the structure and field names should be identical. Nevertheless, one keeps conking out and throwing this error.
    Msg 4832, Level 16, State 1, Line 1
    Bulk load: An unexpected end of file was encountered in the data file.
    Msg 7399, Level 16, State 1, Line 1
    The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
    Msg 7330, Level 16, State 2, Line 1
    Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
    The ROWTERMINATOR should be CRLF, and when you look at it in Notepad++ that's what it looks like, but it must be something else, because I keep getting errors here.  I tried the good old:  ROWTERMINATOR='0x0a'
    That works on all files, but one, so there's something funky going on here, and I need to see what SQL Server is really doing.
    Is there some way to print out a log, or look at a log somewhere?
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    The first thing to try is to see if BCP likes the file. BCP and BULK INSERT adhere to the same spec, but they are different implementations and there are subtle differences.
    There is an ERRORFILE option, but it helps more when there is bad data.
    You can also use the BATCHSIZE option to see how many records in the file it swallows before things go bad. FIRSTROW and LASTROW can also help.
    All in all, it can be quite tedious to find that single row where things are different - and where BULK INSERT loses sync entirely. Keep in mind that it reads fields one by one, and if there is one field terminator too few on a line, it will consume the line feed at the end of the line as data.
    Erland Sommarskog, SQL Server MVP, [email protected]
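    A minimal sketch of the options mentioned above (the file path, table and terminators are placeholders, not from the original post). ERRORFILE captures rejected rows, while BATCHSIZE, FIRSTROW and LASTROW let you bisect the file to find where the load loses sync:
    BULK INSERT dbo.MyTable
    FROM 'C:\load\problem_file.txt'
    WITH (
        FIELDTERMINATOR = '\t',
        ROWTERMINATOR   = '0x0a',     -- the terminator that worked for the other files
        FIRSTROW        = 1,
        LASTROW         = 50000,      -- narrow this window until the failing row is isolated
        BATCHSIZE       = 10000,      -- committed batches show how far the load got
        MAXERRORS       = 10,
        ERRORFILE       = 'C:\load\problem_file.err'
    );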

  • ODBC, bulk inserts and dynamic SQL

    I am writing an application running on Windows NT 4, using the Oracle ODBC driver (8.01.05.00), that inserts many rows at a time (10000+) into an Oracle 8i database.
    At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
    Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions, such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the Oracle ODBC driver only seems to act on the first statement in the batch.
    I have also considered using the FORALL statement and the SQL*Loader utility.
    My problem with FORALL is that I'm not sure it works with dynamic SQL statements, and even if it did, how do I pass an array of statements to the stored procedure?
    I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires spawning a new process.
    What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement, where you can simply create an input file with all the records you want to insert and pass it along in an ODBC statement such as "BULK INSERT <filename>".
    Any ideas??

    Hi,
    I faced this same situation years ago (Oracle 7.2!) and had the following alternatives.
    1) Use a 3rd party tool such as Sagent or CA Info pump (very pricey $$$)
    2) Use Visual C++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
    3) Use SQL*Loader (the best performance, but no real control of what's happening).
    I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
    These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
    Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.
