Bulk Insert Issue with BCP

I'm running SQL Server 2008 R2 and trying to test out bcp in one of our databases. For almost all of the tables, the bcp export and BULK INSERT work fine using commands similar to the ones below. However, on a few tables I am experiencing an issue when trying to BULK INSERT the data back in.
Here are the details:
This is the bcp command to export out the data (via simple batch file):
 1.)
SET OUTPUT=K:\BCP_FIN_Test
SET ERRORLOG=C:\Temp\BCP_Error_Log
SET TIMINGS=C:\Temp\BCP_Timings
bcp "SELECT * FROM FS84RPT.dbo.PS_PO_LINE Inner Join FS84RPT.[dbo].[PS_RECV_LN_ACCTG] on PS_PO_LINE.BUSINESS_UNIT = PS_RECV_LN_ACCTG.BUSINESS_UNIT_PO and PS_PO_LINE.PO_ID= PS_RECV_LN_ACCTG.PO_ID and PS_PO_LINE.LINE_NBR= PS_RECV_LN_ACCTG.LINE_NBR WHERE
PS_RECV_LN_ACCTG.FISCAL_YEAR = '2014' and PS_RECV_LN_ACCTG.ACCOUNTING_PERIOD BETWEEN '9' AND '11' " queryout %OUTPUT%\PS_PO_LINE.txt -e %ERRORLOG%\PS_PO_LINE.err -o %TIMINGS%\PS_PO_LINE.txt -T -N
 2.)
BULK INSERT PS_PO_LINE FROM 'K:\BCP_FIN_Test\PS_PO_LINE.txt' WITH (DATAFILETYPE = 'widenative')
Msg 4869, Level 16, State 1, Line 1
The bulk load failed. Unexpected NULL value in data file row 2, column 22. The destination column (CNTRCT_RATE_MULT) is defined as NOT NULL.
Msg 4866, Level 16, State 4, Line 1
The bulk load failed. The column is too long in the data file for row 3, column 22. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
I've tried a few different things, including exporting as character data and importing with BULK INSERT PS_PO_LINE FROM 'K:\BCP_FIN_Test\PS_PO_LINE.txt' WITH (DATAFILETYPE = 'char'), but no luck.
Any help is appreciated.

It seems that the target table does not match your expectations.
Since I don't know exactly what you are doing, I will have to resort to guesses.
I note that your export query goes:
  SELECT * FROM FS84RPT.dbo.PS_PO_LINE Inner Join
And then you are importing into a table called PS_PO_LINE as well. But for your operation to make sense, the import table PS_PO_LINE must have not only the columns from PS_PO_LINE, but also all the columns from PS_RECV_LN_ACCTG. Maybe your SELECT should read
  SELECT PS_PO_LINE.* FROM FS84RPT.dbo.PS_PO_LINE Inner Join
or use an EXISTS clause to apply the PS_RECV_LN_ACCTG filter. (Assuming that the table appears in the query for filtering only.)
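For instance, a sketch of the EXISTS variant (assuming PS_RECV_LN_ACCTG is only needed to filter the rows, so that the exported columns match the target PS_PO_LINE table):
  -- Only PS_PO_LINE columns are exported; PS_RECV_LN_ACCTG acts purely as a filter.
  SELECT P.*
  FROM FS84RPT.dbo.PS_PO_LINE AS P
  WHERE EXISTS (SELECT *
                FROM FS84RPT.dbo.PS_RECV_LN_ACCTG AS R
                WHERE R.BUSINESS_UNIT_PO = P.BUSINESS_UNIT
                  AND R.PO_ID = P.PO_ID
                  AND R.LINE_NBR = P.LINE_NBR
                  AND R.FISCAL_YEAR = '2014'
                  AND R.ACCOUNTING_PERIOD BETWEEN '9' AND '11')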
Erland Sommarskog, SQL Server MVP, [email protected]

Similar Messages

  • Bulk API issue with contact imports

    Is the bulk API having validation issues? I can't update any existing imports or create any new ones.
    Simply posting the content below from the tutorial now results in a validation error:
    "name": "Docs Import Example",
    "fields": {
    "firstName": "{{Contact.Field(C_FirstName)}}",
    "lastName": "{{Contact.Field(C_LastName)}}",
    "emailAddress": "{{Contact.Field(C_EmailAddress)}}"
    "identifierFieldName": "emailAddress",
    "isSyncTriggeredOnImport" : "false"
    Here is the error:
    "failures":[{"field":"identifierFieldName","constraint":"Must be a string value, at least 1 character and at most 100 characters long."},{"field":"name","constraint":"Must be a string value, at least 1 character and at most 100 characters long."},{"field":"fields","constraint":"Is required."}]}

    Seems like an issue with the UD_ADUSER_LOCKED field value. Change it to a non-null value and retry.

  • ASE157 - Permission issue with BCP out under Solaris

    With ASE 15.7 SP60 on Solaris 10 u11, when data is bulk copied out using Sybase's bcp utility, the generated output file has Read/Write permissions for the owner, Read for the group, and no permissions for others, even if the umask specifies different permissions.
    sybase15@server:/sybdata2/backup
    !> umask
    0022
    If I try to create a file, I get the expected file permissions:
    sybase15@server:/sybdata2/backup
    !> touch test
    sybase15@server:/sybdata2/backup
    !> ls -l test
    -rw-r--r--   1 sybase15 sybase         0 Mar 21 16:59 test
    But bcp out grants different permissions:
    sybase15@server:/sybdata2/backup
    !> bcp db..table out /sybdata2/backup/table.bcp -Sservername -Uuser -Pxxx -c -t'(¨)' -r'(¯)\n'
    Starting copy...
    27 rows copied.
    Clock Time (ms.): total = 16 Avg = 0 (1687.50 rows per sec.)
    sybase15@server:/sybdata2/backup
    !> ls -l
    -rw-r----- 1 sybase15 sybase 3150 Mar 21 16:42 table.bcp
    Any idea?

    This was a deliberate change made under CR  683458 in BCP version 15.7 ESD 4, to adopt SAP's more stringent "secure by default" policy.
    A new feature has been developed that gives bcp a --filemode option that can be used to specify a less restrictive permission setting.  This new feature becomes available in the connectivity 15.7 SP120 and 16.0 GA C1 releases.
    Documentation: --filemode Option for isql and bcp
    -bret

  • Insert issue with connection to MS Access

    Hi
    New to Java and this forum, if I'm in the wrong forum for this question please direct me to the correct one, thanks.
    I'm able to connect and query my table no problem, but when I try to insert a record with a date field it fails. I've spent the last two days googling and trying every example I can find, but I cannot get this to work. Here's where I left off; assume I've made my connection to the database.
    public void insertRecord() {
        java.sql.Date dt = java.sql.Date.valueOf("1998-12-25");
        String sql = "INSERT INTO table1 ( name, date ) " +
                     "VALUES ( ?, " + dt + " )";
        try {
            PreparedStatement ps = connection.prepareStatement(sql);
            ps.setString(1, "Linn");
            ps.executeUpdate();
        } catch ( Exception e ) {
            System.err.println( "Failed to insert record." );
        }
    } // end method insertRecord
    Can someone show me where I'm going wrong here? Or can someone provide me with a simple example that actually works?
    My Access table has three fields:
    id <autonumber>
    name <text(50)>
    date <date/time custom format yyyy,mm,dd>
    Thanks in advance,
    Linn

    More specific:
    String sql = "INSERT INTO table1 ( name, date ) " +
                 "VALUES ( ?, #" + dt + "# )";

    Hi,
    Thanks for the response, I tried your suggestion but it still doesn't work for me.
    Anything else I can try?
    Thanks much,
    Linn

    Ah, got it! I remembered seeing a posting elsewhere that suggested putting square brackets around the field name "date", like this: [date]. I don't recall why they said this was necessary though. Tried that and it looks like it's working. To complete this for anyone else who finds this thread, here's the final insert command:
    String sql = "INSERT INTO table1 ( name, [date] ) " +
                 "VALUES ( ?, #" + dt + "#)";
    This worked for me.
    Thanks again,
    Linn

  • Inserting issue with MS Access

    Hi,
    I am trying to insert a record in MS Access. Following is the code.
    import java.sql.*;

    class SqlTest {
        public static void main(String args[]) throws Exception {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            Connection con = DriverManager.getConnection("jdbc:odbc:myDsn");
            Statement stmt = con.createStatement();
            stmt.executeUpdate("insert into emp values('e999', 'Raj')");
        }
    }
    After executing the program, if I check the database, the record has not been inserted. The program does not even throw an exception. If I execute the same program against Oracle, it works fine and the record gets inserted. Could someone please let me know what the problem could be?

    Hey guys,
    I also tried executing the query directly in MS Access and it worked just fine.
    I also found a workaround for my problem: if I select the data back after inserting it in the same program, then the record gets inserted. Maybe it has something to do with the COMMIT operation. But what if I only have to do the INSERT without selecting data?

  • Cannot fetch a row from OLE DB provider "BULK" with bulk insert task

    Hi, folks:
    I created a simple SSIS package. On the Control Flow, I created a Bulk Insert Task with a destination connection to the local SQL Server, a CSV file from a local folder, and a comma delimiter specified. Then I executed the task and got this long error message.
    [Bulk Insert Task] Error: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.".

    I got the same error with some additional error details (below). All I had to do to fix the problem was set the Timeout property of the SQL Server Destination to 0.
    I was using the following components:
    SQL Server 2008
    SQL Server Integration Services 10.0
    Data Flow Task
    OLE DB Source – connecting to Oracle 11i
    SQL Server Destination – connecting to the local SQL Server 2008 instance
    Full Error Message:
    Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E14.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E14  Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".".
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E14  Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.".
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E14  Description: "The Bulk Insert operation of SQL Server Destination has timed out. Please consider increasing the value of Timeout property on the SQL Server Destination in the dataflow.".
    For SQL Server 2005 there is a hot fix available from Microsoft at http://support.microsoft.com/default.aspx/kb/937545

  • Unique constraint violation error while bulk insert (TimesTen 7.0.1.0.0)

    Hi,
    I am trying to understand why I get the above error when doing bulk inserts via TimesTen into an Oracle database.
    Single inserts afterwards work perfectly (no record is dropped), and a bulk insert directly against Oracle works perfectly with the same data. We tried it with AWT, then with read-only tables and passthrough=2; nothing works. The error occurs even when the table is empty.
    Any suggestions? Is this a bug in TimesTen itself? Do I have to connect directly to the Oracle database for bulk inserts?
    Thanks in advance,
    Rajko Albrecht

    It may be a TT bug but if so it is not an obvious one since bulk inserts definitely work in TimesTen...
    Can you please provide:
    1. The schema of the table in question (including any indices)
    2. Details on how you are doing the bulk inserts (C/ODBC program, Java/JDBC program or ...). Actual program source code would be helpful.
    3. A (small) example of the data that you know would give this error.
    Thanks,
    Chris

  • Blob truncated with DbFactory and Bulk insert

    Hi,
    My platform is a Microsoft Windows Server 2003 R2 Server 5.2 Service Pack 2 (64-bit) with an Oracle Database 11g 11.1.0.6.0.
    I use the client Oracle 11g ODAC 11.1.0.7.20.
    Some strange behavior occurs when using DbProviderFactory and array-bound (bulk) commands with a Blob column and a parameter larger than 65536 bytes. Let me explain.
    First i create a dummy table in my schema :
    create table dummy (a number, b blob)
    To use bulk insert we can use code A with the Oracle-specific classes (executes successfully):
    byte[] b1 = new byte[65530];
    byte[] b2 = new byte[65540];
    Oracle.DataAccess.Client.OracleConnection conn = new Oracle.DataAccess.Client.OracleConnection("User Id=login;Password=pws;Data Source=orcl;");
    OracleCommand cmd = new OracleCommand("insert into dummy values (:p1,:p2)", conn);
    cmd.ArrayBindCount = 2;
    OracleParameter p1 = new OracleParameter("p1", OracleDbType.Int32);
    p1.Direction = ParameterDirection.Input;
    p1.Value = new int[] { 1, 2 };
    cmd.Parameters.Add(p1);
    OracleParameter p2 = new OracleParameter("p2", OracleDbType.Blob);
    p2.Direction = ParameterDirection.Input;
    p2.Value = new byte[][] { b1, b2 };
    cmd.Parameters.Add(p2);
    conn.Open(); cmd.ExecuteNonQuery(); conn.Close();
    We can write the same thing at an abstract level using the DbProviderFactories (code B):
    var factory = DbProviderFactories.GetFactory("Oracle.DataAccess.Client");
    DbConnection conn = factory.CreateConnection();
    conn.ConnectionString = "User Id=login;Password=pws;Data Source=orcl;";
    DbCommand cmd = conn.CreateCommand();
    cmd.CommandText = "insert into dummy values (:p1,:p2)";
    ((OracleCommand)cmd).ArrayBindCount = 2;
    DbParameter param = cmd.CreateParameter();
    param.ParameterName = "p1";
    param.DbType = DbType.Int32;
    param.Value = new int[] { 3, 4 };
    cmd.Parameters.Add(param);
    DbParameter param2 = cmd.CreateParameter();
    param2.ParameterName = "p2";
    param2.DbType = DbType.Binary;
    param2.Value = new byte[][] { b1, b2 };
    cmd.Parameters.Add(param2);
    conn.Open(); cmd.ExecuteNonQuery(); conn.Close();
    But this second code doesn't work: the second byte array is truncated to 4 bytes. It looks like an Int16 overflow.
    When using DbType.Binary, Oracle maps it to OracleDbType.Raw and not OracleDbType.Blob, so the problem seems to be with the Raw type. BUT if we use the same code without array binding, it works! So the problem is somewhere else...
    Why use a DbConnection? To be able to switch easily to another database type.
    So why use "((OracleCommand)cmd).ArrayBindCount"? To be able to use specific functionality of each database.
    I can fix the issue by casting the DbParameter to OracleParameter and setting OracleDbType to Blob, but why does the second code fail with bulk binding while it works with a simple query?

    BCP and BULK INSERT do not work the way you expect them to. What they do is consume fields in a round-robin fashion. That is, they first look for data for the first field, then for the second field, and so on.
    So in your case, they will first read one byte, then 20 bytes, etc., until they have read the two bytes for field 122. At this point they will consume bytes until they have found a sequence of carriage return and line feed.
    You say that some records in the file are incomplete. Say that there are only 60 fields in this file. Field 61 is four bytes. BCP and BULK INSERT will now read data for field 61 as CR+LF plus the first two bytes of the next row. CR+LF has no special meaning; it is just data at this point.
    You will have to write a program to parse the file, or use SSIS. But BCP and BULK INSERT are not your friends in this case.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • DB-Library BCP issue with SQL Server 2014

    Hello,
    My application uses DB-Library (ntwdblib.dll) to communicate with SQL Server 2014. We are having issues calling bcp-related functions like bcp_init, which always returns a FAIL status. Our application was working fine with SQL Server 2012 and all previous SQL releases, but we are getting the error below with SQL Server 2014:
    msg 4801, state 1, severity 16 : Insert bulk is not supported over this access protocol.
    Can anyone please help ?
    Thanks,
    Himanshu

    Hi Himanshu_kus,
    Is this still an issue, or can we close the thread?

  • [Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line

    Introduction
    Some people want to know whether we can have multiple field terminators in BULK INSERT or BCP commands, and how to implement them.
    Solution
    For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, and the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character is interpreted as belonging to the next field or record. I have done a test; if you want to use multiple field terminators with BULK INSERT or BCP, you can refer to the following commands.
    In Windows command line,
    bcp <database.schema.table> out "<path>" -c -t <field_terminator> -r <row_terminator> -T
    For example, you can export data from the Department table with the bcp command and use the comma and colon (,:) together as one field terminator.
    bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
    The resulting text file looks as follows:
    However, if you try to specify multiple -t switches, as in the following command, bcp will still use only the last terminator defined:
    bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
    The resulting text file looks as follows:
    Note that multiple consecutive terminators mean multiple (empty) fields. With comma-separated data such as
    column1,,column2,,,column3
    you might expect only 3 fields (column1, column2 and column3). In fact, after testing, there are 6 fields here, because every comma (the field terminator in this case) marks the end of a field.
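    To see where the 6 fields come from, here is a small hypothetical test (the table name and file path are made up for illustration):
    -- The file C:\myTerminatorTest.txt contains the single line:
    --     column1,,column2,,,column3
    -- Every comma ends a field, so the line splits into 6 fields,
    -- with fields 2, 4 and 5 empty.
    CREATE TABLE dbo.TerminatorTest
        (c1 varchar(50), c2 varchar(50), c3 varchar(50),
         c4 varchar(50), c5 varchar(50), c6 varchar(50));
    BULK INSERT dbo.TerminatorTest FROM 'C:\myTerminatorTest.txt'
    WITH (DATAFILETYPE = 'char', FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');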
    Meanwhile, when using BULK INSERT to import the data file into a SQL table, you can only specify a multi-character string as a single terminator in the BULK INSERT statement.
    USE <testdatabase>;
    GO
    BULK INSERT <your table> FROM '<Path>'
    WITH (
        DATAFILETYPE = 'char/native/widechar/widenative',
        FIELDTERMINATOR = '<field_terminator>'
    );
    For example, when using BULK INSERT to import the C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',:'
    );
    The new table is then populated as follows:
    We cannot declare multiple FIELDTERMINATOR options (, and :) in the query statement, as in the following format; a duplicate-option error will occur.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',',
        FIELDTERMINATOR = ':'
    );
    However, if you want to use a data file with fewer or more fields than the target table, you can handle it by setting the extra field length to 0 (for fewer fields), or by omitting or skipping fields during the bulk copy procedure.
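    A minimal, hypothetical sketch of skipping extra data-file fields (the format file C:\myDepartment.xml and the column list are assumptions, not part of the original post): with an XML format file describing every field in the file, OPENROWSET(BULK ...) lets the SELECT list pick only the columns that should reach the table.
    -- The format file maps all data-file fields; the SELECT list chooses
    -- which of them are inserted, so extra fields are simply ignored.
    INSERT INTO AdventureWorks.HumanResources.DepartmentTest (Name, GroupName)
    SELECT Name, GroupName
    FROM OPENROWSET(BULK 'C:\myDepartment.txt',
                    FORMATFILE = 'C:\myDepartment.xml') AS src;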
    More Information
    For more information about field terminators, you can review the following articles.
    http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
    http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
    http://technet.microsoft.com/en-us/library/ms191485.aspx
    http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
    http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
    Applies to
    SQL Server 2012
    SQL Server 2008 R2
    SQL Server 2005
    SQL Server 2000

    Thanks,
    Is this a supported scenario, or does it use unsupported features?
    For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
    in a supported way?
    Thanks! Josh

  • BULK INSERT into View w/ Instead Of Trigger - DML ERROR LOGGING Issue

    Oracle 10.2.0.4
    I cannot figure out why I cannot get bulk insert errors to aggregate and allow the insert to continue when bulk inserting into a view with an INSTEAD OF trigger. Whether I use the LOG ERRORS clause or SQL%BULK_EXCEPTIONS, the insert works until it hits the first exception and then exits.
    Here's what I'm doing:
    1. I'm bulk inserting into a view with an INSTEAD OF trigger on it that performs the actual updating of the underlying table. This table is a child table with a foreign key constraint to a reference table containing the primary key. When the trigger attempts to insert a record into the child table, I get the following exception: 5:37:55 ORA-02291: integrity constraint (FK_TEST_TABLE) violated - parent key not found, which is expected, but the error should be logged in the error table and the rest of the inserts should complete. Instead, the bulk insert exits.
    2. If I change this to bulk insert into the underlying table directly, it works: all errors get put into the error logging table and the insert completes all non-exception records.
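    For reference, a minimal sketch of that working direct-table variant (names loosely follow the post; the error logging table is assumed to have been created with DBMS_ERRLOG.CREATE_ERROR_LOG):
    -- Assumed one-time setup: creates an error logging table (ERR$_TEST_TABLE by default).
    BEGIN
       DBMS_ERRLOG.CREATE_ERROR_LOG('TEST_TABLE');
    END;
    /
    -- Direct insert into the base table: rows violating FK_TEST_TABLE are written
    -- to the error table and the remaining rows are still inserted.
    INSERT INTO test_table
       SELECT * FROM test_table@remote_link
       LOG ERRORS INTO err$_test_table ('INSERT') REJECT LIMIT UNLIMITED;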
    Here's the "test" procedure I created to test my scenario:
    View: V_TEST_TABLE
    Underlying Table: TEST_TABLE
    PROCEDURE BulkTest
    IS
       TYPE remDataType IS TABLE OF v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
       varRemData remDataType;
    BEGIN
       SELECT /*+ DRIVING_SITE(r) */ *
       BULK COLLECT INTO varRemData
       FROM TEST_TABLE@REMOTE_LINK
       WHERE effectiveday < to_date('06/16/2012 04','mm/dd/yyyy hh24')
         AND terminationday > to_date('06/14/2012 04','mm/dd/yyyy hh24');
       BEGIN
          FORALL idx IN varRemData.FIRST .. varRemData.LAST
             INSERT INTO v_TEST_TABLE VALUES varRemData(idx)
                LOG ERRORS INTO dbcompare.ERR$_TEST_TABLE ('INSERT') REJECT LIMIT UNLIMITED;
       EXCEPTION WHEN OTHERS THEN
          DBMS_OUTPUT.put_line('ErrorCode: '||SQLCODE);
       END;
       COMMIT;
    END;
    I've reviewed Oracle's documentation on both DML logging tools and neither has any restrictions (at least that I can see) that would prevent this from working correctly.
    Any help would be appreciated....
    Thanks,
    Steve

    Thanks. Obviously this is my first post; I'm desperate to figure out why this won't work.
    The code I sent is only a test proc to troubleshoot the issue. The block with the debug statement is only there to capture the insert failing and not aggregating the errors; that won't be in the real proc.
    Thanks,
    Steve

  • Bulk insert task issue

    I have a table containing 4 million records, and I want to load the data into a SQL Server table using the Bulk Insert task.
    How can I load the data using the Bulk Insert task? The Bulk Insert task supports only a text-file source.
    Thanks in Advance.

    If it's a SQL Server table-to-table transfer, you can use a Data Flow Task with an OLE DB Source and an OLE DB Destination. In the OLE DB Destination, use
    "Table or view - fast load" as the data access mode.
    Also, if the databases are on the same server, you can even use an Execute SQL Task with a statement like
    INSERT INTO DestTable
    SELECT *
    FROM SourceDB.dbo.SourceTable
    which will be set-based.
    Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Strange issue creating a BCP xml format file with double dagger '‡' (alt + 0135) as column terminator with bcp.exe

    Hi,
    I'm having issues generating a BCP XML format file using a fairly unusual column terminator, a double dagger symbol
    ‡ (alt + 0135) which I need to support.
    I'm experiencing this problem with bcp.exe for SQL2008 R2 and SQL2012.
    If I run the following command line:
    bcp MyDB.TMP.Test_Extract format nul -c -x -f "C:\BCP\format_file_dagger_test.xml" -T -S localhost\SQL2012 -t‡
    I end up with a XML format file like so:
    <?xml version="1.0"?>
    <BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
     <RECORD>
      <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="ç" MAX_LENGTH="255" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
      <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="ç" MAX_LENGTH="50" 
    .. and so on.
    You will notice that TERMINATOR="ç" (minuscule c-cedilla) is output instead of TERMINATOR="‡". The ç character, strangely enough, is alt + 135 (and not alt + 0135), so this might be more than a coincidence! I know you can specify the code page, but this switch applies to the data being imported or exported, not to the format file itself (I tried it anyway).
    In order to use the XML file for bulk import, I manually substituted the 'ç' character with '‡', and then BCP imports the '‡'-delimited data fine.
    This character swap doesn't occur if I generate a non-XML format file (the '‡' character is output in the format file correctly); however, that file produces other import errors, which I don't encounter if I use a standard delimiter like a comma. So I have stuck with the working XML format file, which I prefer.
    Does anyone know why this is happening? I'm planning to automate the generation of the XML format file and would like to avoid the additional step of text substitution if possible.
    Thank you.

    Hi Ham09,
    According to your description, we did a test and found that the character of the terminator changes due to the code page of the operating system. When you choose a different time zone in the Date and Time settings and run the same bcp test, you will find that a different TERMINATOR is exported in the XML format file. For example, you can compare the character "ç" (alt + 135) in the (UTC-12:00) International Date Line West time zone and the (UTC+09:00) Osaka, Sapporo, Tokyo time zone, and check whether the terminators are different.
    By default, the field terminator is the tab character (represented as \t). To represent a newline as the row terminator, use \r\n.
    For more information, there is detail about code page(Windows), you can review the following article.
    http://msdn.microsoft.com/en-us/library/windows/desktop/dd317752(v=vs.85).aspx
    Regards,
    Sofiya Li
    TechNet Community Support

  • BCP-style bulk insert from remote C++ ODBC Native client application

    I am trying to find documentation or sample code for performing bulk inserts into SQL Server 2012 from a remote client using the ODBC native client driver on Linux. We currently perform INSERT statements on blocks of data, wrapping them in BEGIN/COMMIT, and achieve approximately half the throughput of bcp reading from a delimited text file. While there are many web pages talking about bulk inserts via the native driver, this page (http://technet.microsoft.com/en-us/library/ms130792.aspx) seems closest to what I'm after but doesn't go into any detail or give API calls. The referenced header file is just a bunch of options and constants, so presumably one gains access to the bulk functions via the standard ODBC mechanism; the question is how.
    For clarity, I am NOT interested in:
    BULK INSERT: because it requires a server-side data file or a UNC path with appropriate permissions (doesn't work from Linux)
    INSERT ... SELECT * FROM OPENROWSET(BULK ...): same problem as above
    IRowsetFastload: OLEDB, but I need ODBC on Linux.
    Basically, I want to emulate BCP.  I don't want to *run* BCP because it requires landing data to disk. 
    Thanks
    john
    John Lilley Chief Architect RedPoint Global Inc.

    Other than block inserts within BEGIN/COMMIT transaction blocks or running bcp, is there anything else that can be done on Linux?
    No other option from Linux that I am aware of. The SQL Server Native Client ODBC driver also supports table-valued parameters, which can be used to stream data, but the Linux ODBC driver API doesn't have a way to do that either. That said, I would still expect file-based BCP to significantly outperform inserts with large batches. I've seen a rate of 100K rows/sec. with this technique, including the file-creation overhead, but much depends on the particulars of your use case.
    Consider voting for this on Connect.  BCP is on the roadmap but no date yet: 
    https://connect.microsoft.com/SQLServer/SearchResults.aspx?SearchQuery=linux+odbc+bcp
    Also, I filed a Connect item for TVP support:
    https://connect.microsoft.com/SQLServer/feedback/details/874616/add-tvp-support-to-sql-server-odbc-driver-for-linux
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Multiple field terminators in BULK INSERT or BCP

    Can I have multiple field terminators in BULK INSERT or BCP commands, i.e. define more than one for a file?
    Please provide an example.

    Hi stellios,
    For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator and the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character is interpreted as belonging to the next field or record. I did a test; if you want to use multiple field terminators with BULK INSERT or BCP, you can refer to the following commands.
    In Windows command line:
    bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t ,: -r \n -T
    -- you can use the two characters as one terminator; this works well.
    -- If you try to bcp using multiple field terminators, as in the following command, it will still use only the last terminator defined:
    bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
    If you specify a terminator for BULK import, you can only set one terminator in the BULK INSERT statement.
    USE <testdatabase>;
    GO
    BULK INSERT <your table> FROM '<Path>'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    );
    GO
    For more information about field terminators, you can review the following articles.
    http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
    http://technet.microsoft.com/en-us/library/ms191485.aspx
    Regards,
    Sofiya Li
    TechNet Community Support
