MSSQL Bulk Insert

Hi
Using ODI 11.1.1.6.0
Trying to load a .txt file using the LKM File to MSSQL (BULK) KM.
The file loads to staging, but with two main issues:
1) The text-delimited fields load into SQL Server surrounded by double quotes (however, when I right-click the model to preview the data, the double quotes are removed).
2) Shorter records seem to pull data from the next line in the source file to populate their empty fields. It's as if there's a problem with the record delimiter.
Can someone help with either of these issues?
Cheers

#1 Can you check with an MSSQL client whether the text delimiters are indeed loaded into the DB column?
#2 This does sound like a record separator problem.
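For #1, a minimal sketch of the kind of check you could run from SSMS (the C$ staging table and column names below are assumptions, not the actual ODI work table):
SELECT TOP (10) SOME_TEXT_COLUMN, LEN(SOME_TEXT_COLUMN) AS col_len
FROM dbo.[C$_0TARGET_TABLE]
WHERE SOME_TEXT_COLUMN LIKE '"%' OR SOME_TEXT_COLUMN LIKE '%"';
If rows come back, the quotes really are stored in the staging column and the KM's text delimiter option is not being applied during the load.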

Similar Messages

  • SQL to MSSQL (Bulk) Variable Error

    Hello everybody,
    I created a mapping / interface which takes a date variable. It works successfully with the LKM SQL to SQL or LKM SQL to Multiconnect knowledge modules. However, when I run it with the LKM SQL to MSSQL (BULK) knowledge module, it fails. The error says 'Missing parameter ...'
    What can be the cause of this problem?
    Please help me.
    Thank You

    When you define the delimiter for a field, the delimiter must be a character sequence that cannot appear in the data for that field. That is, there is no problem having line breaks in the data - as long as they are not in the last field. On the other hand, that last field is the only one that can contain a pipe character.
    So you either need to change the order of the fields, or add a different terminator.
    Since I don't know MySQL, I don't know exactly what you can do with SELECT INTO OUTFILE, but an obvious possibility is:
    SELECT *, '*' FROM test   INTO OUTFILE '/tmp/database/test.csv' FIELDS TERMINATED BY '|';
    Then you modify the BULK INSERT statement as:
    BULK INSERT dbo.TEST FROM 'C:\csv\TEST.csv' WITH (DATAFILETYPE = 'char', BATCHSIZE=100,FIELDTERMINATOR='|',ROWTERMINATOR='|*\n');
    Now, hopefully the \n is not interpreted as \r\n here.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • SQL2K, XML Bulk Insert woes. Please Help

    I have a Win2k3 server with the latest SPs. I have MSSQL 2000 Enterprise with SP4. I have installed SQLXML 3.0 SP3. I've tried the examples in the documentation but cannot get this to work. I really need to be able to do a bulk insert using the XML file below through a ColdFusion query. I saw an example online of how to do it with CF, followed the example to the letter, and get this error:
    Could not bulk insert. Unknown version of format file 'D:\Websites\globalFunctions]getEventPerformersMapping.xml'.
    My XML file ("getEventPerformers.txt") looks like this:
    <ROOT>
    <EventPerformer>
    <EventID>448410</EventID>
    <PerformerID>3160</PerformerID>
    <PerformerName>Lion King</PerformerName>
    </EventPerformer>
    <EventPerformer>
    <EventID>448412</EventID>
    <PerformerID>3160</PerformerID>
    <PerformerName>Lion King</PerformerName>
    </EventPerformer>
    </ROOT>
    The format file I'm using (that I got from the online
    example) is:
    <?xml version="1.0" ?>
    <Schema xmlns="urn:schemas-microsoft-com:xml-data"
    xmlns:dt="urn:schemas-microsoft-com:xml:datatypes"
    xmlns:sql="urn:schemas-microsoft-com:xml-sql" >
    <ElementType name="EventID" dt:type="i8" />
    <ElementType name="PerformerID" dt:type="int" />
    <ElementType name="PerformerName" dt:type="string" />
    <ElementType name="ROOT" sql:is-constant="1">
    <element type="EventPerformer" />
    </ElementType>
    <ElementType name="EventPerformer"
    sql:relation="tbl_EventPerformers">
    <element type="EventID" sql:field="EventID" />
    <element type="PerformerID" sql:field="PerformerID" />
    <element type="PerformerName" sql:field="PerformerName"
    />
    </ElementType>
    </Schema>
    My db table structure is this:
    EventID - bigint - Primary Key
    PerformerID - int
    PerformerName - varchar(100)
    In the MSDN docs there is an example that uses VBScript to create a
    SQLXMLBulkLoad.SQLXMLBulkload.3.0 object, but I couldn't
    get that to work either.
    Someone please help!

    I've got the same problem:
    We're porting from Orion to Tomcat and the following error occurs at the following line:
         response.sendRedirect("asistente/frmresultat.jsp?ant=false");

  • How can I debug a Bulk Insert error?

    I'm loading a bunch of files into SQL Server.  All work fine, but one keeps erroring out on me.  The files have different dates and other financial metrics, but the structure and field names should be exactly the same.  Nevertheless, one keeps conking out and throwing this error.
    Msg 4832, Level 16, State 1, Line 1
    Bulk load: An unexpected end of file was encountered in the data file.
    Msg 7399, Level 16, State 1, Line 1
    The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
    Msg 7330, Level 16, State 2, Line 1
    Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
    The ROWTERMINATOR should be CRLF, and when you look at it in Notepad++ that's what it looks like, but it must be something else, because I keep getting errors here.  I tried the good old:  ROWTERMINATOR='0x0a'
    That works on all files, but one, so there's something funky going on here, and I need to see what SQL Server is really doing.
    Is there some way to print out a log, or look at a log somewhere?
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    The first thing to try is to see if BCP likes the file. BCP and BULK INSERT adhere to the same spec, but they are different implementations, so there are subtle differences.
    There is an ERRORFILE option, but it helps more when the problem is bad data.
    You can also use the BATCHSIZE option to see how many records in the file it swallows before things go bad. FIRSTROW and LASTROW can also help.
    All in all, it can be quite tedious to find that single row where things are different - and where BULK INSERT loses sync entirely. Keep in mind that it reads fields one by one, and if there is one field terminator too few on a line, it will consume the line feed at the end of the line as data.
    Erland Sommarskog, SQL Server MVP, [email protected]
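    As an illustration of those options, here is a minimal sketch (the table name, file path, and terminators are assumptions) of how ERRORFILE and BATCHSIZE can help narrow down where the file goes wrong:
    BULK INSERT dbo.MyStagingTable
    FROM 'C:\data\problem_file.txt'
    WITH (
        FIELDTERMINATOR = '\t',
        ROWTERMINATOR = '0x0a',
        BATCHSIZE = 1000,                        -- commit in small batches so you can see how far the load got
        ERRORFILE = 'C:\data\problem_file.err'   -- rejected rows are written here for inspection
    );
    Combine that with FIRSTROW and LASTROW to bisect the file until you isolate the offending record.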

  • ODBC, bulk inserts and dynamic SQL

    I am writing an application running on Windows NT 4 and using the Oracle ODBC driver (8.01.05.00) that inserts many rows at a time (10000+) into an Oracle 8i database.
    At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
    Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions, such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the Oracle ODBC driver only seems to act on the first statement in the batch.
    I have also considered using the FORALL statement and the SQL*Loader utility.
    My problem with FORALL is that I'm not sure it works on dynamic SQL statements, and even if it did, how do I pass an array of statements to the stored procedure?
    I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires the spawning of a new process.
    What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement, where you can simply create an input file with all the records you want to insert and pass it along in an ODBC statement such as "BULK INSERT <filename>".
    Any ideas??
    null

    Hi,
    I faced this same situation years ago (Oracle 7.2!) and had the following alternatives.
    1) Use a 3rd party tool such as Sagent or CA Info pump (very pricey $$$)
    2) Use Visual C++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
    3) Use SQL*Loader (the best performance, but no real control of what's happening).
    I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
    These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
    Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.
    null

  • How to get current month from filename and bulk insert from text file into table?

    I set up some dynamic SQL to help me bulk copy data from a text file to a table.  This works fine for files that come in every day; I get the previous day’s data, based on the file name that’s placed
    in the folder.  That’s why I’m using the ‘-1’.  The dates will look like this: '20140131', so I'm using type 112.
    declare @fullpath1 varchar(1000)
    select @fullpath1 = '''\\system.local\ms\london\FTP\' + convert(varchar, getdate()-1, 112) + '_INDEXPRICES_EOM.SPC'''
    declare @cmd1 nvarchar(1000)
    print (@cmd1)
    select @cmd1 = 'bulk insert [dbo].[SB_Monthly] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')'
    print(@cmd1)
    exec (@cmd1)
    I think the syntax will be somewhat similar to this:
    YEAR(date_column)=YEAR(getdate()) AND MONTH(date_column)=MONTH(getdate())
    I’m not totally sure how to incorporate that into my current syntax.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    I tried a couple versions of this.
    Declare @StartDate Date, @EndDate Date
    Select @StartDate = convert(varchar, getdate()-28, 112), @EndDate = convert(varchar, getdate()-1, 112)
    BEGIN
    declare @fullpath1 varchar(1000)
    select @fullpath1 = '''\\ms\london\FTP\' + ''' between ''' + Convert(Varchar(10), @StartDate, 101) + ''' and ''' + Convert(Varchar(10), @EndDate, 101) + '''_SP.SPC'''
    declare @cmd1 nvarchar(1000)
    print (@cmd1)
    select @cmd1 = 'bulk insert [dbo].[SPBMI_Monthly] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')'
    print(@cmd1)
    exec (@cmd1)
    END
    Here’s the string:
    bulk insert [dbo].[SPBMI_Monthly] from '\\ms\london\FTP\' between '02/03/2014' and '03/02/2014'_SP.SPC' with (FIELDTERMINATOR = '\t', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR='0x0a')
    The error message I keep getting is:
    Msg 156, Level 15, State 1, Line 1
    Incorrect syntax near the keyword 'between'.
    Msg 319, Level 15, State 1, Line 1
    Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
    I feel like I’m already pushing this thing to the limit. 
    Maybe this last part isn’t possible.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
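    Since the error comes from putting a BETWEEN inside the file path, here is a minimal sketch of an alternative (it reuses the path, table name, and file-name pattern from the post above; the loop bounds are assumptions): build one path per day and run a separate BULK INSERT for each file.
    DECLARE @d date = DATEADD(DAY, -28, GETDATE());
    DECLARE @EndDate date = DATEADD(DAY, -1, GETDATE());
    DECLARE @fullpath1 varchar(1000), @cmd1 nvarchar(1000);
    WHILE @d <= @EndDate
    BEGIN
        SELECT @fullpath1 = '''\\ms\london\FTP\' + CONVERT(varchar(8), @d, 112) + '_SP.SPC''';
        SELECT @cmd1 = 'bulk insert [dbo].[SPBMI_Monthly] from ' + @fullpath1
                     + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')';
        PRINT @cmd1;
        EXEC (@cmd1);                     -- fails if no file exists for that day; wrap in TRY/CATCH if needed
        SET @d = DATEADD(DAY, 1, @d);
    END;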

  • SSIS BULK INSERT using UNC inside of ForEach Loop Container failed: could not be opened. Operating system error code 5 (Access is denied.)

    Hi,
    I am trying to figure out how to fix my problem.
    Error: Could not be opened. Operating system error code 5 (Access is denied.)
    Process description:
    The target database server resides on a different server in the network.
    The SSIS package runs from a remote server.
    The SSIS package uses a ForEach Loop Container to loop over a directory to do the Bulk Insert.
    The SSIS package uses variables to specify the shared location of the files using a UNC path like this:
    \\server\files
    The database service account under which the database is running has full permission on the share where the files reside.
    The Execution Results tab shows the prepared SQL statement for the BULK INSERT, and I can run that exact bulk insert in SSMS without errors, both from the database server and from the server where the SSIS package is executed.
    I am at a dead end, and I don't want to rewrite the SSIS package to use a Data Flow Task, because that is not flexible to update when the table's metadata changes.
    Below post it has almost the same situation:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/8de13e74-709a-43a5-8be2-034b764ca44f/problem-with-bulk-insert-task-in-foreach-loop?forum=sqlintegrationservices

    Interesting how I fixed the issue: adding the Application Name to the SQL OLAP connection string fixed it. I am not sure why SQL Server wasn't able to open the file remotely without this.
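    For reference, a hypothetical connection string with the Application Name property set (the provider, server, database, and application name below are placeholders, not values from the original post):
    Provider=SQLNCLI11;Data Source=MyDbServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Application Name=MySsisBulkInsertPackage;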

  • [Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line

    Introduction
    Some people want to know whether we can have multiple field terminators in BULK INSERT or BCP commands, and if so, how to implement them.
    Solution
    For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, as well as the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character is interpreted as belonging to the next field or record. I have done a test; if you want to use multiple characters as one field terminator with BULK INSERT or bcp, you can refer to the following commands.
    At the Windows command line:
    bcp <Databasename.schema.tablename> out "<path>" -c -t -r -T
    For example, you can export data from the Department table with the bcp command and use the comma and colon (,:) as one field terminator.
    bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
    The txt file looks as follows:
    However, if you try to pass multiple field terminators to bcp as in the following command, it will still use only the last terminator defined.
    bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
    The txt file looks as follows:
    If you assume that multiple field terminators mean multiple fields and use a comma-separated format like the one below,
    column1,,column2,,,column3
    you might expect only 3 fields (column1, column2 and column3). In fact, after testing, there will be 6 fields here. That is the significance of a field terminator (the comma in this case).
    Meanwhile, when using BULK INSERT to import the data file into a SQL table, if you specify a terminator for the bulk import, you can only set multiple characters as one terminator in the BULK INSERT statement.
    USE <testdatabase>;
    GO
    BULK INSERT <your table> FROM '<Path>'
    WITH (
        DATAFILETYPE = 'char | native | widechar | widenative',
        FIELDTERMINATOR = '<field_terminator>'
    );
    For example, to use BULK INSERT to import the C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',:'
    );
    The new table then contains the following:
    You cannot declare multiple field terminators (, and :) separately in the query statement, as in the following format; a duplicate option error will occur.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',',
        FIELDTERMINATOR = ':'
    );
    However, if you want to use a data file with fewer or more fields than the table, you can handle it by setting the extra field length to 0 for fewer fields, or by omitting or skipping the extra fields during the bulk copy procedure.
    More Information
    For more information about field terminators, you can review the following articles.
    http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
    http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
    http://technet.microsoft.com/en-us/library/ms191485.aspx
    http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
    http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
    Applies to
    SQL Server 2012
    SQL Server 2008R2
    SQL Server 2005
    SQL Server 2000
    Please click to vote if the post helps you. This can be beneficial to other community members reading the thread.

    Thanks,
    Is this a supported scenario, or does it use unsupported features?
    For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
    in a supported way?
    Thanks! Josh

  • Bulk Insert into a Table from CSV file

    I have a CSV file with 1000 records and I have to insert those records into a table.
    I tried the Bulk Insert command and the Load Data Infile command, but they throw errors.
    I am using Oracle 10g Express Edition.
    I want to achieve it through a query command and not by PL/SQL procedures.
    Please send me the query syntax for this problem.
    Thanks in Advance,
    Hariharan ST.

    Hi
    If you create an external table that points to your csv file, you will then be able to populate your table from a query.
    See: http://www.astral-consultancy.co.uk/cgi-bin/hunbug/doco.cgi?11210
    Hope this helps
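    As an illustration, a minimal sketch of such an external table (the directory path, file name, and column list are assumptions; adjust them to your actual CSV and target table):
    CREATE OR REPLACE DIRECTORY csv_dir AS '/path/to/csv';   -- requires the CREATE ANY DIRECTORY privilege
    CREATE TABLE my_csv_ext (
      id    NUMBER,
      name  VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY csv_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      )
      LOCATION ('mydata.csv')
    );
    INSERT INTO my_table (id, name)
    SELECT id, name FROM my_csv_ext;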

  • Error while running bulk insert in SSIS package

    Hi:
    I have an error when I am running bulk insert in SSIS package.
    I have implemented an SSIS package to update master data directly from R/3. R/3 gives the file in a specified format; I take this, insert all the records into a temporary table, then update the mbr table and process the dimension.
    This works perfectly well in our development system, where both our app server and SQL server are on the same box. But in QAS the two servers are separate, and when I try to run the SSIS package I get the error below.
    We have tested all connections and are able to access the path and file from both the app server and the SQL server using the shared folder. Our Basis team says that it is a problem with the bulk insert task and nothing to do with authorization.
    Has anyone experienced this sort of problem in a multi-server environment? Is there another way to load all the data from a file into a bespoke table without using bulk insert?
    Thanks,
    Subramania
    Error----
    SSIS package "Package.dtsx" starting.
    Error: 0xC002F304 at Insert Data Into Staging Table (Account), Bulk Insert Task: An error occurred with the following error message: "Cannot bulk load because the file "\\msapbpcapq01\dim\entity.csv" could not be opened. Operating system error code 5(Access is denied.).".
    Task failed: Insert Data Into Staging Table (Account)
    SSIS package "Package.dtsx" finished: Success.
    The program '[2496] Package.dtsx: DTS' has exited with code 0 (0x0).

    Hi Subramania
    From your error:
    Error: 0xC002F304 at Insert Data Into Staging Table (Account), Bulk Insert Task: An error occurred with the following error message: "Cannot bulk load because the file "\\msapbpcapq01\dim\entity.csv" could not be opened. Operating system error code 5(Access is denied.).".
    Let's say server A is where the file entity.csv is located.
    Please check the Event Viewer -> Security log of server A at the time the SSIS package ran; there should be an entry with a logon failure, from which you can find what user was used to access the shared path.
    If your two servers are not in a domain, create the user in server A with the same name and password and grant it read access to the shared folder.
    The other workaround is to grant read access to Everyone on the shared folder.
    Halomoan
    Edited by: Halomoan Zhou on Oct 6, 2008 4:23 AM

  • Bulk inserts on Solaris slow as compared to windows

    Hi Experts,
    Looking for tips in troubleshooting 'bulk inserts on Solaris'. I have observed that the same bulk inserts are quite fast on Windows compared to Solaris. Are there known issues on Solaris?
    I have a 'merge ... insert ...' query which has been executing for a long time, more than 12 hours now. This is the statement:
    merge into A DEST using (select * from B SRC) SRC on (SRC.some_ID = DEST.some_ID) when matched then update ... when not matched then insert (...) values (...)
    Table A has 600K rows with a unique identifier some_ID column; Table B has 500K rows with the same some_ID column. The 'merge ... insert' checks whether the some_ID exists: if yes, the update fires; when not matched, the insert fires. In either case it takes a long time to execute.
    Environment:
    The version of the database is 10g Standard 10.2.0.3.0 - 64bit Production
    OS: Solaris 10, SPARC-Enterprise-T5120
    These are the parameters relevant to the optimizer:
    SQL>
    SQL> show parameter sga_target
    NAME                                 TYPE                VALUE
    sga_target                           big integer           4G
    SQL>
    SQL> show parameter sga_target
    NAME                                 TYPE                 VALUE
    sga_target                          big integer           4G
    SQL>
    SQL>  show parameter optimizer
    NAME                                        TYPE        VALUE
    optimizer_dynamic_sampling       integer        2
    optimizer_features_enable          string         10.2.0.3
    optimizer_index_caching             integer        0
    optimizer_index_cost_adj            integer       100
    optimizer_mode                          string         ALL_ROWS
    optimizer_secure_view_merging   boolean     TRUE
    SQL>
    SQL> show parameter db_file_multi
    NAME                                         TYPE        VALUE
    db_file_multiblock_read_count        integer     16
    SQL>
    SQL> show parameter db_block_size
    NAME                                        TYPE        VALUE
    db_block_size                           integer     8192
    SQL>
    SQL> show parameter cursor_sharing
    NAME                                 TYPE        VALUE
    cursor_sharing                    string      EXACT
    SQL>
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL>
    SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME                PNAME                     PVAL1               PVAL2
    SYSSTATS_INFO        STATUS                                    COMPLETED
    SYSSTATS_INFO        DSTART                                    07-12-2005 07:13
    SYSSTATS_INFO        DSTOP                                      07-12-2005 07:13
    SYSSTATS_INFO        FLAGS                  1
    SYSSTATS_MAIN        CPUSPEEDNW       452.727273
    SYSSTATS_MAIN        IOSEEKTIM           10
    SYSSTATS_MAIN        IOTFRSPEED         4096
    SYSSTATS_MAIN        SREADTIM
    SYSSTATS_MAIN        MREADTIM
    SYSSTATS_MAIN        CPUSPEED
    SYSSTATS_MAIN        MBRC
    SYSSTATS_MAIN        MAXTHR
    SYSSTATS_MAIN        SLAVETHR
    13 rows selected.
    Following is the error messages being pushed into oracle alert log file:
    Thu Dec 10 01:41:13 2009
    Thread 1 advanced to log sequence 1991
      Current log# 1 seq# 1991 mem# 0: /oracle/oradata/orainstance/redo01.log
    Thu Dec 10 04:51:01 2009
    Thread 1 advanced to log sequence 1992
      Current log# 2 seq# 1992 mem# 0: /oracle/oradata/orainstance/redo02.log
    Please provide some tips to troubleshoot the actual issue. Any pointers on db_block_size, SGA, or PGA settings that could be the reason for this?
    Regards,
    neuron

    SID    SEQ#     EVENT                          WAIT_CLASS_ID    WAIT_CLASS#    WAIT_TIME    SECONDS_IN_WAIT    STATE
    125    24235    'db file sequential read'      1740759767       8              -1           58608              'WAITED SHORT TIME'
    Regarding the disk, I am not sure what needs to be checked; however, from the output of iostat it does not seem to be busy. Check the last three rows: the %b column is negligible:
    tty         cpu
    tin tout  us sy wt id
       0  320   3  0  0 97
                        extended device statistics
        r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 ramdisk1
        0.0    2.5    0.0   18.0  0.0  0.0    0.0    8.3   0   1 c1t0d0
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t1d0
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0

  • BULK INSERT from a text (.csv) file - read only specific columns.

    I am using Microsoft SQL 2005, and I need to do a BULK INSERT from a .csv I just downloaded from PayPal.  I can't edit some of the columns that are given in the report.  I am trying to load specific columns from the file.
    bulk insert Orders
    FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
    WITH (
        FIELDTERMINATOR = ',',
        FIRSTROW = 2,
        ROWTERMINATOR = '\n'
    )
    So where would I state which column names (from row #1 of the .csv file) map to which specific columns in the table?
    I saw this on one of the sites, and it seemed to guide me towards the answer, but I failed. Here you go, it might help you:
    FORMATFILE [ = 'format_file_path' ]
    Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
    The data file contains greater or fewer columns than the table or view.
    The columns are in a different order.
    The column delimiters vary.
    There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.
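    For illustration, a minimal sketch of that approach (the paths, format-file name, and column names below are placeholders, not the actual PayPal layout): with a format file describing every column in the csv, OPENROWSET(BULK ...) lets you select only the columns you need and map them onto the target table yourself.
    INSERT INTO dbo.Orders ([Date], [To Email Address], [Transaction ID], [Item Title], [Item ID], [Buyer ID], [Item URL])
    SELECT [Date], [To Email Address], [Transaction ID], [Item Title], [Item ID], [Buyer ID], [Item URL]
    FROM OPENROWSET(
             BULK 'C:\csv\DownloadURL123.csv',
             FORMATFILE = 'C:\csv\paypal.fmt',   -- format file created with the bcp utility and edited as needed
             FIRSTROW = 2) AS src;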

    Date, Time, Time Zone, Name, Type, Status, Currency, Gross, Fee, Net, From Email Address, To Email Address, Transaction ID, Item Title, Item ID, Buyer ID, Item URL, Closing Date, Reference Txn ID, Receipt ID,
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392302", "jdal32", "http://ddd.com", "04/22/03", "", "",
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392932930302", "jejsl32", "http://ddd.com", "04/22/03", "", "",
    Do you need more than 2 rows? I did not include all the columns from the actual csv file, but most of them. I am planning on taking these specific columns into the first table: date, to email address, transaction ID, item title, item ID, buyer ID, item URL.
    The other table, I don't have any values from here because I did not list them, but if you do this for me I could probably figure the other table out.
    Thank you very much.

  • BULK INSERT into View w/ Instead Of Trigger - DML ERROR LOGGING Issue

    Oracle 10.2.0.4
    I cannot figure out why I cannot get bulk insert errors to aggregate and allow the insert to continue when bulk inserting into a view with an Instead of Trigger. Whether I use LOG ERRORS clause or I use SQL%BULK_EXCEPTIONS, the insert works until it hits the first exception and then exits.
    Here's what I'm doing:
    1. I'm bulk inserting into a view with an Instead of Trigger on it that performs the actual updating on the underlying table. This table is a child table with a foreign key constraint to a reference table containing the primary key. In the Instead of Trigger, it attempts to insert a record into the child table and I get the following exception: 5:37:55 ORA-02291: integrity constraint (FK_TEST_TABLE) violated - parent key not found, which is expected, but the error should be logged in the table and the rest of the inserts should complete. Instead the bulk insert exits.
    2. If I change this to bulk insert into the underlying table directly, it works, all errors get put into the error logging table and the insert completes all non-exception records.
    Here's the "test" procedure I created to test my scenario:
    View: V_TEST_TABLE
    Underlying Table: TEST_TABLE
    PROCEDURE BulkTest
    IS
    TYPE remDataType IS TABLE of v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
    varRemData remDataType;
    begin
    select /*+ DRIVING_SITE(r)*/ *
    BULK COLLECT INTO varRemData
    from TEST_TABLE@REMOTE_LINK
    where effectiveday < to_date('06/16/2012 04','mm/dd/yyyy hh24')
    and terminationday > to_date('06/14/2012 04','mm/dd/yyyy hh24');
    BEGIN
    FORALL idx IN varRemData.FIRST .. varRemData.LAST
    INSERT INTO v_TEST_TABLE VALUES varRemData(idx) LOG ERRORS INTO dbcompare.ERR$_TEST_TABLE ('INSERT') REJECT LIMIT UNLIMITED;
    EXCEPTION WHEN others THEN
    DBMS_OUTPUT.put_line('ErrorCode: '||SQLCODE);
    END;
    COMMIT;
    end;
    I've reviewed Oracle's documentation on both DML logging tools and neither has any restrictions (at least that I can see) that would prevent this from working correctly.
    Any help would be appreciated....
    Thanks,
    Steve

    Thanks, obviously this is my first post, I'm desperate to figure out why this won't work....
    This code I sent is only a test proc to try and troubleshoot the issue, the others with the debug statement is only to capture the insert failing and not aggregating the errors, that won't be in the real proc.....
    Thanks,
    Steve
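    For what it's worth, a minimal sketch (not a confirmed fix for the instead-of-trigger scenario, only an illustration of the SAVE EXCEPTIONS syntax that SQL%BULK_EXCEPTIONS relies on; the procedure name BulkTestSave is hypothetical, the other names are reused from the post above):
    PROCEDURE BulkTestSave IS
      TYPE remDataType IS TABLE OF v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
      varRemData remDataType;
      bulk_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_errors, -24381);   -- ORA-24381: error(s) in array DML
    BEGIN
      SELECT /*+ DRIVING_SITE(r) */ *
        BULK COLLECT INTO varRemData
        FROM TEST_TABLE@REMOTE_LINK r
       WHERE effectiveday < TO_DATE('06/16/2012 04', 'mm/dd/yyyy hh24')
         AND terminationday > TO_DATE('06/14/2012 04', 'mm/dd/yyyy hh24');
      BEGIN
        FORALL idx IN varRemData.FIRST .. varRemData.LAST SAVE EXCEPTIONS
          INSERT INTO v_TEST_TABLE VALUES varRemData(idx);
      EXCEPTION
        WHEN bulk_errors THEN
          -- with SAVE EXCEPTIONS, per-row failures are collected here instead of stopping the FORALL
          FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.put_line('Row '    || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                                 ' error ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
          END LOOP;
      END;
      COMMIT;
    END;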

  • Is there a way to dynamically figure out the FIRSTROW & LASTROW using Bulk Insert?

    I noticed most of my files follow a certain convention, so I can use Bulk Insert with this:
    FIRSTROW = 2, LASTROW = 224
    However a few files have many more records.  I'd like SQL Server to dynamically figure out the first and last rows with data, but I don't know if that's possible.  Can someone confirm?
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    Nope, you would need to read the files from some program. It could be a CLR stored procedure if you want to do it from SQL Server. But from BULK INSERT alone, no.
    Erland Sommarskog, SQL Server MVP, [email protected]
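    If staying in T-SQL is preferred, one hedged workaround sketch (not what the reply above suggests; the path, table name, and terminators are assumptions, and it only works if every line after the header is a data row) is to read the file once with OPENROWSET(BULK ..., SINGLE_CLOB), count the line feeds to derive LASTROW, and build the BULK INSERT dynamically:
    DECLARE @txt varchar(max), @lastrow int, @cmd nvarchar(1000);
    SELECT @txt = BulkColumn
    FROM OPENROWSET(BULK 'C:\data\myfile.txt', SINGLE_CLOB) AS f;
    SET @lastrow = LEN(@txt) - LEN(REPLACE(@txt, CHAR(10), ''));   -- number of line feeds, a rough row count
    SET @cmd = 'BULK INSERT dbo.MyTable FROM ''C:\data\myfile.txt'''
             + ' WITH (FIELDTERMINATOR = ''\t'', ROWTERMINATOR = ''0x0a'', FIRSTROW = 2, LASTROW = ' + CAST(@lastrow AS varchar(10)) + ')';
    EXEC (@cmd);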
