Bug in Bulk Insert?

So, I'm working with a client today and we discovered some missing records in his file.  I looked into it a bit; the first 5 rows were being truncated.  I thought that was bizarre, because the data starts on row 7.  Accordingly, I set up my Bulk Insert like this.
declare @intFlag int = 1  -- assumption: day offset, set earlier in the original script
declare @fullpath1 varchar(1000)
select @fullpath1 = '''\\london-sql\FTP\' + convert(varchar, getdate()- @intFlag , 112) + '_SPGT.SPL'''
declare @cmd1 nvarchar(1000)
select @cmd1 = 'bulk insert [dbo].[SPGT_Daily] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 7, ROWTERMINATOR=''0x0a'')'
exec (@cmd1)
So, I opened this file, which comes from a Unix system, and took a look.  (The original post shows a screenshot of the file here.)
So, this stock data actually starts on ROW 2, not ROW 7.  We figured it out pretty quickly, and we're all set now.  I'm just not sure why Excel, Wordpad, Notepad, etc., would all show the data starting on ROW 7 (field names in ROW 6), while Bulk Insert
thinks the data starts on ROW 2 (field names in ROW 1).
This seems very strange.
Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

BULK INSERT is a tool that reads binary files. Unlike Notepad etc., it is not predisposed towards "lines", but is completely agnostic on the matter. You tell BULK INSERT to insert data into a six-column table with tab as field delimiter and
\n as row terminator. BULK INSERT starts reading bytes until it finds a tab. First field, check. It continues reading bytes until the next tab. Check. And when it comes to the sixth field, it reads bytes until it sees a newline. Check. If it happens to see
a newline while looking for a tab, that is just another byte of the data.
If you apply this way of thinking, you will find that BULK INSERT considered the first six lines to be a single record. (Which it unfortunately calls a row.)
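In this case, since BULK INSERT counts the data as starting at row 2, the fix is simply FIRSTROW = 2. A minimal sketch (the path is a hypothetical literal stand-in for the dynamically built one above; the header layout is assumed):
bulk insert [dbo].[SPGT_Daily]
from '\\london-sql\FTP\20140101_SPGT.SPL'  -- hypothetical literal path for illustration
with (FIELDTERMINATOR = '\t',
      FIRSTROW = 2,             -- row 2 as BULK INSERT counts rows, i.e. the first data record
      ROWTERMINATOR = '0x0a')   -- LF only, since the file comes from a Unix system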
Erland Sommarskog, SQL Server MVP, [email protected]

Similar Messages

  • Number of rows inserted is different in bulk insert using select statement

    I am facing a problem with a bulk insert using a SELECT statement.
    My SQL statement is like the one below.
    strQuery :='INSERT INTO TAB3
    (SELECT t1.c1,t2.c2
    FROM TAB1 t1, TAB2 t2
    WHERE t1.c1 = t2.c1
    AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
    EXECUTE IMMEDIATE strQuery ;
    These SQL statements are inside a procedure. And this procedure is called from C#.
    The number of rows returned by the SELECT query is 70.
    On the very first call of this procedure, the number of rows inserted by strQuery is *70*.
    But on the next call (in the same transaction), the number of rows inserted is only *50*.
    And if we keep calling this procedure repeatedly, it inserts sometimes 70, sometimes 50, etc. It is showing some inconsistency.
    On my initial analysis I found that the default optimizer mode is "ALL_ROWS". When I changed the optimizer mode to "rule", the issue does not occur.
    Has anybody faced this kind of issue?
    Can anyone tell me what the reason for this issue could be? Is there any other workaround for it?
    I am using Oracle 10g R2.

    You have very likely concurrent transactions on the database:
    "By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits."
    If you want to make sure that the same query always retrieves the same rows in a given transaction, you need to use transaction isolation level SERIALIZABLE instead of READ COMMITTED, which is the default in Oracle.
    Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
    You can try to run your test with:
    set transaction isolation level serializable;
    If the problem is not solved, you need to search possible Oracle bugs on My Oracle Support with keywords like:
    wrong results 10.2
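    For reference, a minimal sketch of what that test could look like (the procedure name is a hypothetical stand-in for the one called from C#):
    -- must be the first statement of the transaction
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN
      my_insert_proc;  -- both calls now see the same committed snapshot of TAB1/TAB2
      my_insert_proc;
    END;
    /
    COMMIT;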

  • Unique constraint violation error while bulk insert (TimesTen 7.0.1.0.0)

    Hi,
    I am trying to understand why I get the above error when doing bulk inserts via TimesTen into an Oracle database.
    Single inserts after that work perfectly (no record is dropped), and the same bulk insert done directly against Oracle works perfectly with the same data. We tried it with AWT, then with read-only tables and passthrough=2; nothing works. The error comes even when the table is empty.
    Any suggestions? Is this a bug in TimesTen itself? Do I have to connect directly to the Oracle database for bulk inserts?
    Thanks in advance
    Rajko Albrecht

    It may be a TT bug but if so it is not an obvious one since bulk inserts definitely work in TimesTen...
    Can you please provide:
    1. The schema of the table in question (including any indices)
    2. Details on how you are doing the bulk inserts (C/ODBC program, Java/JDBC program or ...). Actual program source code would be helpful.
    3. A (small) example of the data that you know would give this error.
    Thanks,
    Chris

  • How can I debug a Bulk Insert error?

    I'm loading a bunch of files into SQL Server.  All work fine, but one keeps erroring out on me.  All files should be exactly the same in structure; they have different dates and other financial metrics, but the structure and field
    names should be identical.  Nevertheless, one keeps conking out and throwing this error.
    Msg 4832, Level 16, State 1, Line 1
    Bulk load: An unexpected end of file was encountered in the data file.
    Msg 7399, Level 16, State 1, Line 1
    The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
    Msg 7330, Level 16, State 2, Line 1
    Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
    The ROWTERMINATOR should be CRLF, and when you look at the file in Notepad++ that's what it looks like, but it must be something else, because I keep getting errors here.  I tried the good old ROWTERMINATOR='0x0a'.
    That works on all files but one, so there's something funky going on here, and I need to see what SQL Server is really doing.
    Is there some way to print out a log, or look at a log somewhere?
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    The first thing to try is to see if BCP likes the file. BCP and BULK INSERT adhere to the same spec, but they are different implementations, and there are subtle differences.
    There is an ERRORFILE option, but it helps more when there is bad data.
    You can also use the BATCHSIZE option to see how many records in the file it swallows before things go bad. FIRSTROW and LASTROW can also help.
    All in all, it can be quite tedious to find that single row where things are different - and where BULK INSERT loses sync entirely. Keep in mind that it reads fields one by one, and if there is one field terminator too few on a line, it will consume the line
    feed at the end of the line as data.
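    A sketch of those options combined (table, path, and sizes are placeholders, not your actual names):
    bulk insert [dbo].[SomeTable]          -- hypothetical table
    from '\\server\share\file.txt'         -- hypothetical path
    with (FIELDTERMINATOR = '\t',
          ROWTERMINATOR = '0x0a',
          BATCHSIZE = 1000,                -- commits every 1000 rows, so you can see how far it got
          ERRORFILE = 'C:\temp\file_err')  -- rejected rows land here, details in file_err.Error.Txt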
    Erland Sommarskog, SQL Server MVP, [email protected]

  • ODBC, bulk inserts and dynamic SQL

    I am writing an application running on Windows NT 4, using the Oracle ODBC driver (8.01.05.00), that inserts many rows at a time (10000+) into an Oracle 8i database.
    At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
    Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions, such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the Oracle ODBC driver only seems to act on the first statement in the batch.
    I have also considered using the FORALL statement and the SQL*Loader utility.
    My problem with FORALL is that I'm not sure it works on dynamic SQL statements, and even if it did, how do I pass an array of statements to the stored procedure?
    I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires the spawning of a new process.
    What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement, where you can simply create an input file with all the records you want to insert, and pass it along in an ODBC statement such as "BULK INSERT <filename>".
    Any ideas??

    Hi,
    I faced this same situation years ago (Oracle 7.2!) and had the following alternatives.
    1) Use a 3rd party tool such as Sagent or CA Info pump (very pricey $$$)
    2) Use Visual C++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
    3) Use SQL*Loader (the best performance, but no real control of what's happening).
    I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
    These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
    Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.
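    For reference, option (3) boils down to a small control file like this (file, table, and column names are hypothetical):
    -- sketch of a SQL*Loader control file (bulk.ctl); run with: sqlldr user/pwd control=bulk.ctl
    LOAD DATA
    INFILE 'rows_to_insert.csv'
    APPEND INTO TABLE target_table
    FIELDS TERMINATED BY ','
    (col1, col2, col3)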


  • How to get current month from filename and bulk insert from text file into table?

    I set up some dynamic SQL to help bulk copy data from a text file to a table.  This works fine for files that come in every day; I get the previous day’s data, based on the file name that’s placed
    in the folder.  That’s why I’m using the ‘-1’.  The dates look like this: '20140131', so I'm using style 112.
    declare @fullpath1 varchar(1000)
    select @fullpath1 = '''\\system.local\ms\london\FTP\' + convert(varchar, getdate()-1, 112) + '_INDEXPRICES_EOM.SPC'''
    declare @cmd1 nvarchar(1000)
    select @cmd1 = 'bulk insert [dbo].[SB_Monthly] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')'
    print(@cmd1)
    exec (@cmd1)
    I think the syntax will be somewhat similar to this:
    YEAR(date_column)=YEAR(getdate()) AND MONTH(date_column)=MONTH(getdate())
    I’m not totally sure how to incorporate that into my current syntax.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    I tried a couple versions of this.
    Declare @StartDate Date, @EndDate Date
    Select @StartDate = convert(varchar, getdate()-28, 112), @EndDate = convert(varchar, getdate()-1, 112)
    BEGIN
    declare @fullpath1 varchar(1000)
    select @fullpath1 = '''\\ms\london\FTP\' + ''' between ''' + Convert(Varchar(10), @StartDate, 101) + ''' and ''' + Convert(Varchar(10), @EndDate, 101) + '''_SP.SPC'''
    declare @cmd1 nvarchar(1000)
    print (@cmd1)
    select @cmd1 = 'bulk insert [dbo].[SPBMI_Monthly] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')'
    print(@cmd1)
    exec (@cmd1)
    END
    Here’s the string:
    bulk insert [dbo].[SPBMI_Monthly] from '\\ms\london\FTP\' between '02/03/2014' and '03/02/2014'_SP.SPC' with (FIELDTERMINATOR = '\t', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR='0x0a')
    The error message I keep getting is:
    Msg 156, Level 15, State 1, Line 1
    Incorrect syntax near the keyword 'between'.
    Msg 319, Level 15, State 1, Line 1
    Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
    I feel like I’m already pushing this thing to the limit. 
    Maybe this last part isn’t possible.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
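    What the attempt above runs into is that BETWEEN belongs in a WHERE clause, not in a file path: BULK INSERT reads exactly one file per statement. One way to cover a range of daily files is to loop over the dates and build one path per pass. A sketch, assuming one file per day named yyyymmdd_SP.SPC on that share (a missing day's file would need a TRY/CATCH around the exec):
    declare @d date = dateadd(day, -28, getdate())
    declare @end date = dateadd(day, -1, getdate())
    declare @cmd1 nvarchar(1000), @fullpath1 varchar(1000)
    while @d <= @end
    begin
        select @fullpath1 = '''\\ms\london\FTP\' + convert(varchar(8), @d, 112) + '_SP.SPC'''
        select @cmd1 = 'bulk insert [dbo].[SPBMI_Monthly] from ' + @fullpath1
                     + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')'
        exec (@cmd1)
        set @d = dateadd(day, 1, @d)  -- next day's file
    end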

  • SSIS BULK INSERT using UNC inside of ForEach Loop Container failed: could not be opened. Operating system error code 5(Access is denied.)

    Hi,
    I am trying to figure out how to fix my problem
    Error: Could not be opened. Operating system error code 5(Access is denied.)
    Process Description:
    The target database server resides on a different machine in the network.
    The SSIS package runs from a remote server.
    The SSIS package uses a ForEachLoop Container to loop over a directory and do the Bulk Insert.
    The SSIS package uses variables to specify the share location of the files as a UNC path like this:
    \\server\files
    The service account the database engine runs under has full permission on the share where the files reside.
    The Execution Results tab shows the prepared SQL statement for the BULK INSERT, and I can run that exact same bulk insert in SSMS without errors, both from the database server and from the server where the SSIS package is executed.
    I am at a dead end, and I don’t want to rewrite the SSIS package to use a Data Flow Task, because that is not flexible to update when the metadata of the table changes.
    Below post it has almost the same situation:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/8de13e74-709a-43a5-8be2-034b764ca44f/problem-with-bulk-insert-task-in-foreach-loop?forum=sqlintegrationservices

    Interesting how I fixed the issue: adding the Application Name to the SQL OLAP connection string fixed it. I am not sure why SQL Server wasn't able to open the file remotely without this.
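    For reference, that amounts to appending the property like this, assuming an OLE DB style connection string (server, database, and name below are placeholders):
    Data Source=myserver;Initial Catalog=mydb;Provider=SQLNCLI11.1;Integrated Security=SSPI;Application Name=MyBulkInsertPackage;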

  • [Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line

    Introduction
    Some people want to know if we can have multiple field terminators in BULK INSERT or BCP commands, and how to implement multiple field terminators in BULK INSERT or BCP commands.
    Solution
    For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, as well as the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted
    as a terminator, not as data, and the data after that character is interpreted as belonging to the next field or record. I have done a test; if you want to use BULK INSERT or BCP commands with multiple field terminators, you can refer to the following commands.
    In the Windows command line:
    bcp <Databasename.schema.tablename> out "<path>" -c -t <field_terminator> -r <row_terminator> -T
    For example, you can export data from the Department table with the bcp command and use the comma and colon (,:) as one field terminator:
    bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
    (The resulting text file is shown as a screenshot in the original post.)
    However, if you specify multiple -t options, as in the following command, bcp will still use only the last terminator defined:
    bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
    (The resulting text file is shown as a screenshot in the original post.)
    Note that multiple terminator characters do not mean multiple fields. With comma as the terminator, a line in the format
    column1,,column2,,,column3
    might look like it separates only 3 fields (column1, column2 and column3). In fact, after testing, there are 6 fields here: each empty string between consecutive commas counts as a field. That is the significance of a field terminator (comma in this case).
    Meanwhile, when using BULK INSERT to import the data file into a SQL table, if you specify a terminator for the bulk import, you can only set multiple characters as one terminator in the BULK INSERT statement:
    USE <testdatabase>;
    GO
    BULK INSERT <your_table> FROM '<path>'
    WITH (
        DATAFILETYPE = '<char | native | widechar | widenative>',
        FIELDTERMINATOR = '<field_terminator>'
    );
    For example, when using BULK INSERT to import the data of the C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',:'
    );
    (The resulting table contents are shown as a screenshot in the original post.)
    We cannot declare multiple field terminators (, and :) in the query statement as in the following format; a duplicate-option error will occur.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',',
        FIELDTERMINATOR = ':'
    );
    However, if you want to use a data file with fewer or more fields than the table, you can handle it by setting the extra field length to 0 for fewer fields, or by omitting or skipping extra fields during the bulk copy procedure (for example via a format file).
    More Information
    For more information about field terminators, you can review the following articles:
    http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
    http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
    http://technet.microsoft.com/en-us/library/ms191485.aspx
    http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
    http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
    Applies to
    SQL Server 2012
    SQL Server 2008 R2
    SQL Server 2005
    SQL Server 2000

    Thanks,
    Is this a supported scenario, or does it use unsupported features?
    For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
    in a supported way?
    Thanks! Josh

  • Bulk Insert into a Table from CSV file

    I have a CSV file with 1000 records and I have to insert those records into a table.
    I tried the Bulk Insert command and the Load data infile command, but both throw errors.
    I am using Oracle 10g Express Edition.
    I want to achieve it through a query command and not by PL/SQL procedures.
    Please send me the query syntax for this problem.
    Thanks in Advance,
    Hariharan ST.

    Hi
    If you create an external table that points to your csv file, you will then be able to populate your table from a query.
    See: http://www.astral-consultancy.co.uk/cgi-bin/hunbug/doco.cgi?11210
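    A minimal sketch of that approach (directory path, file, and column names are hypothetical):
    -- one-time setup: a directory object pointing at the folder holding the csv
    CREATE DIRECTORY csv_dir AS '/home/oracle/csv';
    CREATE TABLE my_csv_ext (
      col1 VARCHAR2(50),
      col2 NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY csv_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('mydata.csv')
    );
    -- then a plain query command does the insert
    INSERT INTO my_table SELECT * FROM my_csv_ext;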
    Hope this helps

  • Error while running bulk insert in SSIS package

    Hi:
    I have an error when I am running bulk insert in SSIS package.
    I have implemented an SSIS package to update master data directly from R/3. R/3 gives the file in a specified format; I take this, insert all the records into a temporary table, and then update the mbr table and process the dimension.
    This works perfectly well in our development system, where both our app server and SQL server are on the same box. But in QAS the two servers are separate, and when I try to run the SSIS package I get the error below.
    We have tested all connections and are able to access the path and file from both the app server and the SQL server using the shared folder. Our Basis team says that it is a problem with the Bulk Insert task and nothing to do with any authorization.
    Has anyone experienced this sort of problem in a multi-server environment? Is there another way to load all data from a file into a bespoke table without using bulk insert?
    Thanks,
    Subramania
    Error----
    SSIS package "Package.dtsx" starting.
    Error: 0xC002F304 at Insert Data Into Staging Table (Account), Bulk Insert Task: An error occurred with the following error message: "Cannot bulk load because the file "\\msapbpcapq01\dim\entity.csv" could not be opened. Operating system error code 5(Access is denied.).".
    Task failed: Insert Data Into Staging Table (Account)
    SSIS package "Package.dtsx" finished: Success.
    The program '[2496] Package.dtsx: DTS' has exited with code 0 (0x0).

    Hi Subramania
    From your error:
    Error: 0xC002F304 at Insert Data Into Staging Table (Account), Bulk Insert Task: An error occurred with the following error message: "Cannot bulk load because the file "\\msapbpcapq01\dim\entity.csv" could not be opened. Operating system error code 5(Access is denied.).".
    Let's say server A is where the file entity.csv is located.
    Please check Event Viewer -> Security on server A at the time the SSIS package runs; there must be an entry with a logon failure, which shows what user was used to access the shared path.
    If your two servers are not in a domain, create the user on server A with the same name and password and grant it read access to the shared folder.
    The other workaround is to grant read access to Everyone on the shared folder.
    Halomoan

  • Bulk inserts on Solaris slow as compared to Windows

    Hi Experts,
    Looking for tips on troubleshooting bulk inserts on Solaris. I have observed that the same bulk inserts are quite fast on Windows compared to Solaris. Are there known issues on Solaris?
    This is the statement. I have a 'merge ... insert' query which has been in execution for a long time, more than 12 hours now:
    merge into A DEST using (select * from B SRC) SRC on (SRC.some_ID = DEST.some_ID) when matched then update ... when not matched then insert (...) values (...)
    Table A has 600K rows with a unique identifier column some_ID; Table B has 500K rows with the same some_ID column. The 'merge ... insert' checks if the some_ID exists; if yes, the update fires, and when not matched, the insert fires. In either case it takes a long time to execute.
    Environment:
    The version of the database is 10g Standard 10.2.0.3.0 - 64bit Production
    OS: Solaris 10, SPARC-Enterprise-T5120
    These are the parameters relevant to the optimizer:
    SQL>
    SQL> show parameter sga_target
    NAME                                 TYPE                VALUE
    sga_target                           big integer           4G
    SQL>
    SQL> show parameter sga_target
    NAME                                 TYPE                 VALUE
    sga_target                          big integer           4G
    SQL>
    SQL>  show parameter optimizer
    NAME                                        TYPE        VALUE
    optimizer_dynamic_sampling       integer        2
    optimizer_features_enable          string         10.2.0.3
    optimizer_index_caching             integer        0
    optimizer_index_cost_adj            integer       100
    optimizer_mode                          string         ALL_ROWS
    optimizer_secure_view_merging   boolean     TRUE
    SQL>
    SQL> show parameter db_file_multi
    NAME                                        TYPE        VALUE
    db_file_multiblock_read_count        integer     16
    SQL>
    SQL> show parameter db_block_size
    NAME                                        TYPE        VALUE
    db_block_size                           integer     8192
    SQL>
    SQL> show parameter cursor_sharing
    NAME                                 TYPE        VALUE
    cursor_sharing                    string      EXACT
    SQL>
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL>
    SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME                PNAME                     PVAL1               PVAL2
    SYSSTATS_INFO        STATUS                                    COMPLETED
    SYSSTATS_INFO        DSTART                                    07-12-2005 07:13
    SYSSTATS_INFO        DSTOP                                      07-12-2005 07:13
    SYSSTATS_INFO        FLAGS                  1
    SYSSTATS_MAIN        CPUSPEEDNW       452.727273
    SYSSTATS_MAIN        IOSEEKTIM           10
    SYSSTATS_MAIN        IOTFRSPEED         4096
    SYSSTATS_MAIN        SREADTIM
    SYSSTATS_MAIN        MREADTIM
    SYSSTATS_MAIN        CPUSPEED
    SYSSTATS_MAIN        MBRC
    SYSSTATS_MAIN        MAXTHR
    SYSSTATS_MAIN        SLAVETHR
    13 rows selected.
    Following are the messages being pushed into the Oracle alert log file:
    Thu Dec 10 01:41:13 2009
    Thread 1 advanced to log sequence 1991
      Current log# 1 seq# 1991 mem# 0: /oracle/oradata/orainstance/redo01.log
    Thu Dec 10 04:51:01 2009
    Thread 1 advanced to log sequence 1992
      Current log# 2 seq# 1992 mem# 0: /oracle/oradata/orainstance/redo02.log
    Please provide some tips to troubleshoot the actual issue. Any pointers on db_block_size, SGA, or PGA that could be the reason for this?
    Regards,
    neuron

    SID   SEQ#    EVENT                      WAIT_CLASS_ID  WAIT_CLASS#  WAIT_TIME  SECONDS_IN_WAIT  STATE
    125   24235   'db file sequential read'  1740759767     8            -1         58608            'WAITED SHORT TIME'
    Regarding the disk, I am not sure what needs to be checked; however, from the output of iostat it does not seem to be busy. Check the last three rows; the %b column is negligible:
    tty         cpu
    tin tout  us sy wt id
       0  320   3  0  0 97
                        extended device statistics
        r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 ramdisk1
        0.0    2.5    0.0   18.0  0.0  0.0    0.0    8.3   0   1 c1t0d0
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t1d0
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0

  • BULK INSERT from a text (.csv) file - read only specific columns.

    I am using Microsoft SQL 2005.  I need to do a BULK INSERT from a .csv I just downloaded from PayPal.  I can't edit some of the columns that are given in the report.  I am trying to load specific columns from the file.
    bulk insert Orders
    FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
    WITH (
        FIELDTERMINATOR = ',',
        FIRSTROW = 2,
        ROWTERMINATOR = '\n'
    );
    So where would I state which column names (from row #1 of the .csv file) map to which specific columns in the table?
    I saw this on one of the sites, and it seemed to guide me towards the answer, but I failed. Here you go, it might help you:
    FORMATFILE [ = 'format_file_path' ]
    Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
    The data file contains greater or fewer columns than the table or view.
    The columns are in a different order.
    The column delimiters vary.
    There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.

    Date, Time, Time Zone, Name, Type, Status, Currency, Gross, Fee, Net, From Email Address, To Email Address, Transaction ID, Item Title, Item ID, Buyer ID, Item URL, Closing Date, Reference Txn ID, Receipt ID,
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392302", "jdal32", "http://ddd.com", "04/22/03", "", "",
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392932930302", "jejsl32", "http://ddd.com", "04/22/03", "", "",
    Do you need more than 2 rows? I did not include all the columns from the actual csv file, but most of them. I am planning on taking these specific columns into the first table: date, to email address, transaction ID, item title, item ID, buyer ID, item URL.
    For the other table I don't have any values listed here, but if you show me how to do this one, I could probably figure the other table out.
    Thank you very much.
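    For what it's worth, a sketch of a bcp format file that loads only selected columns (all names and lengths below are hypothetical, and it assumes a plain comma-delimited layout; PayPal's quoted fields would need the terminators written as "\",\"" instead). A host-file field whose server column is 0 is skipped:
    9.0
    5
    1   SQLCHAR   0   20    ","      1   OrderDate      ""
    2   SQLCHAR   0   20    ","      0   Time_SKIP      ""
    3   SQLCHAR   0   20    ","      0   TimeZone_SKIP  ""
    4   SQLCHAR   0   100   ","      2   Name           ""
    5   SQLCHAR   0   100   "\r\n"   3   TransactionID  ""
    Then the load references the format file instead of fixed terminators:
    BULK INSERT Orders
    FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
    WITH (FORMATFILE = 'C:\Users\*******\Desktop\orders.fmt', FIRSTROW = 2);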

  • BULK INSERT into View w/ Instead Of Trigger - DML ERROR LOGGING Issue

    Oracle 10.2.0.4
    I cannot figure out why I cannot get bulk insert errors to aggregate and allow the insert to continue when bulk inserting into a view with an INSTEAD OF trigger. Whether I use the LOG ERRORS clause or SQL%BULK_EXCEPTIONS, the insert works until it hits the first exception and then exits.
    Here's what I'm doing:
    1. I'm bulk inserting into a view with an INSTEAD OF trigger on it that performs the actual updating on the underlying table. This table is a child table with a foreign key constraint to a reference table containing the primary key. When the trigger attempts to insert a record into the child table, I get the following exception: 5:37:55 ORA-02291: integrity constraint (FK_TEST_TABLE) violated - parent key not found, which is expected, but the error should be logged in the table and the rest of the inserts should complete. Instead the bulk insert exits.
    2. If I change this to bulk insert into the underlying table directly, it works, all errors get put into the error logging table and the insert completes all non-exception records.
    Here's the "test" procedure I created to test my scenario:
    View: V_TEST_TABLE
    Underlying Table: TEST_TABLE
    PROCEDURE BulkTest
    IS
    TYPE remDataType IS TABLE of v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
    varRemData remDataType;
    begin
    select /*+ DRIVING_SITE(r)*/ *
    BULK COLLECT INTO varRemData
    from TEST_TABLE@REMOTE_LINK
    where effectiveday < to_date('06/16/2012 04','mm/dd/yyyy hh24')
    and terminationday > to_date('06/14/2012 04','mm/dd/yyyy hh24');
    BEGIN
    FORALL idx IN varRemData.FIRST .. varRemData.LAST
    INSERT INTO v_TEST_TABLE VALUES varRemData(idx) LOG ERRORS INTO dbcompare.ERR$_TEST_TABLE ('INSERT') REJECT LIMIT UNLIMITED;
    EXCEPTION WHEN others THEN
    DBMS_OUTPUT.put_line('ErrorCode: '||SQLCODE);
    END;
    COMMIT;
    end;
    I've reviewed Oracle's documentation on both DML logging tools and neither has any restrictions (at least that I can see) that would prevent this from working correctly.
    Any help would be appreciated....
    Thanks,
    Steve

    Thanks. Obviously this is my first post; I'm desperate to figure out why this won't work.
    The code I sent is only a test proc to try to troubleshoot the issue. The other procs with the debug statement are only there to capture the insert failing and not aggregating the errors; that won't be in the real proc.
    Thanks,
    Steve
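    For comparison, a minimal FORALL ... SAVE EXCEPTIONS sketch (reusing the names from the test proc above; the WHERE clause is shortened). With SAVE EXCEPTIONS every row is attempted, ORA-24381 is raised at the end, and each failed index is reported; whether this survives the INSTEAD OF trigger path is exactly what this thread is asking:
    DECLARE
      TYPE remDataType IS TABLE OF v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
      varRemData remDataType;
      bulk_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
    BEGIN
      SELECT * BULK COLLECT INTO varRemData
      FROM TEST_TABLE@REMOTE_LINK
      WHERE effectiveday < SYSDATE;
      FORALL idx IN varRemData.FIRST .. varRemData.LAST SAVE EXCEPTIONS
        INSERT INTO v_TEST_TABLE VALUES varRemData(idx);
    EXCEPTION
      WHEN bulk_errors THEN
        FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.put_line('Index ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                               ' failed with ORA-' || SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
        END LOOP;
    END;
    /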

  • Is there a way to dynamically figure out the FIRSTROW & LASTROW using Bulk Insert?

    I noticed most of my files follow a certain convention, so I can use Bulk Insert with this:
    FIRSTROW = 2, LASTROW = 224
    However a few files have many more records.  I'd like SQL Server to dynamically figure out the first and last rows with data, but I don't know if that's possible.  Can someone confirm?
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    Nope, you would need to read the files from some program. It could be a CLR stored procedure if you want to do it from SQL Server. But with BULK INSERT alone, no.
    Erland Sommarskog, SQL Server MVP, [email protected]
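    One workaround sometimes used: bulk load the file into a one-column staging table first, count the rows, and derive LASTROW from that before the real load. A sketch under that assumption (table and path are hypothetical; the field terminator '\0' is assumed not to occur in the file, so each whole line lands in the single column):
    CREATE TABLE dbo.RawLines (line varchar(4000));
    BULK INSERT dbo.RawLines
    FROM '\\server\share\file.txt'
    WITH (FIELDTERMINATOR = '\0', ROWTERMINATOR = '0x0a');
    DECLARE @last int = (SELECT COUNT(*) FROM dbo.RawLines);
    DECLARE @cmd nvarchar(1000) =
        'bulk insert dbo.RealTable from ''\\server\share\file.txt'''
      + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 2, LASTROW = '
      + CONVERT(varchar(10), @last) + ', ROWTERMINATOR = ''0x0a'')';
    EXEC (@cmd);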
