Query on Bulk Insert
I am using bulk insert to load data into some tables. What limit should I specify for commits in order to get the best possible performance?
I don't think there is any fixed rule for identifying the limit number.
As Morgan suggested, 2500 or thereabouts should serve your purpose.
Kindly check the following links:
http://www.psoug.org/reference/array_processing.html
http://www.oracle.com/technology/oramag/oracle/08-mar/o28plsql.html
http://www.dba-oracle.com/plsql/t_plsql_limit_clause.htm
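As a rough sketch of the pattern those links describe (the table names, cursor, and the 2500 batch size here are illustrative, not prescriptive):

```sql
DECLARE
  CURSOR c_src IS SELECT * FROM source_table;   -- hypothetical source
  TYPE t_rows IS TABLE OF c_src%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- Fetch in batches; benchmark values between a few hundred and a few thousand
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 2500;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_table VALUES l_rows(i);  -- hypothetical target
    COMMIT;  -- one commit per batch rather than per row
  END LOOP;
  CLOSE c_src;
END;
/
```

The right LIMIT is workload-dependent, so measure a few values rather than trusting any single number.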
Regards.
Satyaki De.
Similar Messages
-
[Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line
Introduction
Some people want to know whether we can have multiple field terminators in BULK INSERT or BCP commands, and how to implement them.
Solution
For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, as well as the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and everything after that character belongs to the next field or record. I have done a test: if you use BULK INSERT or BCP commands and want to set multiple characters as one field terminator, you can refer to the following commands.
In Windows command line,
bcp <Databasename.schema.tablename> out "<path>" -c -t <field_terminator> -r <row_terminator> -T
For example, you can export data from the Department table with bcp command and use the comma and colon (,:) as one field terminator.
bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
The resulting txt file looks as follows:
However, if you try to specify multiple separate field terminators, as in the following command, bcp will still use only the last terminator defined:
bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
The resulting txt file looks as follows:
If you expect multiple field terminators to mean multiple fields, consider the comma-separated line below:
column1,,column2,,,column3
At first glance this separates only 3 fields (column1, column2 and column3). In fact, after testing, there are 6 fields here. That is the significance of a field terminator (the comma in this case).
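The doubling is easy to see by splitting on the single-character terminator; a quick sketch in Python:

```python
# Each comma acts as a field terminator, so consecutive commas
# delimit empty fields rather than being ignored.
line = "column1,,column2,,,column3"
fields = line.split(",")
print(len(fields))  # 6
print(fields)       # ['column1', '', 'column2', '', '', 'column3']
```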
Meanwhile, when using BULK INSERT to import the data file into a SQL table, if you specify a terminator for the BULK import you can only set multiple characters as one terminator in the BULK INSERT statement.
USE <testdatabase>;
GO
BULK INSERT <your table> FROM '<path>'
WITH (
DATAFILETYPE = 'char/native/widechar/widenative',
FIELDTERMINATOR = '<field_terminator>'
);
For example, to use BULK INSERT to import the data of the C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
In SQL Server Management Studio Query Editor:
BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',:'
);
The new table contents look as follows:
We cannot declare multiple field terminators (, and :) in the query statement in the following format; a duplicate-option error will occur.
In SQL Server Management Studio Query Editor:
BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',',
FIELDTERMINATOR = ':'
);
However, if you want to use a data file with fewer or more fields than the table, you can handle it by setting the extra field length to 0 for fewer fields, or by omitting or skipping fields, during the bulk-copy procedure.
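For illustration, a non-XML bcp format file can skip a data-file field by mapping it to server column 0. The column names, widths, and table layout below are hypothetical:

```text
9.0
3
1  SQLCHAR  0  100  ","     1  DepartmentID  ""
2  SQLCHAR  0  100  ","     0  SkipMe        ""
3  SQLCHAR  0  100  "\r\n"  2  Name          SQL_Latin1_General_CP1_CI_AS
```

The columns are: data-file field order, data type, prefix length, maximum length, terminator, server column order (0 = skip), server column name, and collation.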
More Information
For more information about field terminators, you can review the following articles.
http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
http://technet.microsoft.com/en-us/library/ms191485.aspx
http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
Applies to
SQL Server 2012
SQL Server 2008R2
SQL Server 2005
SQL Server 2000
Thanks,
Is this a supported scenario, or does it use unsupported features?
For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
in a supported way?
Thanks! Josh -
Hello, I have an extremely complicated query, that has a structure similar to:
Overall Query
---SubQueryA
-------SubQueryB
---SubQueryB
---SubQueryC
-------SubQueryA
The subqueries themselves are slow, and having to run them multiple times is much too slow! Ideally, I would run each subquery once and then reuse the results. I cannot use standard Oracle tables, and I would need to keep the results of the subqueries in memory.
I was thinking I could write a PL/SQL script that ran the subqueries at the beginning and stored the results in memory. Then, in the overall query, I could loop through the results in memory and join the results of the various subqueries to one another.
some questions:
- what is the best data structure to use? I've been looking around and there are nested tables/arrays, and there's the bulk collect/insert functionality, but I'm not sure which is the best to use
- the advantage of the method I'm suggesting is that I only have to do each subquery once. But when I start joining the results of the subqueries to one another, will I take a performance hit? Will Oracle not be able to optimize the joins?
thanks in advance!
Coop

"I cannot use standard oracle tables" - what does this mean? If you have subqueries, I assume you have tables to drive them? You're in an Oracle forum, so I assume the tables are Oracle tables.
If so, you can look into the WITH clause; it can 'cache' the query results for you and reuse them multiple times, and it is also helpful in making large queries with many subqueries more readable. -
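As a sketch of that WITH-clause approach (the tables, columns, and subquery bodies are hypothetical; the MATERIALIZE hint is an undocumented but commonly cited way to encourage Oracle to evaluate the factored subquery once):

```sql
WITH
  sub_a AS (SELECT /*+ MATERIALIZE */ dept_id, SUM(sal) AS total_sal
            FROM emp GROUP BY dept_id),
  sub_b AS (SELECT dept_id, total_sal FROM sub_a WHERE total_sal > 10000),
  sub_c AS (SELECT dept_id, total_sal FROM sub_a WHERE total_sal <= 10000)
SELECT b.dept_id, b.total_sal, c.total_sal
FROM   sub_b b
FULL OUTER JOIN sub_c c ON b.dept_id = c.dept_id;
```

Here sub_a plays the role of the shared subquery: it is written once and referenced by both sub_b and sub_c.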
Bulk Insert into a Table from CSV file
I have a CSV file with 1000 records and I have to insert those records into a table.
I tried the Bulk Insert command and the Load Data Infile command, but they throw errors.
I am using Oracle 10g Express Edition.
I want to achieve it through a query command and not via PL/SQL procedures.
Please send me query syntax for this problem. . . .
Thanks in Advance,
Hariharan ST.

Hi
If you create an external table that points to your csv file, you will then be able to populate your table from a query.
See: http://www.astral-consultancy.co.uk/cgi-bin/hunbug/doco.cgi?11210
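As a sketch of the external-table approach (the directory path, file name, and columns are hypothetical; creating a directory object requires the appropriate privilege):

```sql
-- One-time setup: a directory object pointing at the folder holding the CSV.
CREATE OR REPLACE DIRECTORY data_dir AS 'C:\data';

CREATE TABLE emp_ext (
  emp_id   NUMBER,
  emp_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION ('emp.csv')
);

-- A plain query then populates the real table.
INSERT INTO emp (emp_id, emp_name)
SELECT emp_id, emp_name FROM emp_ext;
```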
Hope this helps -
Bulk inserts on Solaris slow as compared to windows
Hi Experts,
Looking for tips on troubleshooting bulk inserts on Solaris. I have observed that the same bulk inserts are quite fast on Windows compared to Solaris. Are there known issues on Solaris?
This is the statement:
I have a 'merge...insert...' query which has been executing for a long time, more than 12 hours now:
merge into A DEST
using (select * from B SRC) SRC
on (SRC.some_ID = DEST.some_ID)
when matched then update ...
when not matched then insert (...) values (...)

Table A has 600K rows with a unique-identifier some_ID column; Table B has 500K rows with the same some_ID column. The 'merge...insert' checks whether the some_ID exists: if yes, the update fires; when not matched, the insert fires. In either case it takes a long time to execute.
Environment:
The version of the database is 10g Standard 10.2.0.3.0 - 64bit Production
OS: Solaris 10, SPARC-Enterprise-T5120
These are the parameters relevant to the optimizer:
SQL>
SQL> show parameter sga_target
NAME TYPE VALUE
sga_target big integer 4G
SQL>
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL>
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL>
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL>
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL>
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 07-12-2005 07:13
SYSSTATS_INFO DSTOP 07-12-2005 07:13
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 452.727273
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Following are the messages appearing in the Oracle alert log file:
Thu Dec 10 01:41:13 2009
Thread 1 advanced to log sequence 1991
Current log# 1 seq# 1991 mem# 0: /oracle/oradata/orainstance/redo01.log
Thu Dec 10 04:51:01 2009
Thread 1 advanced to log sequence 1992
Current log# 2 seq# 1992 mem# 0: /oracle/oradata/orainstance/redo02.log

Please provide some tips to troubleshoot the actual issue. Any pointers on db_block_size, SGA, or PGA that could be the reason for this slowness?
Regards,
neuron

SID, SEQ#, EVENT, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_TIME, SECONDS_IN_WAIT, STATE
125 24235 'db file sequential read' 1740759767 8 -1 58608 'WAITED SHORT TIME'

Regarding the disk, I am not sure what needs to be checked; however, from the output of iostat it does not seem to be busy. Check the last three rows: the %b column is negligible:
tty cpu
tin tout us sy wt id
0 320 3 0 0 97
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 ramdisk1
0.0 2.5 0.0 18.0 0.0 0.0 0.0 8.3 0 1 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0 -
How to debug bulk insert?
I have this code, which doesn't cause any error and actually gives the message 'query executed successfully', but it doesn't load any data.
bulk insert [dbo].[SPGT]
from '\\sys.local\london-sql\FTP\20140210_SPGT.SPL'
WITH (
KEEPNULLS,
FIRSTROW = 5,
FIELDTERMINATOR = '\t',
ROWTERMINATOR = '\n'
);
How can I debug the issue, or see what the script is REALLY doing? It's not doing what I think it's doing.
All permissions, rights, etc. are set up correctly. I just ran the same code successfully with a .txt file. Maybe it has something to do with the extension...
Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

Yes, here is the final solution (for the benefit of others who find this anytime in the future).
CREATE TABLE [dbo].[ICM]
(
Date DATETIME,
Type VARCHAR(MAX),
Change VARCHAR(MAX),
SP_ID VARCHAR(MAX),
Sedol VARCHAR(MAX),
Cusip VARCHAR(MAX),
Issue_Name VARCHAR(MAX),
Cty VARCHAR(MAX),
PE VARCHAR(MAX),
Cap_Range VARCHAR(MAX),
GICS VARCHAR(MAX),
Curr VARCHAR(MAX),
Local_Price DECIMAL(19,8),
Index_Total_Shares DECIMAL(19,8),
IWF DECIMAL(19,8),
Index_Curr VARCHAR(MAX),
Float_MCAP DECIMAL(19,8),
Total_MCAP DECIMAL(19,8),
Daily_Price_Rtn DECIMAL(19,8),
Daily_Total_Rtn DECIMAL(19,8),
FX_Rate DECIMAL(19,8),
Growth_Weight DECIMAL(19,8),
Value_Weight DECIMAL(19,8),
Bloomberg_ID VARCHAR(MAX),
RIC VARCHAR(MAX),
Exchange_Ticker VARCHAR(MAX),
ISIN VARCHAR(MAX),
SSB_ID VARCHAR(MAX),
REIT_Flag VARCHAR(MAX),
Weight DECIMAL(19,8),
Shares DECIMAL(19,8)
);

BULK INSERT dbo.ICM
FROM 'C:\Documents and Settings\london\Desktop\ICM.txt'
WITH (
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
);
GO
This was a bit confusing at first, because I'd never done it before, and also I was getting all kinds of errors, which turned out to be numbers in string fields and strings in number fields. Basically, the data that was given to me was totally screwed up, which compounded the problem exponentially. I finally got the correct data, and I'm all set now.
Thanks everyone!
Knowledge is the only thing that I can give you, and still retain, and we are both better off for it. -
I'm running SQL Server 2008 R2 and trying to test out bcp in one of our databases. For almost all the tables, the bcp out and bulk insert work fine using commands similar to those below. However, on a few tables I am experiencing an issue when trying to BULK INSERT the data back in.
Here are the details:
This is the bcp command to export out the data (via simple batch file):
1.)
SET OUTPUT=K:\BCP_FIN_Test
SET ERRORLOG=C:\Temp\BCP_Error_Log
SET TIMINGS=C:\Temp\BCP_Timings
bcp "SELECT * FROM FS84RPT.dbo.PS_PO_LINE Inner Join FS84RPT.[dbo].[PS_RECV_LN_ACCTG] on PS_PO_LINE.BUSINESS_UNIT = PS_RECV_LN_ACCTG.BUSINESS_UNIT_PO and PS_PO_LINE.PO_ID= PS_RECV_LN_ACCTG.PO_ID and PS_PO_LINE.LINE_NBR= PS_RECV_LN_ACCTG.LINE_NBR WHERE
PS_RECV_LN_ACCTG.FISCAL_YEAR = '2014' and PS_RECV_LN_ACCTG.ACCOUNTING_PERIOD BETWEEN '9' AND '11' " queryout %OUTPUT%\PS_PO_LINE.txt -e %ERRORLOG%\PS_PO_LINE.err -o %TIMINGS%\PS_PO_LINE.txt -T -N
2.)
BULK INSERT PS_PO_LINE FROM 'K:\BCP_FIN_Test\PS_PO_LINE.txt' WITH (DATAFILETYPE = 'widenative')
Msg 4869, Level 16, State 1, Line 1
The bulk load failed. Unexpected NULL value in data file row 2, column 22. The destination column (CNTRCT_RATE_MULT) is defined as NOT NULL.
Msg 4866, Level 16, State 4, Line 1
The bulk load failed. The column is too long in the data file for row 3, column 22. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
I've tried a few different things including trying to export as character and import as BULK INSERT PS_PO_LINE FROM 'K:\BCP_FIN_Test\PS_PO_LINE.txt' WITH (DATAFILETYPE = 'char')
But no luck
Appreciate the help.

It seems that the target table does not match your expectations.
Since I don't know exactly what you are doing, I will have to resort to guesses.
I note that your export query goes:
SELECT * FROM FS84RPT.dbo.PS_PO_LINE Inner Join
And then you are importing into a table called PS_PO_LINE as well. But for your operation to make sense the import PS_PO_LINE must not only have the columns from the PS_PO_LINE, but also all columns from PS_RECV_LN_ACCTG. Maybe your SELECT should read
SELECT PS_PO_LINE.* FROM FS84RPT.dbo.PS_PO_LINE Inner Join
or use an EXISTS clause to add the filter of PS_RECV_LN_ACCTG table. (Assuming that it appears in the query for filtering only.)
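Sketched out, the EXISTS variant (assuming PS_RECV_LN_ACCTG really is there for filtering only) would look like:

```sql
SELECT P.*
FROM   FS84RPT.dbo.PS_PO_LINE AS P
WHERE  EXISTS (SELECT *
               FROM   FS84RPT.dbo.PS_RECV_LN_ACCTG AS R
               WHERE  R.BUSINESS_UNIT_PO = P.BUSINESS_UNIT
                 AND  R.PO_ID            = P.PO_ID
                 AND  R.LINE_NBR         = P.LINE_NBR
                 AND  R.FISCAL_YEAR      = '2014'
                 AND  R.ACCOUNTING_PERIOD BETWEEN '9' AND '11');
```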
Erland Sommarskog, SQL Server MVP, [email protected] -
SQL Server 2008 - RS - Loop of multiple Bulk Inserts
Hi,
I want to import multiple flat files to a table on SQL Server 2008 R2. However, I don't have access to Integration Services to use a foreach loop, so I'm doing the process using T-SQL. Actually, I'm manually coding which file to load the data from. My code is like this:
CREATE TABLE #temporaryTable
(
[column1] [varchar](100) NOT NULL,
[column2] [varchar](100) NOT NULL
);

BULK INSERT #temporaryTable
FROM 'C:\Teste\testeFile01.txt'
WITH (
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n',
FIRSTROW = 1
);
GO
BULK INSERT #temporaryTable
FROM 'C:\Teste\testeFile02.txt'
WITH (
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n',
FIRSTROW = 1
);
GO
-------------------------------------------------
INSERT INTO dbo.TESTE (Col_1, Col_2)
SELECT RTRIM(LTRIM([column1])), RTRIM(LTRIM([column2])) FROM #temporaryTable;

IF EXISTS (SELECT * FROM #temporaryTable) DROP TABLE #temporaryTable;
The problem is that I have 20 flat files to insert... Do I have any loop solution in T-SQL to insert all the flat files into the same table?
Thanks!

Here is a working sample of a PowerShell script I adapted from the internet (I don't have the source handy now).
Import-Module -Name 'SQLPS' -DisableNameChecking
$workdir = "C:\temp\test\"
$svrname = "MC\MySQL2014"
Try {
    # Change default timeout from 600 to unlimited
    $svr = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $svrname
    $svr.ConnectionContext.StatementTimeout = 0
    $table = "test1.dbo.myRegions"
    # Remove the filename column in the target table
    $q1 = @"
Use test1;
IF COL_LENGTH('dbo.myRegions','filename') IS NOT NULL
BEGIN
ALTER TABLE test1.dbo.myRegions DROP COLUMN filename;
END
"@
    Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -Query $q1
    $dt = (Get-Date).ToString("yyyyMMdd")
    $formatfilename = "$($table)_$($dt).xml"
    $destination_formatfilename = "$($workdir)$($formatfilename)"
    $cmdformatfile = "bcp $table format nul -c -x -f $($destination_formatfilename) -T -t\t -S $($svrname)"
    Invoke-Expression $cmdformatfile
    # Delay 1 second
    Start-Sleep -s 1
    # Add the filename column to the target table
    $q2 = @"
Alter table test1.dbo.myRegions Add filename varchar(500) Null;
"@
    Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -Query $q2
    $files = Get-ChildItem $workdir
    $items = $files | Where-Object {$_.Extension -eq ".txt"}
    for ($i = 0; $i -lt $items.Count; $i++) {
        $strFileName = $items[$i].Name
        $strFileNameNoExtension = $items[$i].BaseName
        $query = @"
BULK INSERT test1.dbo.myRegions from '$($workdir)$($strFileName)' WITH (FIELDTERMINATOR = '\t', FIRSTROW = 2, FORMATFILE = '$($destination_formatfilename)');
"@
        Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -Query $query -QueryTimeout 65534
        # Delay 10 seconds
        Start-Sleep -s 10
        # Update the filename column
        Invoke-Sqlcmd -ServerInstance $svr.Name -Database master -QueryTimeout 65534 -Query "Update test1.dbo.myRegions SET filename = '$($strFileName)' WHERE filename is null;"
        # Move the uploaded file to the archive
        If ((Test-Path "$($workdir)$($strFileName)") -eq $True) { Move-Item -Path "$($workdir)$($strFileName)" -Destination "$($workdir)Processed\$($strFileNameNoExtension)_$($dt).txt" }
    }
}
Catch [Exception] {
    Write-Host "--$strFileName " $_.Exception.Message
}
-
First Row Record is not inserted from CSV file while bulk insert in sql server
Hi Everyone,
I have a csv file that needs to be inserted in sql server. The csv file will be format will be like below.
1,Mr,"x,y",4
2,Mr,"a,b",5
3,Ms,"v,b",6
During bulk insert it considers the 2nd column as two values (comma-separated) and makes two entries, so I used FieldTerminator.xml.
Now the fields are entered into the columns correctly. But the problem is that the first row of the csv file is not read into SQL Server. When I remove the terminator, I get all the records, but I must use the above terminator; when I use it, I do not get the first row record.
Please suggests me some solution.
Thanks,
Selvam

Hi,
I have a csv file (comma(,) delimited) like this, which is to be inserted into SQL Server. The format of the file when opened in Notepad is like below:
Id,FirstName,LastName,FullName,Gender
1,xx,yy,"xx,yy",M
2,zz,cc,"zz,cc",F
3,aa,vv,"aa,vv",F
The below is the bulk insert query which is used for inserting the above records:
EXEC('BULK INSERT EmployeeData FROM ''' + @FilePath + ''' WITH
(FORMATFILE = ''d:\FieldTerminator.xml'',
ROWTERMINATOR = ''\n'',
FIRSTROW = 2)')
Here, I have used format file for the "Fullname" which has comma(,) within the field. The format file is:
The problem is that it skips the first record (1,xx,yy,"xx,yy",M) when I use the format file. When I remove the format file from the query, it takes all the records, but the "FullName" field causes a problem because of the comma(,) within the field, so I must use the format file to handle this. Please suggest why the first record is always skipped when I use the above format file.
If I give "FIRSTROW=1" in the bulk insert, it shows the "String or binary data would be truncated. The statement has been terminated." error. I have checked the datatype lengths.
Please update me the solution.
Regards,
Selvam. M -
Number of rows inserted is different in bulk insert using select statement
I am facing a problem in bulk insert using SELECT statement.
My sql statement is like below.
strQuery :='INSERT INTO TAB3
(SELECT t1.c1,t2.c2
FROM TAB1 t1, TAB2 t2
WHERE t1.c1 = t2.c1
AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
EXECUTE IMMEDIATE strQuery ;
These SQL statements are inside a procedure. And this procedure is called from C#.
The number of rows returned by the SELECT query is 70.
On the very first call of this procedure, the number of rows inserted using strQuery is 70.
But on the next call (in the same transaction) of the procedure, the number of rows inserted is only 50.
And on further repeated calls of this procedure, it will insert sometimes 70, sometimes 50, etc. It is showing some inconsistency.
On my initial analysis it was found that the default optimizer mode is ALL_ROWS. When I changed the optimizer mode to RULE, this issue did not occur.
Has anybody faced this kind of issue?
Can anyone tell what could be the reason for this issue? Is there any other workaround for this?
I am using Oracle 10g R2 version.
Edited by: user13339527 on Jun 29, 2010 3:55 AM
Edited by: user13339527 on Jun 29, 2010 3:56 AM

You have very likely concurrent transactions on the database:
>
By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.
>
If you want to make sure that the same query always retrieves the same rows in a given transaction you need to use transaction isolation level serializable instead of read committed which is the default in Oracle.
Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
You can try to run your test with:
set transaction isolation level serializable;

If the problem is not solved, you need to search for possible Oracle bugs on My Oracle Support with keywords like:
wrong results 10.2

Edited by: P. Forstmann on June 29, 2010 13:46 -
Hi All,
I have a cursor in a PL/SQL block and it selects data for about 100 columns. My requirement is to insert the data of about 60 columns into one table and the rest of the columns into another. Please let me know how to implement this using BULK INSERT.
Thanks for your help in advance.

Why not dispense with the CURSOR and instead use a multi-table INSERT...SELECT?
INSERT ALL
INTO table1 (col1, col2, ..., col60)
VALUES (col1, col2, ..., col60)
INTO table2 (col61, ..., col100)
VALUES (col61, ..., col100)
SELECT col1, ..., col100
FROM etc.

where the SELECT is doing the same query as the CURSOR.
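A fuller sketch of the multi-table insert, with hypothetical tables and a trimmed column list:

```sql
-- Every row returned by the SELECT feeds both INTO clauses,
-- so the source is scanned only once.
INSERT ALL
  INTO table1 (col1, col2) VALUES (col1, col2)
  INTO table2 (col3, col4) VALUES (col3, col4)
SELECT col1, col2, col3, col4
FROM   source_table
WHERE  status = 'ACTIVE';   -- same predicate the cursor would have used
```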
Edited by: user142857 on Nov 12, 2009 10:06 AM -
I am trying to do bulk insert to MS Access database from text file. One of the solutions recommended by bbritta is as follows
import java.sql.*;
public class Test3 {
public static void main(String[] arghs) {
try {
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
String filename = "C:/DB1.mdb";
String database =
"jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=C:/DB1.MDB";
Connection con = DriverManager.getConnection(database, "", "");
Statement statement = con.createStatement();
statement.execute("INSERT INTO Table1 SELECT * FROM [Text;Database=C:\\;HDR=YES].[TextFile.txt]");
statement.close();
con.close();
} catch (Exception e) { e.printStackTrace(); }
}
}

Whenever I try to use that approach, I get the error message:
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Number of query values and destination fields are not the same.
at sun.jdbc.odbc.JdbcOdbc.createSQLException(JdbcOdbc.java:6958)
at sun.jdbc.odbc.JdbcOdbc.standardError(JdbcOdbc.java:7115)
at sun.jdbc.odbc.JdbcOdbc.SQLExecDirect(JdbcOdbc.java:3111)
at sun.jdbc.odbc.JdbcOdbcStatement.execute(JdbcOdbcStatement.java:338)
Fields in the Access destination table are exactly the same as in the text file, and I still get an error message. I could manually import into Access from the same file without any problem.
I was wondering if someone out there could suggest another approach.

>
1) Is there a type-4 JDBC connector available to
connect directly to MS Access databases and if so
would it be difficult to implement or migrate to?
This is important because dbAnywhere does not appear
to be supported on Windows 2000, which is the
platform we are migrating to. We need to eliminate
dbAnywhere if possible.
By definition no such driver can exist. A type 4 driver is java only and connects directly to the database. Excluding file writes the only connection method is via sockets and there is nothing for a socket to connect to in a MS Access database - MS Access doesn't work that way.
You can look into type 3 driver. I believe there are a number of them. They use an intermediate server. Search here http://industry.java.sun.com/products/jdbc/drivers
You could implement your own using RmiJdbc at http://www.objectweb.org/. However I personally would think that that would require a serious long look at security issues before exposing a solution to the internet. -
Blob truncated with DbFactory and Bulk insert
Hi,
My platform is a Microsoft Windows Server 2003 R2 Server 5.2 Service Pack 2 (64-bit) with an Oracle Database 11g 11.1.0.6.0.
I use the client Oracle 11g ODAC 11.1.0.7.20.
Some strange behavior happens when using DbFactory and a bulk command with a Blob column and a parameter larger than 65536 bytes. Let me explain.
First i create a dummy table in my schema :
create table dummy (a number, b blob)

To use bulk insert we can use code A with Oracle objects (executes successfully):
byte[] b1 = new byte[65530];
byte[] b2 = new byte[65540];
Oracle.DataAccess.Client.OracleConnection conn = new Oracle.DataAccess.Client.OracleConnection("User Id=login;Password=pws;Data Source=orcl;");
OracleCommand cmd = new OracleCommand("insert into dummy values (:p1,:p2)", conn);
cmd.ArrayBindCount = 2;
OracleParameter p1 = new OracleParameter("p1", OracleDbType.Int32);
p1.Direction = ParameterDirection.Input;
p1.Value = new int[] { 1, 2 };
cmd.Parameters.Add(p1);
OracleParameter p2 = new OracleParameter("p2", OracleDbType.Blob);
p2.Direction = ParameterDirection.Input;
p2.Value = new byte[][] { b1, b2 };
cmd.Parameters.Add(p2);
conn.Open(); cmd.ExecuteNonQuery(); conn.Close();

We can write the same thing at an abstract level using the DbProviderFactories (code B):
var factory = DbProviderFactories.GetFactory("Oracle.DataAccess.Client");
DbConnection conn = factory.CreateConnection();
conn.ConnectionString = "User Id=login;Password=pws;Data Source=orcl;";
DbCommand cmd = conn.CreateCommand();
cmd.CommandText = "insert into dummy values (:p1,:p2)";
((OracleCommand)cmd).ArrayBindCount = 2;
DbParameter param = cmd.CreateParameter();
param.ParameterName = "p1";
param.DbType = DbType.Int32;
param.Value = new int[] { 3, 4 };
cmd.Parameters.Add(param);
DbParameter param2 = cmd.CreateParameter();
param2.ParameterName = "p2";
param2.DbType = DbType.Binary;
param2.Value = new byte[][] { b1, b2 };
cmd.Parameters.Add(param2);
conn.Open(); cmd.ExecuteNonQuery(); conn.Close();

But this second code doesn't work: the second byte array is truncated to 4 bytes. It seems to be an Int16 overflow.
When DbType.Binary is used, Oracle maps it to OracleDbType.Raw and not OracleDbType.Blob, so the problem seems to be with the raw type. BUT if we use the same code without bulk insert, it works! So the problem is somewhere else...
Why use a DbConnection? To be able to switch easily to another database type.
Then why use "((OracleCommand)cmd).ArrayBindCount"? To be able to use specific functionality of each database.
I can fix the issue by casting the DbParameter to OracleParameter and setting the OracleDbType to Blob, but why does the second code not work with bulk insert while it works with a simple query?

BCP and BULK INSERT do not work the way you expect them to. What they do is consume fields in a round-robin fashion. That is, they first look for data for the first field, then for the second field, and so on.
So in your case, they will first read one byte, then 20 bytes etc until they have read the two bytes for field 122. At this point they will consume bytes until they have found a sequence of carriage return and line feed.
You say that some records in the file are incomplete. Say that there are only 60 fields in such a record. Field 61 is four bytes. BCP and BULK INSERT will now read data for field 61 as CR+LF plus the first two bytes of the next row. CR+LF has no special meaning; it is just data at this point.
You will have to write a program to parse the file, or use SSIS. But BCP and BULK INSERT are not your friends in this case.
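The round-robin consumption described above can be sketched in Python (fixed two-byte fields for simplicity; real BCP native format uses per-field lengths and terminators):

```python
# Sketch: BCP/BULK INSERT consume the file field by field. A short row does
# not resynchronize at CR+LF; the missing bytes are simply taken from the
# next row, because CR+LF is just data while a field is being read.
def consume(data: bytes, widths):
    rows, pos = [], 0
    while pos + sum(widths) <= len(data):
        row = []
        for w in widths:
            row.append(data[pos:pos + w])
            pos += w
        rows.append(row)
    return rows

# Two intended rows ("abc", "de"), but the reader sees fixed 2-byte fields:
print(consume(b"abc\r\nde\r\n", [2, 2]))
# -> [[b'ab', b'c\r'], [b'\nd', b'e\r']]  (CR/LF swallowed into fields)
```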
Erland Sommarskog, SQL Server MVP, [email protected] -
Bulk insert into oracle using ssis
Hi ,
Can someone please suggest a way to bulk insert data into an Oracle database? I'm using OLE DB, which doesn't support bulk insert into Oracle.
Please note I can't use Oracle Attunity as it requires Enterprise Edition, and I have only Standard Edition, so that option is ruled out.
Is there any other way that I can accompolish BULK insert?
Please help me out.
Thanks,
Prabhu

Hi Prabhu,
I am very late to help you solve this, but the following is the 'Bulk Insert into Oracle' solution that worked for me.
To use the code below in SSIS 2008 R2 in a Script Task component, you need the following API references.
Prerequisites:
1. C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.SQLServer.DTSRuntimeWrap.dll
2. Install "Oracle Data Provider For .NET 11.2.0.1.0" and add a reference to
Oracle.DataAccess.dll.
/*
Microsoft SQL Server Integration Services Script Task
Write scripts using Microsoft Visual C# 2008.
The ScriptMain is the entry point class of the script.

Description  : SQL to Oracle Bulk Copy/Insert
Created By   : Mitulkumar Brahmbhatt
Created Date : 08/14/2014
Modified Date    Modified By    Description
*/
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using Oracle.DataAccess.Client;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
using System.Data.OleDb;
namespace ST_6e18a76102dd4312868504c4ef95279d.csproj
{
    [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        #region VSTA generated code
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion

        public void Main()
        {
            ConnectionManager cm;
            IDTSConnectionManagerDatabaseParameters100 cmParams;
            OleDbConnection oledbConn;
            DataSet ds = new DataSet();
            string sql;
            try
            {
                /********** Pull SQL source data into a DataSet *************/
                cm = Dts.Connections["SRC_CONN"];
                cmParams = cm.InnerObject as IDTSConnectionManagerDatabaseParameters100;
                oledbConn = (OleDbConnection)cmParams.GetConnectionForSchema() as OleDbConnection;
                sql = @"Select * from [sourcetblname]";
                OleDbCommand sqlComm = new OleDbCommand(sql, oledbConn);
                OleDbDataAdapter da = new OleDbDataAdapter(sqlComm);
                da.Fill(ds);
                cm.ReleaseConnection(oledbConn);

                /***************** Bulk insert to Oracle *********************/
                cm = Dts.Connections["DEST_CONN"];
                cmParams = cm.InnerObject as IDTSConnectionManagerDatabaseParameters100;
                string connStr = ((OleDbConnection)cmParams.GetConnectionForSchema() as OleDbConnection).ConnectionString;
                cm.ReleaseConnection(oledbConn);
                sql = "destinationtblname";
                using (OracleBulkCopy bulkCopy = new OracleBulkCopy(connStr.Replace("Provider=OraOLEDB.Oracle.1", "")))
                {
                    bulkCopy.DestinationTableName = sql;
                    bulkCopy.BatchSize = 50000;
                    bulkCopy.BulkCopyTimeout = 20000;
                    bulkCopy.WriteToServer(ds.Tables[0]);
                }

                /***************** Return result - Success *********************/
                Dts.TaskResult = (int)ScriptResults.Success;
            }
            catch (Exception x)
            {
                Dts.Events.FireError(0, "BulkCopyToOracle", x.Message, String.Empty, 0);
                Dts.TaskResult = (int)ScriptResults.Failure;
            }
            finally
            {
                ds.Dispose();
            }
        }
    }
}
Mitulkumar Brahmbhatt | Please mark the post(s) that answered your question. -
Sub-SELECT in Bulk INSERT - Performance Clarification
I have 2 tables- emp_new & emp_old. I need to load all data from emp_old to emp_new. There is a transaction_id column in emp_new whose value needs to be fetched from a main_transaction table which also includes a Region Code column. Something like -
TRANSACTION_ID REGION_CODE
100 US
101 AMER
102 APAC
My bulk insert query looks like this -
INSERT INTO emp_new
  (col1,
   col2,
   transaction_id)
SELECT col1,
       col2,
       (SELECT transaction_id FROM main_transaction WHERE region_code = 'US')
FROM   emp_old
There would be millions of rows which need to be loaded in this way. I would like to know if the sub-SELECT to fetch the transaction_id would be re-executed for every row, which would be very costly, and I'm actually looking for a way to avoid this. The main_transaction table is a pre-loaded table and its values are not going to change. Is there a way (via some HINT) to indicate that the sub-SELECT should not get re-executed for every row?
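One common way to sidestep the per-row concern entirely is to move the lookup into the FROM clause as a join, so the constant transaction_id is fetched once by the plan rather than projected through a scalar subquery. (In practice Oracle also caches an uncorrelated scalar subquery, so the original form is typically evaluated only once anyway.) A sketch against the table names in the question:

```sql
-- Fetch the constant transaction_id once via a cross join,
-- instead of a scalar subquery in the SELECT list.
INSERT INTO emp_new (col1, col2, transaction_id)
SELECT o.col1,
       o.col2,
       t.transaction_id
FROM   emp_old o
       CROSS JOIN (SELECT transaction_id
                   FROM   main_transaction
                   WHERE  region_code = 'US') t;
```

This assumes the inner query returns exactly one row for region_code = 'US', matching the scalar-subquery semantics of the original statement.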
On a different note, the execution plan of the above bulk INSERT looks like -
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | INSERT STATEMENT | | 11M| 54M| 6124 (4)|
| 1 | INDEX FAST FULL SCAN| EMPO_IE2_IDX | 11M| 54M| 6124 (4)|
EMPO_IE2_IDX -> Index on emp_old
I'm surprised to see that the table main_transaction does not feature in the execution plan at all. Does this mean that the sub-SELECT will not get re-executed for every row? However, at least for the first read, I would assume that the table should appear in the plan.
Can someone help me in understanding this?

Dear,

"From 10.2, AUTOTRACE uses DBMS_XPLAN anyway"

Yes, but with the remark that it uses the estimated part of DBMS_XPLAN, i.e. explain plan for + select * from table(dbms_xplan.display);
Isn't it?
mhouri> cl scr
mhouri> desc t
Name Null? Type
ID VARCHAR2(10)
NAME VARCHAR2(100)
mhouri> set linesize 150
mhouri> var x number
mhouri> exec :x:=99999
PL/SQL procedure successfully completed.
mhouri> explain plan for
2 select sum(length(name)) from t where id > :x;
Explained.
mhouri> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1188118800
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 23 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 23 | | |
| 2 | TABLE ACCESS BY INDEX ROWID| T | 58 | 1334 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | I | 11 | | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
3 - access("ID">:X)
15 rows selected.
mhouri> set autotrace on
mhouri> select sum(length(name)) from t where id > :x;
SUM(LENGTH(NAME))
10146
Execution Plan
Plan hash value: 1188118800
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 23 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 23 | | |
| 2 | TABLE ACCESS BY INDEX ROWID| T | 58 | 1334 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | I | 11 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("ID">:X)
Statistics
0 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
232 bytes sent via SQL*Net to client
243 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
mhouri> set autotrace off
mhouri> select sum(length(name)) from t where id > :x;
SUM(LENGTH(NAME))
10146
mhouri> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
SQL_ID 7zm570j6kj597, child number 0
select sum(length(name)) from t where id > :x
Plan hash value: 1842905362
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 5 (100)| |
| 1 | SORT AGGREGATE | | 1 | 23 | | |
|* 2 | TABLE ACCESS FULL| T | 59 | 1357 | 5 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(TO_NUMBER("ID")>:X)
19 rows selected.
mhouri> spool off

Best regards
Mohamed Houri