First row record is not inserted from CSV file during bulk insert in SQL Server

Hi Everyone,
I have a CSV file that needs to be inserted into SQL Server. The CSV file format will be like below.
1,Mr,"x,y",4
2,Mr,"a,b",5
3,Ms,"v,b",6
During the bulk insert it considers the quoted column as two values (comma-separated) and makes two entries, so I used FieldTerminator.xml.
Now the fields are entered into the columns correctly. But the problem now is that the first row of the CSV file is not read into SQL Server. When I remove the format file, I get all the records, but I must use the format file described above, and when I do, I am not getting the first row record.
Please suggest a solution.
Thanks,
Selvam

Hi,
I have a comma(,) delimited CSV file that is to be inserted into SQL Server. The format of the file, when opened in Notepad, is like below:
Id,FirstName,LastName,FullName,Gender
1,xx,yy,"xx,yy",M
2,zz,cc,"zz,cc",F
3,aa,vv,"aa,vv",F
Below is the BULK INSERT query used to insert the above records:
EXEC('BULK INSERT EmployeeData FROM ''' + @FilePath + ''' WITH
     (FORMATFILE = ''d:\FieldTerminator.xml'',
      ROWTERMINATOR = ''\n'',
      FIRSTROW = 2)')
Here, I have used a format file for the "FullName" field, which has a comma(,) within it. The format file is along the lines of the sketch below.
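(A sketch only; the exact field lengths and data types below are assumptions.)
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <!-- Lengths are assumptions; the quotes around FullName are consumed by the terminators -->
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="12"/>
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="50"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR=',"' MAX_LENGTH="50"/>
    <FIELD ID="4" xsi:type="CharTerm" TERMINATOR='",' MAX_LENGTH="100"/>
    <FIELD ID="5" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="10"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="Id" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="FirstName" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="3" NAME="LastName" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="4" NAME="FullName" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="5" NAME="Gender" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>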
The problem is, it skips the first record (1,xx,yy,"xx,yy",M) when I use the format file. When I remove the format file from the query, it takes all the records, but then the "FullName" field causes a problem because of the comma(,) within the field. So I must use the format file to handle this. Please suggest why the first record is always skipped when I use the above format file.
If I give "FIRSTROW=1" in the bulk insert, it shows the error "String or binary data would be truncated. The statement has been terminated." I have checked the datatype lengths.
Please share a solution.
Regards,
Selvam. M

Similar Messages

  • Obtaining a rowcount from a csv file before bulk insert

    I have a group of CSV files that I am bulk inserting into my SQL table. They are of varying lengths, but the last seven lines of every file are not lines of data that comply with the format of my table, and they cause insert errors. I know I can
    specify a "LASTROW" value for the bulk insert, but my problem is that I need a rowcount before running the statement, and I can't seem to figure out a way of getting it with how I'm accessing the file. (Example below of what I'm doing without
    checking for the quantity of rows):
    CREATE TABLE ALLFILENAMES(WHICHPATH VARCHAR(255), WHICHFILE VARCHAR(255))
    --some variables
    declare @filename varchar(255),
            @path     varchar(255),
            @sql      varchar(8000),
            @cmd      varchar(1000),
            @lastrow  integer
    --get the list of files to process:
    SET @path = '\\chewbacca\PlantFloorData\RLData\'
    SET @cmd = 'dir ' + @path + '*.csv /b'
    print @path
    print @cmd
    INSERT INTO ALLFILENAMES(WHICHFILE)
    EXEC Master..xp_cmdShell @cmd
    UPDATE ALLFILENAMES SET WHICHPATH = @path WHERE WHICHPATH IS NULL
    --cursor loop
    declare c1 cursor for SELECT WHICHPATH, WHICHFILE FROM ALLFILENAMES WHERE WHICHFILE LIKE '%.csv%'
    open c1
    fetch next from c1 into @path, @filename
    while @@fetch_status <> -1
    begin
        set @lastrow = 3 -- for now, just to make sure the logic below works
        print @lastrow
        --bulk insert won't take a variable name, so build a SQL string and execute it instead:
        set @sql = 'BULK INSERT HMIDATA.dbo.RawData FROM ''' + @path + @filename + ''' '
                 + ' WITH (
                        FIELDTERMINATOR = '','',
                        ROWTERMINATOR = ''\n'',
                        FIRSTROW = 2,
                        LASTROW = ' + LTRIM(STR(@lastrow)) + ')'
        print @sql
        exec (@sql)
        fetch next from c1 into @path, @filename
    end
    close c1
    deallocate c1
    --Extras
    --delete from ALLFILENAMES where WHICHFILE is NULL
    --select * from ALLFILENAMES
    drop table ALLFILENAMES

    You could do a SELECT COUNT(*) FROM OPENROWSET(BULK), but you would need a format file that specifies a single-field record format with \r\n as the terminator. With the COUNT(*) you could determine the correct value for LASTROW (see the sketch below).
    You could also load the entire file into a two-column table where one column is an IDENTITY column and the other is the raw data from the line in the file, and then you could crack the good rows.
    ...or you could write a client-side program or an SSIS package for the task.
    Erland Sommarskog, SQL Server MVP, [email protected]
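    For illustration, a minimal sketch of the COUNT(*) approach (the format file rowcount.fmt, its path, and the sample file name are assumptions, not part of the thread):
    -- rowcount.fmt treats each line as one single field terminated by \r\n:
    --   12.0
    --   1
    --   1  SQLCHAR  0  8000  "\r\n"  1  line  ""
    DECLARE @lastrow int
    SELECT @lastrow = COUNT(*) - 7    -- the last seven lines of each file are not data
    FROM OPENROWSET(BULK '\\chewbacca\PlantFloorData\RLData\somefile.csv',
                    FORMATFILE = '\\chewbacca\PlantFloorData\rowcount.fmt') AS f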

  • Loading 361000 records at a time from csv file

    Hi,
    One of my colleagues loaded 361,000 records from one file. How is this possible, as Excel accepts only 65,536 records in one file?
    Also, in the InfoPackage the following are selected. What do these mean?
    Data Separator   ;
    Escape Sign      "
    Separator for Thousands   .
    Character Used for Decimal Point   ,
    Please let me know.

    hi Maya,
    it is quite possible. Other than MS Excel, we have editors like TextPad (and Windows Notepad) that support more than 65k rows; the file may have been generated by a program or edited outside Excel, or a newer version of Excel was used. MS Excel 2007 supports more than 1 million rows.
    e.g. we have this CSV file:
    customer;product;quantity;revenue
    a;x;"1.250,25";200
    b;y;"5.5";300
    data separator ;
    - the character/delimiter used to separate fields
    escape sign "
    - with "1.250,25";200, quantity = 1.250,25
    separator for thousands .
    - 1.250,25 means one thousand two hundred fifty point two five
    character used for decimal point ,
    - the , in 1.250,25
    check
    http://help.sap.com/saphelp_nw70/helpdata/en/80/1a6581e07211d2acb80000e829fbfe/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/c2/678e3bee3c9979e10000000a11402f/frameset.htm
    hope this helps.

  • How to update Records of SAP table from .CSV file

    Hi,
    I have written code which takes data from a comma(,) delimited CSV file and adds it to an internal table.
    Now I want to update all the fields in the SAP table from the internal table.
    I want to use the UPDATE statement:
    UPDATE <table name> SET <fields to update> WHERE <condition>.
    I don't want to iterate through thousands of records in the SAP table to check the WHERE condition.
    Could you please tell me how to do it?

    Hi. I think you will not work around iterating the internal table.
    You can pre-load all the records into another internal table:
    CHECK lt_csv[] IS NOT INITIAL. " important, otherwise the next SELECT would read all records of the table
    SELECT ... INTO TABLE lt_dbitab FOR ALL ENTRIES IN lt_csv WHERE key_fields = lt_csv-key_fields...
    CHECK sy-subrc EQ 0 AND lt_dbitab[] IS NOT INITIAL.
    Then do an in-memory update of lt_dbitab:
    LOOP AT lt_dbitab ASSIGNING <fs>.
      READ TABLE lt_csv ASSIGNING <fs_csv> WITH KEY ... " lt_csv should be a sorted table with key, or you should try BINARY SEARCH
      IF sy-subrc EQ 0.
        ...change the required lt_dbitab fields: <fs>-comp = <fs_csv>-comp...
      ENDIF.
    ENDLOOP.
    And then you can do the mass update:
    UPDATE dbtab FROM TABLE lt_dbitab.
    From a performance view, this solution should be much faster than iterating lt_csv directly and updating every single database record.
    Br
    Bohuslav

  • How to import csv file with multiple tables into sql server

    I have multiple CSV files, each with one sheet but 130 headers, with each header having different data.
    I'd like to import each one of these header rows with its data into its own file in SQL Server.
    I know very basic SSIS, but I am not familiar with the scripting in it, which is what I assume I'd have to use.
    Each header in the CSV file is structured as such (also see example pic):
    first header would be this:
        ITEM = ORG_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    second header would be this:
        ITEM = TER_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    The headers can be at any random row number, and the data size in each file differs, but they all start with "ITEM =" and then, in the next row, "DATE =".
    I could also convert these to Excel files if it makes this process easier.

    Why don't you put a filter on D3, filter out the blanks, copy/paste to a new CSV file, save it, and import it?
    There's no way you're going to get SQL to do that kind of thing for you. The language is for set-based operations, not for complex data manipulation tasks.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

  • Import From CSV File statement runs forever, no error, does not finish

    Hello,
    I am trying to import a CSV file from a Java program, with the following statement:
    IMPORT FROM CSV FILE '/debug/testdatabase/FILE.csv'
    INTO "JOSEPH"."TEST_TABLE"
    WITH COLUMN LIST IN FIRST ROW
    RECORD DELIMITED BY '\n'
    FIELD DELIMITED BY '\t' ERROR LOG '/debug/testdatabase/file.err'
    THREADS 10
    BATCH 10000
    I have two HANA instances on different machines A and B:
    Both machines run HANA version 1.00.74.00.389160 (NewDB100_REL), while the OS is
    SUSE Linux Enterprise Server 11.1 on machine A and
    SUSE Linux Enterprise Server 11.2 on machine B.
    The statement above runs fine on machine A, and the rows are imported properly from Java as well as when executed from the HANA Studio SQL console.
    If I copy the file to machine B and try the exact same statement with the same file, it does not finish (neither from Java nor from the HANA Studio SQL console). There is no error either. It cannot be cancelled; only a HANA restart stops the statement. The sample file I use has only 2 rows, and memory does not seem to be a problem.
    I seem to have a similar problem to the one described here, but the answers there do not help me: http://scn.sap.com/thread/3396582 I specified the record delimiter, and I used a Python script to check for any strange characters that are not supposed to be there, but didn't find any.
    If I copy the file to my Windows PC and use the "File Menu -> Import -> SAP Hana Content -> Data from Local file" function, it imports the file correctly into B, but I need to be able to do it from Java.
    Machine A administration view:
    Machine B administration view:
    If you have any idea what might cause this behavior or where I can find more information on this problem please give me a hint.

    Hi Joseph,
    First, from the pics, the revision of your SAP HANA instance is 73 instead of 74. Since I have no identical environment, I cannot test it for you. But can you try the simplest scenario? Create a table with only one column and try to import a CSV file with only one row (see the sketch below).
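    As a rough sketch of that minimal test (the table and file names are assumptions), using the same IMPORT syntax as above:
    CREATE COLUMN TABLE "JOSEPH"."IMPORT_TEST" ("A" VARCHAR(10));
    IMPORT FROM CSV FILE '/debug/testdatabase/one_row.csv'
    INTO "JOSEPH"."IMPORT_TEST"
    WITH RECORD DELIMITED BY '\n'
    FIELD DELIMITED BY '\t'
    ERROR LOG '/debug/testdatabase/one_row.err';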
    Best regards,
    Wenjun

  • Bulk Insert into a Table from CSV file

    I have a CSV file with 1000 records, and I have to insert those records into a table.
    I tried the Bulk Insert command and the Load Data Infile command, but they throw errors.
    I am using Oracle 10g Express Edition.
    I want to achieve it through a query command and not by PL/SQL procedures.
    Please send me the query syntax for this problem.
    Thanks in Advance,
    Hariharan ST.

    Hi
    If you create an external table that points to your CSV file, you will then be able to populate your table from a query (see the sketch below).
    See: http://www.astral-consultancy.co.uk/cgi-bin/hunbug/doco.cgi?11210
    Hope this helps
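    For illustration, a minimal external-table sketch (the directory path, file name, and columns are assumptions):
    -- A directory object must point at the folder holding the CSV file
    CREATE OR REPLACE DIRECTORY csv_dir AS '/data/csv';
    CREATE TABLE csv_ext (
      id   NUMBER,
      name VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY csv_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      )
      LOCATION ('data.csv')
    );
    -- Populate the existing table from a query, as suggested above
    INSERT INTO target_table (id, name)
    SELECT id, name FROM csv_ext;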

  • Loading records from .csv file to SAP table via SAP Program

    Hi,
    I have a .csv file with 132,869 records, and I am trying to load it into an SAP table with a customized SAP program.
    After executing the program, only 99,999 records are loaded into the table.
    Is there some setting that defines how many records can be loaded into a table? Or what else could be the problem?
    Please advise.
    Thanks!!!

    hi Arun,
    A DataSource needs an extract structure to fetch data. It is nothing but a temp table to hold data.
    First you need to create a table in SE11 with fields matching those coming from the CSV file.
    Then you need to write a report program to read your CSV file and populate your table in BW.
    Then you can create a DataSource on top of this table.
    After that, replicate and load data into the PSA and use it in the upper flow.
    Regards,
    Jaya Tiwari

  • Delta records are not loading from DSO to info cube

    My query is about delta loading from a DSO to an InfoCube (a filter is used in the selection).
    Delta records are not loading from the DSO to the InfoCube. I have tried all the options available in the DTP, but no luck.
    I selected "Change log" and "Get one request only" and ran the DTP, but 0 records got updated in the InfoCube.
    I selected "Change log" and "Get all new data request by request", but again 0 records got updated.
    I selected "Change log" and "Only get the delta once"; in that case all delta records loaded to the InfoCube as they were in the DSO, but it gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, data loads from the DSO to the InfoCube.
    Can anyone please help me get the delta records from the DSO to the InfoCube?
    Thanks,
    Shamma

    Data is loading in the case of a full load with the same filter, so I don't think the filter is an issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run init, the final status remains yellow, and when I change the status to green manually, it gives the lock table overflow error.
    When I change the settings of the DTP to an init run:
    1. Select change log and get only one request, and run the init; it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • The name of my wireless network from my modem router is not read by the iPhone, while it was before. My iPad does not read the network, but I can surf the internet. Why? What can I do?

    The name of my wireless network from my modem router is not read by the iPhone, while it was before. My iPad does not read the network, but I can surf the internet. Why? What can I do?

    Can I use this DIR-635 to extend the WiFi network of the "Belgacom" WiFi?
    To answer this question correctly, a user would need to have a DIR-635 router connected by Ethernet to a Belgacom router.
    If you think about it, the chances of another user who has these two same devices configured this way and also being on an Apple support forum to see your post are about zero.
    The following might work, but you will not know until you try.
    Configure the DIR-635 to provide a wireless network that uses the exact same wireless network name, exact same wireless security and exact same password as the Belgacom wireless router. Make sure that the DIR-635 is configured in Bridge Mode.
    Can I purchase and set-up an Apple Airport Express or Apple Airport Extreme on my level 1 (the ethernet cable running from level 2 to level 1 could plug-into either of those)
    The AirPort Express or AirPort Extreme would need to be configured exactly the same way as mentioned above.
    (What is the difference in the 2 -- one is about twice the price of the other…)
    http://www.apple.com/wifi/

  • How to retrieve rows from .csv file

    Hi
    I need to retrieve rows from a .csv file; anybody's kind support is greatly appreciated.
    Thanks in advance.
    Khiz_eng

    A .csv file is just a comma-delimited text file; each row contains data separated by commas. To read this information in, do as Hartmut suggested in your earlier post on this exact topic: "I would read that file with a FileReader / BufferedReader and parse each line with StringTokenizer. This would be quite the same as working through a result set." For more info on how to read text files, see any Java book or browse through the I/O tutorial.
    Jamie

  • SQL bulk copy from csv file - Encoding

    Hi Experts
    This is the first time I am creating a PowerShell script, and it is almost working. I just have some problems with the encoding of the actual bulk import to SQL from the text file, since it replaces
    special characters with a question mark. I have set the encoding when creating the CSV file, but that does not seem to carry over to the actual bulk import. I have tried different scenarios for the encoding part, but I cannot find the proper solution.
    To shortly outline what the script does:
    Connect to Active Directory fetching all user - but excluding users in specific OU's
    Export all users to a csv in unicode encoding
    Strip double quote text identifiers (if there is another way of handling that it will be much appreciated)
    Clear all records temporary SQL table
    Import records from csv file to temporary SQL table (this is where the encoding is wrong)
    Update existing records in another table based on the records in the temporary table and insert new record if not found.
    The script looks like the following (any suggestions for optimizing the script are very welcome):
    # CSV file variables
    $path = Split-Path -parent "C:\Temp\ExportADUsers\*.*"
    $filename = "AD_Users.csv"
    $csvfile = $path + "\" + $filename
    $csvdelimiter = ";"
    $firstRowColumns = $true
    # Active Directory variables
    $searchbase = "OU=Users,DC=fabrikam,DC=com"
    $ADServer = 'DC01'
    # Database variables
    $sqlserver = "DB02"
    $database = "My Database"
    $table = "tblADimport"
    $tableEmployee = "tblEmployees"
    # Initialize
    Write-Host "Script started..."
    $elapsed = [System.Diagnostics.Stopwatch]::StartNew()
    # GET DATA FROM ACTIVE DIRECTORY
    # Import the ActiveDirectory Module
    Import-Module ActiveDirectory
    # Get all AD users not in specified OU's
    Write-Host "Retrieving users from Active Directory..."
    $AllADUsers = Get-ADUser -server $ADServer `
    -searchbase $searchbase -Filter * -Properties * |
    ?{$_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' `
    -and $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'}
    Write-Host "Users retrieved in $($elapsed.Elapsed.ToString())."
    # Define labels and get specific user fields
    Write-Host "Generating CSV file..."
    $AllADUsers |
    Select-Object @{Label = "UNID";Expression = {$_.objectGuid}},
    @{Label = "FirstName";Expression = {$_.GivenName}},
    @{Label = "LastName";Expression = {$_.sn}},
    @{Label = "EmployeeNo";Expression = {$_.EmployeeID}} |
    # Export CSV file and remove text qualifiers
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    Write-Host "Removing text qualifiers..."
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Write-Host "CSV file created in $($elapsed.Elapsed.ToString())."
    # DATABASE IMPORT
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data")
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data.SqlClient")
    $batchsize = 50000
    # Delete all records in AD import table
    Write-Host "Clearing records in AD import table..."
    Invoke-Sqlcmd -Query "DELETE FROM $table" -Database $database -ServerInstance $sqlserver
    # Build the sqlbulkcopy connection, and set the timeout to infinite
    $connectionstring = "Data Source=$sqlserver;Integrated Security=true;Initial Catalog=$database;"
    $bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring, [System.Data.SqlClient.SqlBulkCopyOptions]::TableLock)
    $bulkcopy.DestinationTableName = $table
    $bulkcopy.bulkcopyTimeout = 0
    $bulkcopy.batchsize = $batchsize
    # Create the datatable and autogenerate the columns
    $datatable = New-Object System.Data.DataTable
    # Open the text file from disk
    $reader = New-Object System.IO.StreamReader($csvfile)
    $columns = (Get-Content $csvfile -First 1).Split($csvdelimiter)
    if ($firstRowColumns -eq $true) { $null = $reader.readLine()}
    Write-Host "Importing to database..."
    foreach ($column in $columns) {
        $null = $datatable.Columns.Add()
    }
    # Read in the data, line by line
    while (($line = $reader.ReadLine()) -ne $null) {
        $null = $datatable.Rows.Add($line.Split($csvdelimiter))
        $i++
        if (($i % $batchsize) -eq 0) {
            $bulkcopy.WriteToServer($datatable)
            Write-Host "$i rows have been inserted in $($elapsed.Elapsed.ToString())."
            $datatable.Clear()
        }
    }
    # Add in all the remaining rows since the last clear
    if ($datatable.Rows.Count -gt 0) {
        $bulkcopy.WriteToServer($datatable)
        $datatable.Clear()
    }
    # Clean Up
    Write-Host "CSV file imported in $($elapsed.Elapsed.ToString())."
    $reader.Close(); $reader.Dispose()
    $bulkcopy.Close(); $bulkcopy.Dispose()
    $datatable.Dispose()
    # Sometimes the Garbage Collector takes too long to clear the huge datatable.
    [System.GC]::Collect()
    # Update tblEmployee with imported data
    Write-Host "Updating employee data..."
    $queryUpdateUsers = "UPDATE $($tableEmployee)
    SET $($tableEmployee).EmployeeNumber = $($table).EmployeeNo,
    $($tableEmployee).FirstName = $($table).FirstName,
    $($tableEmployee).LastName = $($table).LastName
    FROM $($tableEmployee) INNER JOIN $($table) ON $($tableEmployee).UniqueNumber = $($table).UNID
    IF @@ROWCOUNT=0
    INSERT INTO $($tableEmployee) (EmployeeNumber, FirstName, LastName, UniqueNumber)
    SELECT EmployeeNo, FirstName, LastName, UNID
    FROM $($table)"
    try {
        Invoke-Sqlcmd -ServerInstance $sqlserver -Database $database -Query $queryUpdateUsers
        Write-Host "Table $($tableEmployee) updated in $($elapsed.Elapsed.ToString())."
    }
    catch {
        Write-Host "An error occurred when updating $($tableEmployee) $($elapsed.Elapsed.ToString())."
    }
    Write-Host "Script completed in $($elapsed.Elapsed.ToString())."

    I can see that the Export-CSV exports into ANSI though the encoding has been set to UNICODE. Thanks for leading me in the right direction.
    No - it exports as Unicode if set to.
    Your export was wrong and is exporting nothing. Look closely at your code:
    This line exports nothing in Unicode:
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    There is no input object.
    This line converts any file to ANSI:
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Set-Content defaults to ANSI, so the output file is converted.
    Since you are just dumping into a table by manually building a recordset, why not just go direct? You do not need a CSV. Just dump the results of the query to a DataTable.
    https://gallery.technet.microsoft.com/scriptcenter/4208a159-a52e-4b99-83d4-8048468d29dd
    This script dumps to a datatable object which can now be used directly in a bulkcopy.
    Here is an example of how easy this is using your script:
    $AllADUsers = Get-ADUser -server $ADServer -searchbase $searchbase -Filter * -Properties GivenName,SN,EmployeeID,objectGUID |
        Where{
            $_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' -and
            $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'
        } |
        Select-Object @{N='UNID';E={$_.objectGuid}},
            @{N='FirstName';E={$_.GivenName}},
            @{N='LastName';E={$_.sn}},
            @{N='EmployeeNo';E={$_.EmployeeID}} |
        Out-DataTable
    $AllADUsers is now a DataTable. You can just upload it.
    ¯\_(ツ)_/¯

  • Loading data from .csv file into existing table

    Hi,
    I have taken a look at several threads which talk about loading data from a .csv file into an existing/new table, and I have also checked out Vikas's application regarding the same. I am trying to explain my requirement with an example.
    I have a .csv file, and I want the data to be loaded into an existing table. The timesheet table columns are:
    timesheet_entry_id, time_worked, timesheet_date, project_key.
    The CSV columns are:
    project, utilization, project_key, timesheet_category, employee, timesheet_date, hours_worked, etc.
    What I needed to know is whether, before the CSV data is loaded into the timesheet table, there is any way of validating the project_key (which is the primary key of the projects table) against the projects table. I need to perform similar validations with other columns, like customer_id from the customers table. Basically, the loading should be done after validating that the data exists in the parent table. Has anyone done this kind of loading through the APEX data-load utility? Or is there another method of accomplishing the same?
    Does Vikas's application do what the utility does (I am assuming that, the code being from 2005, the utility was not incorporated in APEX at that time)? Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads which might outline different ways to do it -
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
    You can also create hidden items on the page to validate records before inserting the data (a SQL-level sketch follows below).
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com
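    As a rough SQL-level sketch of the parent-table validation described above (the staging table timesheet_stage is an assumption; the column names come from the question, and timesheet_entry_id is assumed to come from a sequence or trigger):
    -- Load the CSV into a staging table first, then copy across only rows
    -- whose project_key exists in the parent projects table
    INSERT INTO timesheet (time_worked, timesheet_date, project_key)
    SELECT s.hours_worked, s.timesheet_date, s.project_key
    FROM   timesheet_stage s
    WHERE  EXISTS (SELECT 1 FROM projects p WHERE p.project_key = s.project_key);
    Similar EXISTS checks can be added for customer_id against the customers table.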

  • Display data from CSV file in iWeb page

    Hi,
    I would like to display data from a CSV file on an iWeb page if a date value from the CSV file matches today's date from the system. Here is an example.
    CSV data values
    01/20/2011,Sunny,87
    01/21/2011,Cloudy,100
    01/22/2011,Rainy,60
    If today's date value is 01/21/2011, the page should display "01/21/2011 Cloudy 100" in a tabular format.
    I would appreciate your help in providing HTML code for this.
    Thanks

    I suspect there is a soft return in the Excel database somewhere that can't be seen. Take the csv/txt file into Notepad and look for a line that starts oddly compared to the others.
    I haven't had luck removing soft returns from Excel files, so I do this a rather odd way: I take the Excel file into InDesign as a table, use find/change to replace any soft returns with nothing, convert the text to a table, and then export the text out again by going Export and selecting Text from the dropdown menu.
    For my money, I always save tab-delimited text files from Excel so that if a field does contain commas, it doesn't "trick" InDesign into thinking a new field is beginning; instead the field delimiters are tabs, and they are unlikely to have been used in the Excel database.
    If you do choose to use this InDesign import method of mine to clean up the database, I also noticed two things in your screengrab: first, some fields have spaces at the start of the text, which is easy enough to fix with a GREP that looks for ^\s (start of a sentence followed by a space) and replaces it with nothing. The second thing is the T&C field, where all entries (at least in the screengrab) start the same; if all entries in the database start the same, couldn't that line be in the InDesign file? It's only a small detail, I know.

  • How to load the data from .csv file to oracle table???

    Hi,
    I am using Oracle 10g and PL/SQL Developer. Can anyone help me with how to load data from a .csv file into an Oracle table? The table is already created with the required columns. The .csv file has about 10 lakh (1,000,000) records. Is it possible to load 10 lakh records? Can anyone please tell me how to proceed?
    Thanks in advance

    981145 wrote:
    Can you tell more about SQL*Loader? How do I know whether that utility is available for me or not? I am using an Oracle 10g database and PL/SQL Developer.
    SQL*Loader is part of the Oracle client. If you have a developer installation, you should normally have it on your client.
    The command is sqlldr. Type it and see if you have it installed (a minimal example is sketched below).
    Have a look also at the FAQ link posted by Marwin.
    There are plenty of examples also on the web.
    Regards.
    Al
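    For illustration, a minimal SQL*Loader control file and invocation (the file names, table name, and columns are assumptions):
    -- load.ctl
    LOAD DATA
    INFILE 'data.csv'
    APPEND INTO TABLE my_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (col1, col2, col3)
    Then run it from the command line:
    sqlldr userid=scott/tiger control=load.ctl log=load.log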
