Huge CSV File

I'm trying to read the first double value of every line in a gigantic CSV file in an efficient manner. Using readLine() creates a gigantic string that is discarded right after the first value is parsed out. This seems incredibly inefficient, and it's taking about 30 minutes just to complete an analysis of this file. Is there any way to just grab the values out?
I've tried reading one byte at a time, grabbing the first value until I reach a comma, and then reading byte by byte until I reach the end of the line. But this has the obvious disadvantage of reading byte by byte, with an inherent slowness to it.
Any solutions to this? Anything in NIO?
-Jason Thomas.

This works nicely:
http://ostermiller.org/utils/CSVLexer.html
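
If you'd rather not pull in a library, here's a minimal sketch of the scan-and-skip idea (the file name huge.csv is a placeholder, and it assumes every line begins with a numeric field). The trick is that single-character reads through a BufferedReader come out of an in-memory buffer, so you keep the simplicity of byte-by-byte scanning without the per-read I/O cost that makes an unbuffered stream so slow:

import java.io.*;

public class FirstValues {
    public static void main(String[] args) throws IOException {
        // 64 KB buffer: each read() below is served from memory, not the disk.
        try (Reader in = new BufferedReader(new FileReader("huge.csv"), 1 << 16)) {
            StringBuilder field = new StringBuilder();
            boolean pastFirstField = false;  // true once the first comma is seen
            int c;
            while ((c = in.read()) != -1) {
                if (c == '\n') {
                    if (field.length() > 0) {
                        double value = Double.parseDouble(field.toString().trim());
                        // ... feed value into the analysis here ...
                    }
                    field.setLength(0);      // reset for the next line
                    pastFirstField = false;
                } else if (c == ',') {
                    pastFirstField = true;   // skip the rest of the line
                } else if (!pastFirstField && c != '\r') {
                    field.append((char) c);  // still collecting the first field
                }
            }
            // (A final line without a trailing newline would need one more parse here.)
        }
    }
}

As for NIO: you can map the file with FileChannel.map and scan the resulting buffer the same way, but for a single sequential pass the buffered scan above is usually already disk-bound, so NIO tends not to buy much here.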

Similar Messages

  • Schema advice for huge csv file

    Guys, I need advice: I had a huge CSV file (500 million rows) to load into a table, and I did. But now I need to alter the columns (they all came in as varchar(50)). Changing just one column is taking ages... what kind of schema should I adopt? So far I applied
    a simple data flow, but I am wondering if I should do something like:
    drop table
    create table (all varchar)
    data flow
    alter table
    Not sure about it.

    Is this a one-off/ad-hoc load, or something that'll be ongoing/BAU?
    If it's ongoing, then Arthur's post is the standard approach:
    create a staging table with varchar(50) columns or whatever. Load into that, then from that staging table go into your 'normal' table that has the correct column types.
    If it's a one-off, what I'd do is create a new table with the correct data types and do a bulk insert from your table with 500mil rows,
    then drop the old table and rename the new table.
    Converting the columns in your 500mil table one by one is going to take a very long time; it'll be faster to do one bulk insert into a table with the correct schema.
    Jakub @ Adelaide, Australia Blog

  • While uploading a CSV file I am getting an error

    Hi All,
    I am trying to upload a huge CSV file and it is now giving the error below:
    ORA-01653: unable to extend table ADT.SCG_RECIEVABLES2 by 128 in tablespace APEX_1711125608495793205.
    Can any expert please explain what type of error this is and what I need to do to solve it?
    workspace name: ADT
    target table is SCG_RECIEVABLES2.
    Shyam

    Hi,
    The tablespace is full. Contact your DBA or extend the tablespace.
    First hit from Googling that ORA error:
    http://www.dbmotive.com/oracle_error_codes.php?errcode=01653
    BR,
    Jari

  • PowerShell: search one column of a CSV file and replace text in that column

    I have a huge CSV file.
    Column J has numbers which represent states.
    I would like to search through column J of output.csv and replace the number with the state name.
    J (before)    J (after)
    State         State
    233           NC
    219           TN
    233           NC
    210           SC
    I have tried several methods that seem to do nothing, or erase everything in my CSV file, or at best search every column, so if a phone number has 210 in it, it gets changed to SC.
    Can anyone point me in the right direction?
    Thanks!
    R White

    Thanks so much
    I gave it a try using this
    Import-Csv C:\temp\outfile.csv| ForEach-Object {
    if ($_.State.tostring() -like '256') { $_.State.tostring().replace('256', 'Somethingelse256')}
    if ($_.State.tostring() -like '257') { $_.State.tostring().replace('257', 'Somethingelse257xx')}
     } | export-csv C:\temp\outfileNEW.csv
    it produced a C:\temp\outfileNEW.csv
    with only a column A that had this all the way down it
    #TYPE System.String
    Length
    16
    16
    16
    16
    16
    16
    16
    R White

  • JDBC wrapper for CSV files?

    I wrote my own method to read CSV files into a table structure (String[][]). For big CSV files, I added several functions to ignore data lines that have specific values. All this looks quite similar to a database table that I do a select * on, reducing the resulting rows via specific WHERE clause criteria. So I wonder: is there already such a JDBC wrapper around CSV files?

    Yes. I believe the JDBC-ODBC bridge can use an Excel URL to read in a CSV. Though don't quote me on that one.
    However, why not simply use your RDBMS data-import utility? You can invoke it from a scheduler or from Runtime.exec(). It should perform MUCH better than middleware for a huge CSV file. If manipulation needs to occur for the data, write it first to a temp table, then manipulate it.
    - Saish
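
    To make Saish's Runtime.exec() suggestion concrete, here is a rough sketch using ProcessBuilder (the modern replacement for Runtime.exec()). The mysql command line and the LOAD DATA statement are only illustrative assumptions; substitute whatever import utility your RDBMS ships (sqlldr, bcp, etc.):

    import java.io.IOException;

    public class BulkLoadLauncher {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Hypothetical invocation of MySQL's command-line client to bulk-load a CSV.
            ProcessBuilder pb = new ProcessBuilder(
                "mysql", "-u", "loader", "-psecret", "mydb", "-e",
                "LOAD DATA LOCAL INFILE 'huge.csv' INTO TABLE staging FIELDS TERMINATED BY ','");
            pb.inheritIO();                        // show the loader's own output on our console
            int exitCode = pb.start().waitFor();   // block until the import finishes
            System.out.println("Import utility exited with code " + exitCode);
        }
    }

    Because the loader runs as a separate native process, the JVM never has to hold the CSV in memory at all.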

  • How can I make an easy *.CSV file to load into database table

    Hi All,
    I have a huge Excel sheet with columns item#, description, and qty. The description column may be a one-word name, a two-word name separated by a space, or a comma-separated name. I want to write PL/SQL code which will read this file and load it into a database table. The *.CSV file is either comma-delimited or tab-delimited text, neither of which solves my issue. Is there a better solution that avoids manual editing of the *.CSV file so I can easily load it into the table?
    Your help is appreciated,
    Thanks
    Zahir

    SQL*Loader is probably the fastest method, but since you specifically asked for a PL/SQL method:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:464420312302

  • Read CSV file into a 1-D array

    Hi
    I would like to read a csv file into a cluster of 4 elements which would then be read into a 1-D array.
    My cluster contains a typedef, a double, a boolean, and another typedef.
    Basically it could be seen as:
    Bob Runs, 4, T, Bob
    Mary sits, 5, F, Mary
    Bob Sits, 2, F, Bob
    Mary Runs, 9, T, Mary
    (keeps growing)
    Are there any good examples of what I am trying to put together that I could leverage, or is it better to use a different input file than a CSV? I am trying to make my program more flexible and easier to adjust even after the executable is created. My line items seem to be growing exponentially and are getting difficult to manage in the LV window.
    Thanks

    Unless your CSV file is huge, I'd use "Read from Spreadsheet File" with the delimiter set as "," and the type as string.  This will give you a 2D array of strings.  You could then separate out each column of the array, convert to the appropriate data type, and use Index & Bundle Cluster Array to build your array of clusters.  Something like this (except I'm using a string constant in place of reading from the file).

  • I would like to get only specific channels from several .csv files and concatenate into one group.

    Hello,
    I am working with other groups and getting the data in daily .csv files. When I use the "concatenate groups" script along with a script for importing files, I end up with a huge file that takes about an hour to concatenate. To reduce the time and memory that this takes, I was hoping that someone could help me modify the script so that I could just list the channel names I am interested in and concatenate only those channels rather than all of them.
    For example, if voltage, temperature, pressure and time data are taken daily for 30 days, I would like to import only the temperature and time data (from .csv format) and concatenate into one group.
    I have attached the .vbs files that I use.
    Thanks in advance,
    Alan
    Attachments:
    Import and concatenate files.zip (9 KB)

    Hi Alan,
    Actually, the feature you're asking for is already in the code of mine that you sent back. Look on line 11 of the main VBScript:    ChannelSet = "" ' "" or "1-" (DataPlugin) or "Sheet1" (EXCEL Wizard) 
    If this "ChannelSet" parameter is set to something other than "" or "1-" then it is used in line 78:
    Call DataFileLoadSel(FilePaths(i), DataPlugin, ChannelSet) 
    You can specify the channel indices to load with an expression like this:
    ChannelSet = "[1]/[1],[3]" 
    Let me know if you have further questions,
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments

  • Reading huge XML files in OSB 11gR1 (11.1.1.6.0)

    Hi,
    I want to read a huge XML file of size 1 GB in OSB (11.1.1.6.0).
    I will be creating a (JCA)file adapter in jdeveloper and importing artifacts to OSB.
    Please let me know the maximum file size that could be handled in OSB?
    Thanks in advance.
    Regards,
    Suresh

    Depends on what you intend to do after reading the file.
    Do you want to parse the file contents and may be do some transformation? Or do you just have to move the file from one place to another for ex. reading from local system and moving to a remote system using FTP?
    If you just have to move the file, I would suggest using JCA File/FTP adapter's Move operation.
    If you have to parse and process the file contents within OSB, then it may be possible depending on the file type and what logic you need to implement. For ex. for very large CSV files you can use JCA File Adapter batching to read a few records at a time.

  • 2.5 GB CSV file as data source for Crystal report

    Hi Experts,
    I was asked to create a Crystal Report using a CSV file as the data source (the file is pretty huge, 2.4 GB). Could you point me to any doc that explains the steps, mainly the data connectivity?
    Objective is to create Crystal Report using that csv file as data source, save the report as .rpt with the data and send the results to customer to be read with Crystal Reports Viewer or save the results to PDF.
    Please help and suggest the steps, as I am new to Crystal Reports and to CSV as a source.
    BR, Nanda Kishore

    Nanda,
    The issue of having some records with commas and some with semicolons will need to be resolved before you can do an import. Assuming that there are no semicolons in any of the text values of the report, you could do a "Find & Replace" to convert the semicolons to commas.
    If find & replace isn't an option, you'll need to get the files separately.
    I've never used the Import/Export Wizard myself; I've always used the BULK INSERT command.
    It would look something like this...
    BULK INSERT SQLServerTableName
    FROM 'c:\My_CSV_File.csv'
    WITH (FIELDTERMINATOR = ',')
    This of course implies that your table has the same columns, in the same order as the csv files and that each column is the correct data type to accept the incoming data.
    If you continue to have issues getting your data into SQL Server Express, please post in one of these two forums
    [Transact-SQL|http://social.msdn.microsoft.com/Forums/en-US/transactsql/threads]
    [SQL Server Express|http://social.msdn.microsoft.com/Forums/en-US/sqlexpress/threads]
    The Transact-SQL forum has some VERY knowledgeable people (including MVPs and book authors) posting answers.
    I've never posted to the SQL Server Express forum, but I'm sure they can troubleshoot your issues with the Import/Export Wizard.
    If you post in one of them, please copy the post link back to this thread so I can continue to help.
    Jason

  • Comparing SQL Data Results with CSV file contents

    I have the following scenario that I need to resolve and I'm unsure of how to approach it. Let me explain what I am needing and what I have currently done.
    I've created an application that automatically marks assessments that delegates complete by comparing SQL Data to CSV file data. I'm using C# to build the objects required that will load the data from SQL into a dataset which is then compared to the
    associated CSV file that contains the required results to mark against.
    Currently everything is working as expected but I've noticed that if there is a difference in the number of rows returned into the SQL-based dataset, then my application doesn't mark the items at all.
    Here is an example:
    Scenario: the CSV contains 4 rows with 8 columns of information; however, let's say that the delegate was only able to insert 2 rows of data into the dataset. When this happens it marks everything wrong, because while row 1 in both the CSV and the dataset was correct, row 2 in the dataset holds the results found in row 4 of the CSV file, yet it is compared against row 2 of the CSV file.
    How can I mark a row that does exist but not in the same position, so the delegate doesn't lose marks just because the row data in the dataset is not in exactly the same order as the row data in the CSV file?
    I'm at a loss and any assistance will be of huge help to me. I have implemented an ORDER BY clause in the dataset and ensured that the same order is set in the CSV file. This has helped for scenarios where there is the right number of rows in the dataset,
    but as soon as 1 row is missing from the dataset, the marking awards nothing for either row even if the data is correct.
    I hope I've made sense!! If not, let me know and I will provide a better description and perhaps examples of the dataset data and the csv data that is being compared.
    Thanks in advance....

    I would read the CSV into a datatable using OleDb. Below is code I wrote a few weeks ago to do this.
    Then you can compare the two datatables by a common primary key (like an ID number).
    Below is a webpage on comparing two datatables:
    http://stackoverflow.com/questions/10984453/compare-two-datatables-for-differences-in-c
    You can find lots of examples by performing the following Google search:
    "c# linq compare two datatable"
    // Creates a CSVReader class
    public class CSVReader
    {
        public DataSet ReadCSVFile(string fullPath, bool headerRow)
        {
            string path = fullPath.Substring(0, fullPath.LastIndexOf("\\") + 1);
            string filename = fullPath.Substring(fullPath.LastIndexOf("\\") + 1);
            DataSet ds = new DataSet();
            try
            {
                if (File.Exists(fullPath))
                {
                    string ConStr = string.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0}" + ";Extended Properties=\"Text;HDR={1};FMT=Delimited\"", path, headerRow ? "Yes" : "No");
                    string SQL = string.Format("SELECT * FROM {0}", filename);
                    OleDbDataAdapter adapter = new OleDbDataAdapter(SQL, ConStr);
                    adapter.Fill(ds, "TextFile");
                    ds.Tables[0].TableName = "Table1";
                    foreach (DataColumn col in ds.Tables["Table1"].Columns)
                        col.ColumnName = col.ColumnName.Replace(" ", "_");
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
            return ds;
        }
    }
    jdweng

  • SQL bulk copy from csv file - Encoding

    Hi Experts
    This is the first time I am creating a PowerShell script, and it is almost working. I just have some problems with the encoding on the actual bulk import to SQL from the text file, since it replaces
    special characters with a question mark. I have set the encoding when creating the CSV file, but that does not seem to carry over to the actual bulk import. I have tried different scenarios with the encoding part, but I cannot find the proper solution.
    To briefly outline what the script does:
    Connect to Active Directory, fetching all users but excluding users in specific OUs
    Export all users to a CSV in Unicode encoding
    Strip double-quote text qualifiers (if there is another way of handling that, it will be much appreciated)
    Clear all records in the temporary SQL table
    Import records from the CSV file into the temporary SQL table (this is where the encoding goes wrong)
    Update existing records in another table based on the records in the temporary table, and insert new records if not found.
    The script looks like the following (any suggestions for optimizing the script are very welcome):
    # CSV file variables
    $path = Split-Path -parent "C:\Temp\ExportADUsers\*.*"
    $filename = "AD_Users.csv"
    $csvfile = $path + "\" + $filename
    $csvdelimiter = ";"
    $firstRowColumns = $true
    # Active Directory variables
    $searchbase = "OU=Users,DC=fabrikam,DC=com"
    $ADServer = 'DC01'
    # Database variables
    $sqlserver = "DB02"
    $database = "My Database"
    $table = "tblADimport"
    $tableEmployee = "tblEmployees"
    # Initialize
    Write-Host "Script started..."
    $elapsed = [System.Diagnostics.Stopwatch]::StartNew()
    # GET DATA FROM ACTIVE DIRECTORY
    # Import the ActiveDirectory Module
    Import-Module ActiveDirectory
    # Get all AD users not in specified OU's
    Write-Host "Retrieving users from Active Directory..."
    $AllADUsers = Get-ADUser -server $ADServer `
    -searchbase $searchbase -Filter * -Properties * |
    ?{$_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' `
    -and $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'}
    Write-Host "Users retrieved in $($elapsed.Elapsed.ToString())."
    # Define labels and get specific user fields
    Write-Host "Generating CSV file..."
    $AllADUsers |
    Select-Object @{Label = "UNID";Expression = {$_.objectGuid}},
    @{Label = "FirstName";Expression = {$_.GivenName}},
    @{Label = "LastName";Expression = {$_.sn}},
    @{Label = "EmployeeNo";Expression = {$_.EmployeeID}} |
    # Export CSV file and remove text qualifiers
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    Write-Host "Removing text qualifiers..."
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Write-Host "CSV file created in $($elapsed.Elapsed.ToString())."
    # DATABASE IMPORT
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data")
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data.SqlClient")
    $batchsize = 50000
    # Delete all records in AD import table
    Write-Host "Clearing records in AD import table..."
    Invoke-Sqlcmd -Query "DELETE FROM $table" -Database $database -ServerInstance $sqlserver
    # Build the sqlbulkcopy connection, and set the timeout to infinite
    $connectionstring = "Data Source=$sqlserver;Integrated Security=true;Initial Catalog=$database;"
    $bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring, [System.Data.SqlClient.SqlBulkCopyOptions]::TableLock)
    $bulkcopy.DestinationTableName = $table
    $bulkcopy.bulkcopyTimeout = 0
    $bulkcopy.batchsize = $batchsize
    # Create the datatable and autogenerate the columns
    $datatable = New-Object System.Data.DataTable
    # Open the text file from disk
    $reader = New-Object System.IO.StreamReader($csvfile)
    $columns = (Get-Content $csvfile -First 1).Split($csvdelimiter)
    if ($firstRowColumns -eq $true) { $null = $reader.readLine()}
    Write-Host "Importing to database..."
    foreach ($column in $columns) {
        $null = $datatable.Columns.Add()
    }
    # Read in the data, line by line
    while (($line = $reader.ReadLine()) -ne $null) {
        $null = $datatable.Rows.Add($line.Split($csvdelimiter))
        $i++; if (($i % $batchsize) -eq 0) {
            $bulkcopy.WriteToServer($datatable)
            Write-Host "$i rows have been inserted in $($elapsed.Elapsed.ToString())."
            $datatable.Clear()
        }
    }
    # Add in all the remaining rows since the last clear
    if ($datatable.Rows.Count -gt 0) {
        $bulkcopy.WriteToServer($datatable)
        $datatable.Clear()
    }
    # Clean Up
    Write-Host "CSV file imported in $($elapsed.Elapsed.ToString())."
    $reader.Close(); $reader.Dispose()
    $bulkcopy.Close(); $bulkcopy.Dispose()
    $datatable.Dispose()
    # Sometimes the Garbage Collector takes too long to clear the huge datatable.
    [System.GC]::Collect()
    # Update tblEmployee with imported data
    Write-Host "Updating employee data..."
    $queryUpdateUsers = "UPDATE $($tableEmployee)
    SET $($tableEmployee).EmployeeNumber = $($table).EmployeeNo,
    $($tableEmployee).FirstName = $($table).FirstName,
    $($tableEmployee).LastName = $($table).LastName
    FROM $($tableEmployee) INNER JOIN $($table) ON $($tableEmployee).UniqueNumber = $($table).UNID
    IF @@ROWCOUNT=0
    INSERT INTO $($tableEmployee) (EmployeeNumber, FirstName, LastName, UniqueNumber)
    SELECT EmployeeNo, FirstName, LastName, UNID
    FROM $($table)"
    try {
        Invoke-Sqlcmd -ServerInstance $sqlserver -Database $database -Query $queryUpdateUsers
        Write-Host "Table $($tableEmployee) updated in $($elapsed.Elapsed.ToString())."
    }
    catch {
        Write-Host "An error occurred when updating $($tableEmployee) $($elapsed.Elapsed.ToString())."
    }
    Write-Host "Script completed in $($elapsed.Elapsed.ToString())."

    I can see that the Export-CSV exports into ANSI even though the encoding has been set to UNICODE. Thanks for leading me in the right direction.
    No - it exports as Unicode if set to.
    Your export was wrong and is exporting nothing. Look closely at your code.
    This line exports nothing in Unicode:
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    There is no input object.
    This line converts any file to ANSI:
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Set-Content defaults to ANSI, so the output file is converted.
    Since you are just dumping into a table by manually building a recordset, why not go direct? You do not need a CSV. Just dump the results of the query to a datatable.
    https://gallery.technet.microsoft.com/scriptcenter/4208a159-a52e-4b99-83d4-8048468d29dd
    This script dumps to a datatable object which can now be used directly in a bulkcopy.
    Here is an example of how easy this is using your script:
    $AllADUsers = Get-ADUser -server $ADServer -searchbase $searchbase -Filter * -Properties GivenName,SN,EmployeeID,objectGUID |
    Where{
        $_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' -and
        $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'
    } |
    Select-Object @{N='UNID';E={$_.objectGuid}},
    @{N='FirstName';Expression = {$_.GivenName}},
    @{N='LastName';Expression = {$_.sn}},
    @{N='EmployeeNo';Expression = {$_.EmployeeID}} |
    Out-DataTable
    $AllADUsers is now a datatable. You can just upload it.
    ¯\_(ツ)_/¯

  • Comparing 3 CSV files and generating output to a 4th one

    Hi,
    I was trying to compare 3 different CSV files using the common field EmplID and generate output combining all the CSVs. The fields in the CSVs are below:
    CSV1 : EmplID,HName,Name,PreferredName,Location,Department
    CSV2 : HName,EmplID,first_name,last_name,email
    CSV3 : Emplid,Extension
    I would like to generate the output CSV as below:
    OutputCSV :EmplID,Hname,Name,PreferredName,Location,Department,first_name,last_name,email,Extension
    The script below works, but as it compares the data row by row it takes a huge amount of time to complete. Can anybody suggest how I can improve its performance?
    $CSV1 = Import-CSV "Abc.CSV"
    $CSV2 = Import-CSV "DEF.CSV"
    $CSV3 = Import-CSV "GHI.CSV"
    $Merged = ForEach ($Record in $CSV1) {
        Add-Member -InputObject $Record -NotePropertyName 'first_name' -NotePropertyValue ($CSV2|Where{$_.EmplID -eq $Record.EmplID}|Select -Expand first_name)
        Add-Member -InputObject $Record -NotePropertyName 'last_name' -NotePropertyValue ($CSV2|Where{$_.EmplID -eq $Record.EmplID}|Select -Expand last_name)
        Add-Member -InputObject $Record -NotePropertyName 'email' -NotePropertyValue ($CSV2|Where{$_.EmplID -eq $Record.EmplID}|Select -Expand email)
        Add-Member -InputObject $Record -NotePropertyName 'Extension' -NotePropertyValue ($CSV3|Where{$_.EmplID -eq $Record.EmplID}|Select -Expand Extension) -PassThru
    }
    $Merged | Export-CSV C:\Path\To\New.CSV -NoTypeInfo

    Hi RP,
    you can do this by creating a dictionary that uses the Employee ID as key. This allows you to iterate over each CSV only once and add values to the correct object each time. I didn't test it, but this ought to do the trick:
    $CSV1 = Import-CSV "Abc.CSV"
    $CSV2 = Import-CSV "DEF.CSV"
    $CSV3 = Import-CSV "GHI.CSV"
    $Hash = @{ }
    foreach ($Record in $CSV1) {
        $Hash[$Record.EmplID] = $Record | Select EmplID, Hname, Name, PreferredName, Location, Department, first_name, last_name, email, Extension
    }
    foreach ($Record in $CSV2) {
        try {
            $Hash[$Record.EmplID].first_name = $Record.first_name
            $Hash[$Record.EmplID].last_name = $Record.last_name
            $Hash[$Record.EmplID].email = $Record.email
        }
        catch {
            Write-Warning "[Csv2]Failed to process $($Record.emplID): $($_.Exception.Message)"
        }
    }
    foreach ($Record in $CSV3) {
        try {
            $Hash[$Record.EmplID].Extension = $Record.Extension
        }
        catch {
            Write-Warning "[Csv3]Failed to process $($Record.emplID): $($_.Exception.Message)"
        }
    }
    $Hash.Values | Export-CSV C:\Path\To\New.CSV -NoTypeInfo
    Cheers,
    Fred
    There's no place like 127.0.0.1

  • Opening CSV file in ReadOnly while writing data to it.

    I am writing a huge amount of data to a CSV file. If I open the file in Excel in 'Read Only' or 'Notify' mode, my Java program throws an exception:
    java.io.IOException: The process cannot access the file because another process has locked a portion of the file
    This shouldn't be the case, as opening the file read-only in Excel shouldn't lock anything for writing.
    What I am doing is something like this:
    String wd = System.getProperty("user.dir");
    JFileChooser fc = new JFileChooser(wd);
    int rc = fc.showDialog(null, "Save File As");
    if (rc == JFileChooser.APPROVE_OPTION) {
        File file = fc.getSelectedFile();
        String strNewFileName = file.getAbsolutePath() + "." + "csv";
        File newfile = new File(strNewFileName);
        file.renameTo(newfile);
        try {
            Writer output = new BufferedWriter(new FileWriter(newfile));
            // fetch data from database
            output.write(TableHeader.toString() + "\t\n");
            output.close();
        } catch (....
    What could be the cause of this?
    Appreciate your help.
    Edited by: charuta on Dec 10, 2008 3:40 AM

    You can't have a file open in two processes in Windows, ever... it's a facet of Windows "simple file sharing"... which is not simple, and does not allow files to be shared.
    It is my considered opinion that Bill Gates should be publicly flogged to death with a fluffy pink shoelace.
    Cheers. Keith.

  • How can I update an existing item in SAP using a CSV file?

    Hi,
    I am trying to update an existing item in SAP using a CSV file.
    In the message log I get an error message that the item already exists.
    What should I do in order to update the existing record?
    Thanks, Udi

    Hi,
    I would suggest you use a tab-delimited file and choose the proper option in order to update the item master in DTW.
    Regards,
    Rahul
