SQL bulk copy from csv file - Encoding

Hi Experts
This is the first time I am creating a PowerShell script, and it is almost working. I just have a problem with the encoding of the bulk import to SQL from the text file, since it replaces
special characters with a question mark. I have set the encoding when creating the csv file, but that does not seem to carry through to the actual bulk import. I have tried different scenarios with the encoding part, but I cannot find the proper solution.
To shortly outline what the script does:
Connect to Active Directory and fetch all users, excluding users in specific OUs
Export all users to a csv file in Unicode encoding
Strip the double-quote text qualifiers (if there is another way of handling that, it would be much appreciated)
Clear all records in the temporary SQL table
Import the records from the csv file into the temporary SQL table (this is where the encoding goes wrong)
Update existing records in another table based on the records in the temporary table, and insert new records where none are found.
The script looks as follows (any suggestions for optimizing it are very welcome):
# CSV file variables
$path = Split-Path -parent "C:\Temp\ExportADUsers\*.*"
$filename = "AD_Users.csv"
$csvfile = $path + "\" + $filename
$csvdelimiter = ";"
$firstRowColumns = $true
# Active Directory variables
$searchbase = "OU=Users,DC=fabrikam,DC=com"
$ADServer = 'DC01'
# Database variables
$sqlserver = "DB02"
$database = "My Database"
$table = "tblADimport"
$tableEmployee = "tblEmployees"
# Initialize
Write-Host "Script started..."
$elapsed = [System.Diagnostics.Stopwatch]::StartNew()
# GET DATA FROM ACTIVE DIRECTORY
# Import the ActiveDirectory Module
Import-Module ActiveDirectory
# Get all AD users not in specified OU's
Write-Host "Retrieving users from Active Directory..."
$AllADUsers = Get-ADUser -server $ADServer `
-searchbase $searchbase -Filter * -Properties * |
?{$_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' `
-and $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'}
Write-Host "Users retrieved in $($elapsed.Elapsed.ToString())."
# Define labels and get specific user fields
Write-Host "Generating CSV file..."
$AllADUsers |
Select-Object @{Label = "UNID";Expression = {$_.objectGuid}},
@{Label = "FirstName";Expression = {$_.GivenName}},
@{Label = "LastName";Expression = {$_.sn}},
@{Label = "EmployeeNo";Expression = {$_.EmployeeID}} |
# Export CSV file and remove text qualifiers
Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
Write-Host "Removing text qualifiers..."
(Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
Write-Host "CSV file created in $($elapsed.Elapsed.ToString())."
# DATABASE IMPORT
[void][Reflection.Assembly]::LoadWithPartialName("System.Data")
[void][Reflection.Assembly]::LoadWithPartialName("System.Data.SqlClient")
$batchsize = 50000
# Delete all records in AD import table
Write-Host "Clearing records in AD import table..."
Invoke-Sqlcmd -Query "DELETE FROM $table" -Database $database -ServerInstance $sqlserver
# Build the sqlbulkcopy connection, and set the timeout to infinite
$connectionstring = "Data Source=$sqlserver;Integrated Security=true;Initial Catalog=$database;"
$bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring, [System.Data.SqlClient.SqlBulkCopyOptions]::TableLock)
$bulkcopy.DestinationTableName = $table
$bulkcopy.bulkcopyTimeout = 0
$bulkcopy.batchsize = $batchsize
# Create the datatable and autogenerate the columns
$datatable = New-Object System.Data.DataTable
# Open the text file from disk
$reader = New-Object System.IO.StreamReader($csvfile)
$columns = (Get-Content $csvfile -First 1).Split($csvdelimiter)
if ($firstRowColumns -eq $true) { $null = $reader.readLine()}
Write-Host "Importing to database..."
foreach ($column in $columns) { $null = $datatable.Columns.Add() }
# Read in the data, line by line
while (($line = $reader.ReadLine()) -ne $null) {
    $null = $datatable.Rows.Add($line.Split($csvdelimiter))
    $i++; if (($i % $batchsize) -eq 0) {
        $bulkcopy.WriteToServer($datatable)
        Write-Host "$i rows have been inserted in $($elapsed.Elapsed.ToString())."
        $datatable.Clear()
    }
}
# Add in all the remaining rows since the last clear
if ($datatable.Rows.Count -gt 0) {
    $bulkcopy.WriteToServer($datatable)
    $datatable.Clear()
}
# Clean Up
Write-Host "CSV file imported in $($elapsed.Elapsed.ToString())."
$reader.Close(); $reader.Dispose()
$bulkcopy.Close(); $bulkcopy.Dispose()
$datatable.Dispose()
# Sometimes the Garbage Collector takes too long to clear the huge datatable.
[System.GC]::Collect()
# Update tblEmployee with imported data
Write-Host "Updating employee data..."
$queryUpdateUsers = "UPDATE $($tableEmployee)
SET $($tableEmployee).EmployeeNumber = $($table).EmployeeNo,
    $($tableEmployee).FirstName = $($table).FirstName,
    $($tableEmployee).LastName = $($table).LastName
FROM $($tableEmployee) INNER JOIN $($table) ON $($tableEmployee).UniqueNumber = $($table).UNID
IF @@ROWCOUNT=0
INSERT INTO $($tableEmployee) (EmployeeNumber, FirstName, LastName, UniqueNumber)
SELECT EmployeeNo, FirstName, LastName, UNID
FROM $($table)"
try {
    Invoke-Sqlcmd -ServerInstance $sqlserver -Database $database -Query $queryUpdateUsers
    Write-Host "Table $($tableEmployee) updated in $($elapsed.Elapsed.ToString())."
}
catch {
    Write-Host "An error occurred when updating $($tableEmployee) after $($elapsed.Elapsed.ToString())."
}
Write-Host "Script completed in $($elapsed.Elapsed.ToString())."

I can see that Export-Csv exports as ANSI even though the encoding has been set to Unicode. Thanks for leading me in the right direction.
No - it exports as Unicode if set to.
Your export was wrong and is exporting nothing. Look closely at your code:
This line exports nothing in Unicode:
Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
There is no input object.
This line converts any file to ANSI:
(Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
Set-Content defaults to ANSI, so the output file is converted.
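If the quote-stripping pass has to stay, keeping the file in Unicode is a one-parameter change; a minimal sketch (in Windows PowerShell, -Encoding Unicode means UTF-16 LE, the same encoding Export-Csv wrote):
# Re-write the file in the encoding Export-Csv produced,
# instead of letting Set-Content fall back to ANSI.
(Get-Content $csvfile) | ForEach-Object { $_ -replace '"' } |
    Set-Content $csvfile -Encoding Unicode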
Since you are just dumping into a table by manually building a recordset, why not go direct? You do not need a CSV. Just dump the results of the query to a datatable.
https://gallery.technet.microsoft.com/scriptcenter/4208a159-a52e-4b99-83d4-8048468d29dd
This script dumps to a datatable object which can now be used directly in a bulkcopy.
Here is an example of how easy this is, using your script:
$AllADUsers = Get-ADUser -Server $ADServer -SearchBase $searchbase -Filter * -Properties GivenName,SN,EmployeeID,objectGUID |
    Where-Object {
        $_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' -and
        $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'
    } |
    Select-Object @{N='UNID';E={$_.objectGuid}},
        @{N='FirstName';E={$_.GivenName}},
        @{N='LastName';E={$_.sn}},
        @{N='EmployeeNo';E={$_.EmployeeID}} |
    Out-DataTable
$AllADUsers is now a DataTable. You can just upload it.
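From there the upload is only a few lines; a sketch reusing the connection string and table variables from the original script (Out-DataTable is the gallery function linked above):
# SqlBulkCopy accepts a DataTable directly; no CSV or encoding round-trip needed.
$bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring)
$bulkcopy.DestinationTableName = $table
$bulkcopy.WriteToServer($AllADUsers)
$bulkcopy.Close()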
¯\_(ツ)_/¯

Similar Messages

  • First Row Record is not inserted from CSV file while bulk insert in sql server

    Hi Everyone,
    I have a csv file that needs to be inserted into SQL Server. The csv file format will be like below:
    1,Mr,"x,y",4
    2,Mr,"a,b",5
    3,Ms,"v,b",6
    While bulk inserting, it considers the 2nd column as two values (comma-separated) and makes two entries, so I used FieldTerminator.xml.
    Now the fields are entered into the columns correctly. But the problem is that the first row of the csv file is not read into SQL Server. When I remove the terminator I get all the records, but I must use the format file above, and when I do, the first-row record is not picked up.
    Please suggest a solution.
    Thanks,
    Selvam

    Hi,
    I have a csv file (comma(,) delimited) like this, which is to be inserted into SQL Server. The format of the file when opened in Notepad is like below:
    Id,FirstName,LastName,FullName,Gender
    1,xx,yy,"xx,yy",M
    2,zz,cc,"zz,cc",F
    3,aa,vv,"aa,vv",F
    Below is the bulk insert query used to insert the above records:
    EXEC('BULK INSERT EmployeeData FROM ''' + @FilePath + ''' WITH
    (FORMATFILE = ''d:\FieldTerminator.xml'',
    ROWTERMINATOR = ''\n'',
    FIRSTROW = 2)')
    Here, I have used a format file because the "FullName" field has a comma(,) within it. The format file is:
    The problem is that the first record (1,xx,yy,"xx,yy",M) is skipped when I use the format file. When I remove the format file from the query, it takes all the records, but then the "FullName" field causes problems because of the comma(,) within the
    field. So I must use the format file to handle this. Please suggest why the first record is always skipped when I use the format file above.
    If I give "FIRSTROW=1" in the bulk insert, it shows the "String or binary data would be truncated.
    The statement has been terminated." error. I have checked the datatype lengths.
    Please update me the solution.
    Regards,
    Selvam. M
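    No resolution is recorded in this thread, but one route worth noting: on SQL Server 2017 and later, BULK INSERT understands quoted CSV natively (FORMAT = 'CSV', with '"' as the default field quote), so the format file can often be dropped altogether. A rough sketch via Invoke-Sqlcmd, to match the PowerShell thread at the top of this page; the file path and connection variables are hypothetical:
    $queryCsvInsert = "BULK INSERT EmployeeData FROM 'd:\EmployeeData.csv' WITH (FORMAT = 'CSV', FIRSTROW = 2);"
    Invoke-Sqlcmd -ServerInstance $sqlserver -Database $database -Query $queryCsvInsert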

  • Using SQL bulk copy throwing exception - The given value of type String from the data source cannot be converted to type int of the specified target column

    Hi All,
    I am reading notepad files and inserting the data into SQL tables.
    While performing the SQL bulk copy, the line bulkcopy.WriteToServer(dt); throws the data-type exception mentioned in the subject.
    Please go through my logic and tell me what to change to avoid this error:
    public void Main()
    {
        Dts.TaskResult = (int)ScriptResults.Success;
        string[] filePaths = Directory.GetFiles(@"C:\Users\jainruc\Desktop\Sudhanshu\master_db\Archive\test\content_insert\");
        for (int k = 0; k < filePaths.Length; k++)
        {
            string[] lines = System.IO.File.ReadAllLines(filePaths[k]);
            // table name needs to be extracted after the = sign
            string[] pathArr = filePaths[0].Split('\\');
            string tablename = pathArr[9].Split('.')[0];
            DataTable dt = new DataTable(tablename);
            // second line of the file holds the pipe-delimited column names
            string[] arrColumns = lines[1].Split(new char[] { '|' });
            foreach (string col in arrColumns)
                dt.Columns.Add(col);
            for (int i = 2; i < lines.Length; i++)
            {
                string[] columnsvals = lines[i].Split(new char[] { '|' });
                DataRow dr = dt.NewRow();
                for (int j = 0; j < columnsvals.Length; j++)
                {
                    if (string.IsNullOrEmpty(columnsvals[j]))
                        dr[j] = DBNull.Value;
                    else
                        dr[j] = columnsvals[j];
                }
                dt.Rows.Add(dr);
            }
            SqlConnection conn = new SqlConnection();
            conn.ConnectionString = "Data Source=UI3DATS009X;" + "Initial Catalog=BHI_CSP_DB;" + "User Id=sa;" + "Password=3pp$erv1ce$4";
            conn.Open();
            SqlBulkCopy bulkcopy = new SqlBulkCopy(conn);
            bulkcopy.DestinationTableName = dt.TableName;
            bulkcopy.WriteToServer(dt);
            conn.Close();
        }
    }
    Issue 1:
    I am reading the notepad file and getting all columns and values into my data table. For date/time or integer fields I need an explicit conversion - how do I write that for specific columns before bulkcopy.WriteToServer(dt);?
    Issue 2: The notepad file does not contain all columns, nor in a specific sequence. I can add the few missing columns, which I am doing now, but then the data table holds my added columns plus the notepad columns - when inserting, how do I assign values to the particular columns?
    sudhanshu sharma Do good and cast it into river :)

    Hi,
    I think you'll have to do an explicit column mapping if the columns are not in the same sequence in source and destination.
    Have a look at this link:
    https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopycolumnmapping(v=vs.110).aspx
    Good Luck!
    Kaur.
    Please mark as answer if this resolves your issue.
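    To make that concrete: the mapping collection lets the bulk copy match columns by name instead of by ordinal position, so extra, missing, or reordered columns stop mattering. A minimal sketch, written in PowerShell to match the main thread of this page (the same ColumnMappings.Add call exists in C#); the column names are hypothetical:
    # Map source DataTable columns to destination table columns by name.
    $bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring)
    $bulkcopy.DestinationTableName = $dt.TableName
    $null = $bulkcopy.ColumnMappings.Add('EmployeeNo', 'EmployeeNumber')
    $null = $bulkcopy.ColumnMappings.Add('FirstName', 'FirstName')
    $bulkcopy.WriteToServer($dt)
    Note that once any mapping is added, every column you want copied must be mapped explicitly.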

  • SQL*Loader: loading specific columns from CSV file to the table

    Dear All,
    I am loading specific columns from a .CSV file into an Oracle table.
    Could you please help with how I can load only those columns into the table?
    E.g. the CSV file has id, Frst_name, Last_name, Address, Phone, Insurance etc.
    Out of these I want to load only the Frst_name and Last_name columns into Oracle table columns, say fname and lname.
    Thanks in Adv.
    Junu

    Lily,
    I made some changes to your table def but you will get the idea
    -- Table EMPLOYEE
    CREATE TABLE EMPLOYEE
    (
      EMPID        NUMBER                           NOT NULL,
      EMPNICKNAME  VARCHAR2(10 BYTE)                    NULL,
      FNAME        VARCHAR2(20 BYTE)                NOT NULL,
      MI           VARCHAR2(20 BYTE)                    NULL,
      LNAME        VARCHAR2(20 BYTE)                NOT NULL,
      FULLNAME     VARCHAR2(20 BYTE)                NOT NULL,
      HIREDATE     DATE                             DEFAULT SYSDATE               NOT NULL
    )
    --  data file employee.dat
    1,amy,b,amy b
    2,cindy,d,cindy d
    3,eric,f,eric f
    4,gary,h,gary
    -- Control file: Employee.ctl (you can use truncate, replace or append; see sqlldr for more options)
    load data
    truncate into table employee
    fields terminated by ","
    optionally enclosed by '"'
    TRAILING NULLCOLS
    (
    empId INTEGER EXTERNAL,
    FName char(20),
    LName char(20),
    FullName char(30)
    )
    Now to load, use the following (or you can specify infile in the control file):
    sqlldr username/password control=employee.ctl data=employee.dat log=employee.log
    Hope this helps.
    Regards
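    For the record, SQL*Loader can also skip unwanted columns in place by declaring them FILLER in the control file's column list. Another option is to pre-cut the file down to the wanted columns before loading; a sketch in PowerShell (matching the main thread of this page), with hypothetical file names and assuming a header row:
    Import-Csv 'employee_full.csv' |
        Select-Object Frst_name, Last_name |
        Export-Csv 'employee_cut.csv' -NoTypeInformation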

  • Import From CSV File statement runs forever, no error, does not finish

    Hello,
    I am trying to import a CSV file in a JAVA program, with the following statement:
    IMPORT FROM CSV FILE '/debug/testdatabase/FILE.csv'
    INTO "JOSEPH"."TEST_TABLE"
    WITH COLUMN LIST IN FIRST ROW
    RECORD DELIMITED BY '\n'
    FIELD DELIMITED BY '\t' ERROR LOG '/debug/testdatabase/file.err'
    THREADS 10
    BATCH 10000
    I have two HANA instances on different machines A and B:
    Both machines run HANA version 1.00.74.00.389160 (NewDB100_REL), while the OS is
    SUSE Linux Enterprise Server 11.1 on machine A and
    SUSE Linux Enterprise Server 11.2 on machine B.
    The statement above runs fine on machine A and the rows are imported properly from JAVA as well as when executed from HANA Studio SQL console.
    If I copy the file to machine B and try the exact same statement with the same file, it does not finish (neither from JAVA nor from HANA Studio SQL console). There is no error either. It cannot be cancelled, only a HANA restart stops the statement. Also the sample file I use has only 2 rows, and memory does not seem to be a problem.
    I seem to have a similar problem to the one described here, but the answers there do not help me: http://scn.sap.com/thread/3396582 I specified the record delimiter, and I used a python script to check for any strange characters that are not supposed to be there, but didn't find any.
    If I copy the file to my windows PC and use the "File Menu -> Import -> SAP Hana Content -> Data from Local file" function, it imports the file correctly into B, but I need to be able to do it from JAVA.
    Machine A administration view:
    Machine B administration view:
    If you have any idea what might cause this behavior or where I can find more information on this problem please give me a hint.

    Hi Joseph,
    First, from the pics, the revision of your SAP HANA instance is 73 instead of 74. Since I have no identical environment, I cannot test it for you. But can you try the simplest scenario? Create a table with only one column and try to import a CSV file with only one row.
    Best regards,
    Wenjun

  • Loading data from .csv file into Oracle Table

    Hi,
    I have a requirement where I need to populate data from a .csv file into an Oracle table.
    Is there any mechanism I can follow for this?
    Any help will be appreciated.
    Thanks and regards

    You can use SQL*Loader or external tables for your requirement.
    Missed Karthick's post... already there :)
    Edited by: Rajneesh Kumar on Dec 4, 2008 10:54 AM

  • How to load the data from .csv file to oracle table???

    Hi,
    I am using Oracle 10g and PL/SQL Developer. Can anyone help me with how to load data from a .csv file into an Oracle table? The table is already created with the required columns. The .csv file has about 10 lakh (1,000,000) records. Is it possible to load 10 lakh records? Can anyone please tell me how to proceed?
    Thanks in advance

    981145 wrote:
    Can you tell more about SQL*Loader? How do I know whether that utility is available to me or not? I am using an Oracle 10g database and PL/SQL Developer.
    SQL*Loader is part of the Oracle client. If you have a developer installation you should normally have it on your client.
    The command is
    sqlldr
    Type it and see if you have it installed.
    Have a look also at the FAQ link posted by Marwin.
    There are plenty of examples also on the web.
    Regards.
    Al

  • Importing users into WGM from csv file issues/crash

    Hi,
    I've been importing user information from csv files into WGM via the Server > Import function.
    It worked the first few times, but now when I try, the import progress bar pops up and promptly disappears without anything importing.
    I've tried restarts, a new admin account, and reinstalling WGM.
    I've also trashed some prefs, but I don't really know which ones I should be losing.
    The server is an OD master.
    Any help would be appreciated.
    As a last resort, what do I need to back up/save if I were to format/reinstall OS X Server, keeping my settings etc.?
    thanks
    paul

    What I did was:
    Exported the user list, to create an XML file in the correct format.
    Using this format, I created a spreadsheet in Excel (sorry Apple), and in the final column I created a field that concatenated the information I wanted in the ':' delimited format of the previously exported XML.
    Then just copy and paste via pico into a pure text file and import that.
    You have to be careful with comments in Passenger; special characters (';!@#$%^ and others) can cause WGM to fail and crash.

  • Trigger while importing from .csv file

    hey
    I am importing data from a .csv file into a table called temp.
    The .csv file has a sales column; the data in it also contains some empty fields, 'N/A', or '-'.
    The sales column in my table is NUMBER, so some rows cannot be imported.
    How do I write a trigger to eliminate '-', 'N/A', or ' ' values from the sales column before inserting into the table?

    It might be easier if the field on temp were a varchar2 field instead.
    The values should be loaded into this field, and you can then use a trigger to test the values and convert them into numbers or nulls before inserting into the proper data table.
    Check out the TO_NUMBER() function in SQL for info on how to convert strings to numbers.
    Regards
    Andy

  • Error when executing interface which loads data from csv file which has 320 columns

    Hi,
    Can someone provide a resolution for the error below?
    I have created an interface which loads data from a csv file with 320 columns into a synonym which also has 320 columns,
    using LKM File to SQL and IKM SQL Control Append.
    I am getting the following error when executing the interface:
    com.sunopsis.tools.core.exception.SnpsSimpleMessageException: ODI-17517: Error during task interpretation. Task: 6 java.lang.Exception: BeanShell script error: Sourced file: inline evaluation of: ``out.print("The application script threw an exception: java.lang.StringIndexOutOf . . . '' Token Parsing Error: Lexical error at line 2, column 42. Encountered: "\\" (92), after : "": <at unknown location> BSF info: Create external table at line: 0 column: columnNo
    at com.sunopsis.dwg.codeinterpretor.SnpCodeInterpretor.transform(SnpCodeInterpretor.java:485)
         at com.sunopsis.dwg.dbobj.SnpSessStep.createTaskLogs(SnpSessStep.java:711)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:461)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: org.apache.bsf.BSFException: BeanShell script error: Sourced file: inline evaluation of: ``out.print("The application script threw an exception: java.lang.StringIndexOutOf . . . '' Token Parsing Error: Lexical error at line 2, column 42. Encountered: "\\" (92), after : "": <at unknown location>
    BSF info: Create external table at line: 0 column: columnNo
         at bsh.util.BeanShellBSFEngine.eval(Unknown Source)
         at bsh.util.BeanShellBSFEngine.exec(Unknown Source)
         at com.sunopsis.dwg.codeinterpretor.SnpCodeInterpretor.transform(SnpCodeInterpretor.java:471)
         ... 11 more
    Text: The application script threw an exception: java.lang.StringIndexOutOfBoundsException: String index out of range: 2 BSF info: Create external table at line: 0 column: columnNo
    out.print("createTblCmd = r\"\"\"\ncreate table ") ;
    out.print(odiRef.getTable("L", "COLL_NAME", "W")) ;
    out.print("<?=(extTabColFormat.getUseView())?\"_ET\":\"\"?>\n(\n\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\t"+
              "<?=extTabColFormat.getExtTabDataType(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022[DEST_WRI_DT]\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
         , ",\\n\\t", "","")) ;
    out.print("\n)\nORGANIZATION EXTERNAL\n(\n\tTYPE ORACLE_LOADER\n\tDEFAULT DIRECTORY dat_dir\n\tACCESS PARAMETERS\n\t(\n\t\tRECORDS DELIMITED BY 0x'") ;
    out.print(odiRef.getSrcTablesList("[XFILE_SEP_ROW]","")) ;
    out.print("'\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_CHARACTERSET")) ;
    out.print("\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_STRING_SIZE")) ;
    out.print("\n\t\tBADFILE\t\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.bad'\n\t\tLOGFILE\t\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.log'\n\t\tDISCARDFILE\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.dsc'\n\t\tSKIP \t\t") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_FIRST_ROW]", "", "")) ;
    out.print("\n") ;
    if (odiRef.getSrcTablesList("", "[FILE_FORMAT]", "", "").equals("F")) {out.print("\n\t\tFIELDS\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_MISSING_FIELD")) ;
    out.print("\n\t\t(\n\t\t\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\tPOSITION([FILE_POS]:[FILE_END_POS])\\t"+
                        "<?=extTabColFormat.getExtTabFormat(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022DEST_WRI_DT\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
                        , ",\\n\\t\\t\\t", "","")) ;
    out.print("\t\t\n\t\t)\n\t)\n") ;
    } else {out.print("\n\t\tFIELDS TERMINATED BY x'") ;
    out.print(odiRef.getSrcTablesList("", "[XFILE_SEP_FIELD]", "", "")) ;
    out.print("'\n\t\t") ;
    if(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").equals("")){out.print("\n\t\t") ;
    } else {out.print("OPTIONALLY ENCLOSED BY '") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(0,1)) ;
    out.print("' AND '") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(1,2)) ;
    out.print("' ") ;
    }out.print("\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_MISSING_FIELD")) ;
    out.print("\n\t\t(\n\t\t\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\t"+
                        "<?=extTabColFormat.getExtTabFormat(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022DEST_WRI_DT\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
                        , ",\\n\\t\\t\\t", "","")) ;
    out.print("\t\t\n\t\t)\n\t)\n") ;
    }out.print("\tLOCATION (") ;
    out.print(odiRef.getSrcTablesList("", "'[RES_NAME]'", "", "")) ;
    out.print(")\n)\n") ;
    out.print(odiRef.getUserExit("EXT_PARALLEL")) ;
    out.print("\nREJECT LIMIT ") ;
    out.print(odiRef.getUserExit("EXT_REJECT_LIMIT")) ;
    out.print("\n\"\"\"\n \n# Create the statement\nmyStmt = myCon.createStatement()\n \n# Execute the trigger creation\nmyStmt.execute(createTblCmd)\n \nmyStmt.close()\nmyStmt = None\n \n# Commit, just in case\nmyCon.commit()") ;
    ****** ORIGINAL TEXT ******
    createTblCmd = r"""
    create table <%=odiRef.getTable("L", "COLL_NAME", "W")%><?=(extTabColFormat.getUseView())?"_ET":""?>
         <%=odiRef.getColList("", "[CX_COL_NAME]\t"+
              "<?=extTabColFormat.getExtTabDataType(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022[DEST_WRI_DT]\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
         , ",\n\t", "","")%>
    ORGANIZATION EXTERNAL
         TYPE ORACLE_LOADER
         DEFAULT DIRECTORY dat_dir
         ACCESS PARAMETERS
              RECORDS DELIMITED BY 0x'<%=odiRef.getSrcTablesList("[XFILE_SEP_ROW]","")%>'
              <%=odiRef.getUserExit("EXT_CHARACTERSET")%>
              <%=odiRef.getUserExit("EXT_STRING_SIZE")%>
              BADFILE          '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.bad'
              LOGFILE          '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.log'
              DISCARDFILE     '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.dsc'
              SKIP           <%=odiRef.getSrcTablesList("", "[FILE_FIRST_ROW]", "", "")%>
    <% if (odiRef.getSrcTablesList("", "[FILE_FORMAT]", "", "").equals("F")) {%>
              FIELDS
              <%=odiRef.getUserExit("EXT_MISSING_FIELD")%>
                   <%=odiRef.getColList("", "[CX_COL_NAME]\tPOSITION([FILE_POS]:[FILE_END_POS])\t"+
                        "<?=extTabColFormat.getExtTabFormat(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022DEST_WRI_DT\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
                        , ",\n\t\t\t", "","")%>          
    <%} else {%>
              FIELDS TERMINATED BY x'<%=odiRef.getSrcTablesList("", "[XFILE_SEP_FIELD]", "", "")%>'
              <% if(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").equals("")){%>
              <%} else {%>OPTIONALLY ENCLOSED BY '<%=odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(0,1)%>' AND '<%=odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(1,2)%>' <%}%>
              <%=odiRef.getUserExit("EXT_MISSING_FIELD")%>
                   <%=odiRef.getColList("", "[CX_COL_NAME]\t"+
                        "<?=extTabColFormat.getExtTabFormat(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022DEST_WRI_DT\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
                        , ",\n\t\t\t", "","")%>          
    <%}%>     LOCATION (<%=odiRef.getSrcTablesList("", "'[RES_NAME]'", "", "")%>)
    <%=odiRef.getUserExit("EXT_PARALLEL")%>
    REJECT LIMIT <%=odiRef.getUserExit("EXT_REJECT_LIMIT")%>
    # Create the statement
    myStmt = myCon.createStatement()
    # Execute the trigger creation
    myStmt.execute(createTblCmd)
    myStmt.close()
    myStmt = None
    # Commit, just in case
    myCon.commit().
         at com.sunopsis.dwg.dbobj.SnpSessStep.createTaskLogs(SnpSessStep.java:738)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:461)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)

    The issue is encountered because the text delimiter used in the source file did not consist of a pair of delimiters.
    Please see support Note [ID 1469977.1] for details.

  • Help Me... How to get Data from CSV File...?

    Hi Everyone..!
    This is Yajiv, and I am working on the CS3 Photoshop platform. I know JavaScript. Is it possible to get data from CSV files? Our client sends us CSV files which contain a lot of swatch names and reference files under one particular image name.
    How we work with the CSV file is: first we copy the file name and search the CSV file for it, then paste the result into the layer name. This process continues till the end of the swatches.
    Thanks in advance
    -yajiv

    > Is it possible to get the data from CSV files.
    Have you tried searching this forum?

  • Display data from CSV file in iWeb page

    Hi,
    I would like to display data from a CSV file in an iWeb page if a date value from the CSV file matches today's date from the system. Here is an example.
    CSV data values
    01/20/2011,Sunny,87
    01/21/2011,Cloudy,100
    01/22/2011,Rainy,60
    If today's date value is 01/21/2011, the page should display "01/21/2011 Cloudy 100" in a tabular format.
    I would appreciate your help with the HTML code for this.
    Thanks

    I suspect there is a soft return in the Excel database somewhere that can't be seen. Take the csv/txt file into Notepad and look for a line that starts oddly compared to the others.
    I haven't had luck removing soft returns from Excel files, so I do this a rather odd way: I take the Excel file into InDesign as a table, use find/change to replace any soft returns with nothing, then convert the table back to text and export the text out again (File > Export, selecting Text from the dropdown menu).
    For my money, I always save tab-delimited text files from Excel, so that if a field does contain commas it doesn't "trick" InDesign into thinking a new field is beginning; the field delimiters are tabs, which are unlikely to have been used in the Excel database.
    If you do choose to use this InDesign import method of mine to clean up the database, I also noticed two things in your screengrab. First, some fields have spaces at the start of the text; easy enough to fix with a GREP that looks for ^\s (start of a sentence followed by a space) and replaces it with nothing. Second, the T&C field entries (at least in the screengrab) all start the same; if all entries in the database start the same, couldn't that line live in the InDesign file? It's only a small detail, I know.

  • Loading data from .csv file into existing table

    Hi,
    I have taken a look at several threads about loading data from a .csv file into an existing or new table, and also checked out Vikas's application on the same. Let me explain my requirement with an example.
    I have a .csv file and I want the data loaded into an existing table. The timesheet table columns are:
    timesheet_entry_id, time_worked, timesheet_date, project_key.
    The csv columns are:
    project, utilization, project_key, timesheet_category, employee, timesheet_date, hours_worked etc.
    What I need to know is whether, before the csv data is loaded into the timesheet table, there is any way of validating the project_key (the primary key of the projects table) against the projects table. I need to perform similar validations on other columns, such as customer_id from the customers table. Basically, the load should happen only after validating that the data exists in the parent tables. Has anyone done this kind of loading through the APEX data-load utility? Or is there another method of accomplishing it?
    Does Vikas's application do what the utility does (I am assuming that, the code being from 2005, the utility was not incorporated into APEX at that time)? Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads, which outline different ways to do it:
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
    You can also create hidden items on the page to validate records before inserting the data.
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com

  • Loading records from .csv file to SAP table via SAP Program

    Hi,
    I have a .csv file with 132,869 records and I am trying to load it into an SAP table with a customized SAP program.
    After executing the program, only 99,999 records are loaded into the table.
    Is there some setting that defines how many records can be loaded into a table? Or what else could be the problem?
    Please advise.
    Thanks!!!

    hi Arun,
    A datasource needs an extract structure to fetch data. It is nothing but a temp table to hold data.
    First you need to create a table in SE11 with fields matching those in the CSV file.
    Then you need to write a report program to read your CSV file and populate your table in BW.
    Then you can create a datasource on top of this table.
    After that, replicate and load the data into the PSA and on to the upper flow.
    Regards,
    Jaya Tiwari

  • How to refer/store a value from csv file in control file

    Hi,
    Consider the following control file script.
    Load data
    infile 'suv.csv'
    append into table mast_equipmnet_test
    fields terminated by "," optionally enclosed by '"'
    TRAILING NULLCOLS
    (
    equipment_id,
    sub_vehicle_type,
    ebiz_carrier_no expression "(select ebiz_carrier_no from mast_carrier where carrier_id=?)",
    licence_no,
    equip_type,
    ebiz_appown_no,
    ebiz_equip_no sequence(1,1)
    )
    here is my csv file
    CABNO, SUBTYPE, CARRIER_ID, REG_NO, VEHICLE_TYPE, EBIZ_APPOWN_NO
    6954, SUMO, SWAMY, 6954, SUV, 228
    9183, SUMO, SWAMY, 9183, SUV, 228
    3173, QUALIS, SWAMY, 3173, SUV, 228
    In my csv file I have carrier_ids, which are string values, in the 3rd column.
    For every carrier_id, the corresponding ebiz_carrier_no (a numeric value) is stored in a master table called "mast_carrier".
    While loading the data I need to fetch the ebiz_carrier_no for each carrier_id from the mast_carrier table,
    but I got stuck on the where clause of the select statement:
    I am not able to refer to the carrier_id from the csv file in the where clause.
    Can anybody tell me how to refer to a value from the csv file in the select statement of the control file script?
    cheers
    RRK

    Sorry..
    "EXPRESSION" is not needed:
    ebiz_carrier_no "(select ebiz_carrier_no
    from mast_carrier
    where carrier_id = :ebiz_carrier_no)",
    "Tested" as:
    load data
    infile *
    into table t truncate
    fields terminated by ','
    (id,
    name "(select ename from emp
    where empno = :name)"
    )
    begindata
    1,7900
    2,7902
    SQL> select * from t;
           ID NAME
            1 JAMES
            2 FORD
