Error importing CSV files with 'hidden' characters using External Table

Hi Folks
Bit of a strange one here.
We're well used to using the External Table method of loading data from CSV files into the database but a recent event has presented us with a problem.
We have received some CSV files that 'look' like regular CSV files but Oracle will not load them.
When we examined the CSV file using VIM on a UNIX box we saw the following 'hidden' characters between every regular character in the file.
^@

So a string that looks like this when opened in Excel/Wordpad etc.:

"TEST","TEXT"

looks like this when examined with VIM:

^@"^@T^@E^@S^@T^@"^@,^@"^@T^@E^@X^@T^@"

Has anyone come across this before?
Many thanks
Simon Gadd
Oracle 11g 11.2.0.1.0

Hi Simon,
^@ represents the NUL character (0x00).
So, most likely, you've got a Unicode-encoded file.
You'll have to specify the character set in the record specification (and, if necessary, the byte order mark), for instance:
CREATE TABLE ext_table (
  col1 VARCHAR2(10),
  col2 VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY dump_dir
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET 'UTF16'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('dump.csv')
)
REJECT LIMIT UNLIMITED;

http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/et_params.htm#i1009499
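If the file starts with a byte order mark, the same documentation page covers the endianness clauses. A minimal sketch of the relevant record_format_info options (assuming a little-endian UTF-16 file; adjust to what the file actually contains):

  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET 'UTF16'
    DATA IS LITTLE ENDIAN
    BYTEORDERMARK CHECK
    FIELDS TERMINATED BY ','
  )

With BYTEORDERMARK CHECK, the access driver inspects the start of the file for a mark and adjusts the byte order accordingly.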

Similar Messages

  • Error while fetching data from OWB Client using External Table.

    Dear All,
    I am using Oracle Warehouse Builder 11g & Oracle 10gR2 as repository database on Windows 2000 Server.
    I am facing an issue fetching data from a flat file using an external table from the OWB Client.
    I have performed all the steps without any error, but when I try to view the data, I get the following error.
    ======================================
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-04040: file expense_categories.csv in SOURCE_LOCATION not found
    ORA-06512: at "SYS.ORACLE_LOADER", line 19
    java.sql.SQLException: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-04040: file expense_categories.csv in SOURCE_LOCATION not found
    ORA-06512: at "SYS.ORACLE_LOADER", line 19
         at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
         at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1030)
         at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
         at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:774)
         at oracle.jdbc.driver.T4CStatement.executeMaybeDescribe(T4CStatement.java:849)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
         at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1377)
         at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:386)
         at oracle.wh.ui.owbcommon.QueryResult.<init>(QueryResult.java:18)
         at oracle.wh.ui.owbcommon.dataviewer.relational.OracleQueryResult.<init>(OracleDVTableModel.java:48)
         at oracle.wh.ui.owbcommon.dataviewer.relational.OracleDVTableModel.doFetch(OracleDVTableModel.java:20)
         at oracle.wh.ui.owbcommon.dataviewer.RDVTableModel.fetch(RDVTableModel.java:46)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerPanel$1.actionPerformed(BaseDataViewerPanel.java:218)
         at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1849)
         at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2169)
         at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:420)
         at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:258)
         at javax.swing.AbstractButton.doClick(AbstractButton.java:302)
         at javax.swing.AbstractButton.doClick(AbstractButton.java:282)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerPanel.executeQuery(BaseDataViewerPanel.java:493)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerEditor.init(BaseDataViewerEditor.java:116)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerEditor.<init>(BaseDataViewerEditor.java:58)
         at oracle.wh.ui.owbcommon.dataviewer.relational.DataViewerEditor.<init>(DataViewerEditor.java:16)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
         at oracle.wh.ui.owbcommon.IdeUtils._tryLaunchEditorByClass(IdeUtils.java:1412)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1349)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1367)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:869)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:856)
         at oracle.wh.ui.console.commands.DataViewerCmd.performAction(DataViewerCmd.java:19)
         at oracle.wh.ui.console.commands.TreeMenuHandler$1.run(TreeMenuHandler.java:188)
         at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:461)
         at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:242)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:163)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:157)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:149)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:110)
    ===========================
    The error says that file expense_categories.csv was not found in SOURCE_LOCATION, but I am 100% sure the file is very much there.
    Has anybody faced the same issue?
    Do we need to configure something before loading data from a flat file from the OWB Client?
    Any help would be highly appreciated.
    Regards,
    Manmohan Sharma

    Hi Detlef / Gowtham,
    Now I am able to fetch data from flat files from the OWB Server as well as the OWB Client.
    One way I achieved this, as suggested by you:
    1) Created the location on the OWB Client
    2) Sampled the files at the client
    3) Created & configured the external table
    4) Copied all flat files onto the OWB Server
    5) Updated the location which I created at the client
    The other way:
    1) Created the location on the OWB Client
    2) Sampled the files at the client
    3) Created & configured the external table
    4) Copied the flat files onto the server into the same drive & directory, e.g. if all my flat files are in C:\data at the OWB Client, then I copied them to C:\data on the OWB Server. This is not feasible on non-Windows servers, though, where the drive layout differs.
    Hence my problem is solved.
    Thanks a lot.
    Regards,
    Manmohan
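    For reference, KUP-04040 almost always means the file is not visible on the database server itself: the ORACLE_LOADER driver reads from the server's file system, not the client's. A minimal sketch for checking and fixing this (the external table name here is hypothetical):

    -- Where does the directory object actually point on the server?
    SELECT directory_name, directory_path
      FROM all_directories
     WHERE directory_name = 'SOURCE_LOCATION';

    -- If the file lives elsewhere on the server, repoint the table:
    ALTER TABLE expense_categories_ext
      DEFAULT DIRECTORY source_location
      LOCATION ('expense_categories.csv');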

  • Importing CSV file with Data Merge Fails

    Specs
    See pasted text from CSV at http://pastebin.com/mymhugpN
    I am using InDesign CS6 (8.0.1)
    I created the CSV by downloading it from a Google Spreadsheet as a CSV. I confirmed in the Terminal, using the file command, that the character encoding is UTF-8.
    Problem detailed
    I am trying to import a CSV file (utf-8) with Data Merge via the Select Data Source... command with Show Import Options checked. When viewing the Data Source Import Options dialog, I set the following options: Delimiter: Comma, Encoding: Unicode, Platform: Macintosh. I leave Preserve Spaces in Data Source unchecked. It fails to import any variables and produces no error message. I have tried other CSV files as well (created in TextEdit, Espresso, etc.), and it seems that InDesign will not import any files if Unicode is specified as the encoding, no matter which other options are specified.
    Can anyone else confirm this?
    Importing as ASCII works, but obviously does not display my content correctly.

    Mike is having some trouble posting in this thread (and I am too), but he sent me a PM with what he wanted to say:
    OK. I think I might have a positive answer for you.
    I was getting lost in the upper ASCII characters you showed. In your test file I never could see any--a case of not seeing the trees for the forest.
    Your quote marks are getting dropped in your test file. Now, this may or may not affect other factors but it does in some further testing. I believe ID has an issue with dropping quote marks even in a plain ASCII file if the marks are at the beginning of a sentence and the file is tab delimited. Call it a bug.
    Because of all the commas and quote marks in your simple file, I think you should be exporting from Google Docs' spreadsheet as a tab-delimited file. This exported file has to be opened in a text editor capable of saving it out as a UTF-16 BE (Big Endian) type of file.
    Also, I think you are going to have to use proper quote marks throughout, or change them in the exported tab-delimited file. Best to have a correct source, though.
    Here is your sample ZIPped up. I think it works properly. But then again, I think I might be bleary-eyed by now.
    http://www.wenzloffandsons.com/temp/merge_psalms_utf-16.zip
    Take care, Mike

  • How to import csv file with multiple tables into sql server

    I have multiple csv files, each with one sheet but 130 headers, with each header having different data.
    I'd like to import each one of these header rows with its data into its own file in SQL Server.
    I know very basic SSIS but am not familiar with the scripting in it, which is what I assume I'd have to use.
    Each header in the csv file is structured as such (also see example pic):
    The first header would be this:
        ITEM = ORG_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    The second header would be this:
        ITEM = TER_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    The headers can be at any random row number, and the data size in each file differs, but each block starts with "ITEM =" and then, in the next row, "DATE =".
    I could also convert these to Excel files if it makes this process easier.

    Why don't you put a filter on D3, filter out the blanks, copy/paste to a new CSV file, save it, and import it?
    There's no way you're going to get SQL to do that kind of thing for you. The language is for set-based operations, not for complex data manipulation tasks.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
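    As a side note, the splitting itself can be scripted outside SQL before any import. A rough sketch in PowerShell (the paths are hypothetical; the only assumption taken from the question is that every block starts with a line beginning "ITEM ="):

    # Split one multi-block CSV into one file per block, ready for bulk import
    $blockNum = 0
    $writer = $null
    Get-Content "C:\data\multi_block.csv" | ForEach-Object {
        if ($_ -match '^\s*ITEM =') {
            # a new "ITEM =" line opens the next block: close the old file, start a new one
            if ($writer) { $writer.Close() }
            $blockNum++
            $writer = [System.IO.StreamWriter]::new("C:\data\block_$blockNum.csv")
        }
        if ($writer) { $writer.WriteLine($_) }
    }
    if ($writer) { $writer.Close() }

    Each block_N.csv can then be bulk-loaded into its own table.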

  • Loading a CSV file with Umlaut characters (àáä)

    Hi,
    We are uploading a CSV file through a custom JSP page built on the Oracle JTF framework.
    The JSP page loads the data into the FND_LOBS table using the JTF object oracle.apps.jtf.amv.ServletUploader.
    The CSV file is stored properly in the FND_LOBS table, umlaut characters included.
    Now the JSP page invokes a Java object to read and parse the data. We select the data into a BLOB object first and then use an InputStreamReader to get the data.
    Here is the sample code:
    oraclepreparedstatement = (OraclePreparedStatement)oracleconnection.prepareStatement(" SELECT FILE_DATA FROM FND_LOBS WHERE FILE_ID = :1 ");
    oraclepreparedstatement.defineColumnType(1, 2004);
    oraclepreparedstatement.setLong(1, <file id>);
    oracleresultset = (OracleResultSet)oraclepreparedstatement.executeQuery();
    blob = (BLOB)oracleresultset.getObject(1);
    InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream());
    lineReader = new LineNumberReader(inputstreamreader);
    lCSVLine = lineReader.readLine();
    I tried printing the character set used by the InputStreamReader and it returned ASCII.
    I then tried setting different character sets to read the umlaut (German) characters, but nothing has worked, e.g.:
    InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream(),"UTF-8");
    Can someone please let me know where and how to set the Character Set to accept the Umlaut characters like àáä?
    Thanks,
    Anji

    Thank you for the quick response.
    Requirement:
    I need to retrieve the BLOB object with umlaut characters from the database, parse the data with a comma delimiter into strings, and store them in the database.
    I am viewing the umlaut data from the database table using the TOAD utility tool.
    I tried the same code example provided above but it is not working as expected. The umlaut characters are translated to 'ýýý'.
    CODE EXAMPLE:
    Input:
    create table test_umlaut (sno NUMBER, col1 VARCHAR2(100), col3 BLOB);
    insert into test_umlaut(sno,col3) values(200, utl_raw.cast_to_raw('äöüÄÖÜ'));
    Note: Verified that the database is showing the umlaut characters on selecting the col3 and storing in a flat file
    --- code
    OraclePreparedStatement oraclepreparedstatement10 = null;
    OracleResultSet rs = null;
    oraclepreparedstatement10 = (OraclePreparedStatement)oracleconnection.prepareStatement(" SELECT col3 FROM test_umlaut WHERE sno = 200 ");
    oraclepreparedstatement10.defineColumnType(1, 2004);
    rs = (OracleResultSet)oraclepreparedstatement10.executeQuery();
    while (rs.next()) {
        BLOB b = (BLOB)rs.getObject(1);
        InputStream is = b.getBinaryStream();
        InputStreamReader r = new InputStreamReader(is, "UTF-8");
        BufferedReader br = new BufferedReader(r);
        String line;
        while ((line = br.readLine()) != null) {
            System.out.println(line);
            OraclePreparedStatement oraclepreparedstatement12 =
                (OraclePreparedStatement)oracleconnection.prepareStatement(" INSERT INTO test_umlaut(sno,col1) VALUES (300,?) ");
            oraclepreparedstatement12.setString(1, line);
            oraclepreparedstatement12.executeUpdate();
            oraclepreparedstatement12.close();
        }
        br.close();
        r.close();
        is.close();
    }
    Output: Verified the output from the database table which is inserted in the loop above.
    select col1 from test_umlaut where sno=300
    ýýý
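    A likely cause worth checking (an assumption, not something verified in this thread): UTL_RAW.CAST_TO_RAW keeps the literal's bytes in the database character set, so if that set is not UTF-8, reading those bytes back as UTF-8 in Java yields replacement characters. A quick diagnostic:

    -- What bytes were actually stored, and what is the database character set?
    SELECT rawtohex(dbms_lob.substr(col3, 2000, 1)) FROM test_umlaut WHERE sno = 200;
    SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

    If the stored bytes turn out to be single-byte (e.g. WE8ISO8859P1), construct the Java reader with that encoding instead, e.g. new InputStreamReader(is, "ISO-8859-1").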

  • Parsing BLOB (CSV file with special characters) into table

    Hello everyone,
    In my application, a user uploads a CSV file (it is stored as a BLOB), which is later read and parsed into a table. The parsing engine is shown below.
    The problem is that it won't read national characters such as Ö, Ü etc.; they simply disappear.
    Is there any CSV parser that supports national characters? Or, said in other words, is it possible to read a BLOB character by character (where the characters can be Ö, Ü etc.)?
    Regards,
    Adam
      /*-----------------------------------------------
       |
       | helper function for csv parsing
       |
       +-----------------------------------------------*/
      FUNCTION hex_to_decimal(p_hex_str in varchar2) return number
      --this function is based on one by Connor McDonald
        --http://www.jlcomp.demon.co.uk/faq/base_convert.html
       is
        v_dec number;
        v_hex varchar2(16) := '0123456789ABCDEF';
      begin
        v_dec := 0;
        for indx in 1 .. length(p_hex_str) loop
          v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
        end loop;
        return v_dec;
      end hex_to_decimal;
      /*-----------------------------------------------
       |
       | csv parsing
       |
       +-----------------------------------------------*/
      FUNCTION parse_csv_to_imp_table(in_import_id in number) RETURN boolean IS
        PRAGMA autonomous_transaction;
        v_blob_data   BLOB;
        n_blob_len    NUMBER;
        v_entity_name VARCHAR2(100);
        n_skip_rows   INTEGER;
        n_columns     INTEGER;
        n_col         INTEGER := 0;
        n_position    NUMBER;
        v_raw_chunk   RAW(10000);
        v_char        CHAR(1);
        c_chunk_len   number := 1;
        v_line        VARCHAR2(32767) := NULL;
        n_rows        number := 0;
        n_temp        number;
      BEGIN
        -- shortened
        n_blob_len := dbms_lob.getlength(v_blob_data);
        n_position := 1;
        -- Read and convert binary to char
        WHILE (n_position <= n_blob_len) LOOP
          v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, n_position);
          v_char      := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
          n_temp      := ascii(v_char);
          n_position  := n_position + c_chunk_len;
          -- When a whole line is retrieved
          IF v_char = CHR(10) THEN
            n_rows := n_rows + 1;
            if n_rows > n_skip_rows then
              -- Shortened
              -- Perform some action with the line (store into table etc.)
            end if;
            -- Clear out
            v_line := NULL;
            n_col := 0;
          ELSIF v_char != chr(10) and v_char != chr(13) THEN
            v_line := v_line || v_char;
            if v_char = ';' then
              n_col := n_col+1;
            end if;
          END IF;
        END LOOP;
        COMMIT;
        return true;
      EXCEPTION
         -- some exception handling
      END;

    Uploading CSV files into LOB columns and then reading them in PL/SQL has come up here before:
    http://forums.oracle.com/forums/thread.jspa?messageID=3454184
    Re: Reading a Blob (CSV file) and displaying the contents
    Re: Associative Array and Blob
    Number of rows in a clob
    ...doncha know.
    Anyway, it would help if you gave us some basic information: database version and NLS settings would seem particularly relevant here.
    Cheers, APC
    blog: http://radiofreetooting.blogspot.com
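    One alternative to decoding the BLOB byte by byte (which is exactly what loses multi-byte characters like Ö and Ü): convert the whole BLOB to a CLOB once and parse that. A sketch, assuming the uploaded file really is UTF-8; adjust the charset id to the actual file encoding:

    DECLARE
      v_blob     BLOB;   -- populated elsewhere, as in the original code
      v_clob     CLOB;
      v_dest_off INTEGER := 1;
      v_src_off  INTEGER := 1;
      v_lang     INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      v_warn     INTEGER;
    BEGIN
      DBMS_LOB.CREATETEMPORARY(v_clob, TRUE);
      DBMS_LOB.CONVERTTOCLOB(v_clob, v_blob, DBMS_LOB.LOBMAXSIZE,
                             v_dest_off, v_src_off,
                             NLS_CHARSET_ID('AL32UTF8'),  -- source encoding: an assumption
                             v_lang, v_warn);
      -- now walk v_clob with DBMS_LOB.SUBSTR / DBMS_LOB.INSTR; national
      -- characters survive because the conversion is charset-aware
    END;
    /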

  • Problem import csv file with SQL*loader and control file

    I have a *.csv file looking like this:
    E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
    E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
    E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
    E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
    E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
    E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
    I want to import this csv file to this table:
    create table artikel (artnr varchar2(10), namn varchar2(25), fp_storlek number, datum date, mtrlid varchar2(5), pris number);
    My controlfile looks like this:
    LOAD DATA
    INFILE 'e:\test.csv'
    INSERT
    INTO TABLE ARTIKEL
    FIELDS TERMINATED BY ';'
    TRAILING NULLCOLS
    (ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
    I can't get SQL*Loader to import the last column (pris) the way I want. It ignores my decimal separator, which in this case is "," and not "." - maybe this is the problem. If the decimal separator is the problem, how can I get Oracle to recognize "," as a decimal separator?
    The result of the import right now is that a decimal number (37,2) becomes 372 in the table.

    Set the NLS_NUMERIC_CHARACTERS environment variable at OS level before running SQL*Loader:
    $ cat test.csv
    E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
    E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
    E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
    E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
    E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
    E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
    $ cat artikel.ctl
    LOAD DATA
    INFILE 'test.csv'
    replace
    INTO TABLE ARTIKEL
    FIELDS TERMINATED BY ';'
    TRAILING NULLCOLS
    (ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
    $ sqlldr scott/tiger control=artikel
    SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:01 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Commit point reached - logical record count 6
    $ sqlplus scott/tiger
    SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:11 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from artikel;
    ARTNR      NAMN                      FP_STORLEK DATUM      MTRLI       PRIS
    E0100070   EKKJ 1X10/10 1 KV                  1 16/06/2003 01C           75
    E0100075   EKKJ 1X10/10 1 KV                500 16/06/2003 01C           67
    E0100440   EKKJ 2X2,5/2,5 1 KV                1 16/06/2003 01C          372
    E0100445   EKKJ 2X2,5/2,5 1 KV              500 16/06/2003 01C          332
    E0100450   EKKJ 2X4/4 1 KV                    1 16/06/2003 01C           53
    E0100455   EKKJ 2X4/4 1 KV                  500 16/06/2003 01C          471
    6 rows selected.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    $ export NLS_NUMERIC_CHARACTERS=',.'
    $ sqlldr scott/tiger control=artikel
    SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:41 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Commit point reached - logical record count 6
    $ sqlplus scott/tiger
    SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:45 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from artikel;
    ARTNR      NAMN                      FP_STORLEK DATUM      MTRLI       PRIS
    E0100070   EKKJ 1X10/10 1 KV                  1 16/06/2003 01C           75
    E0100075   EKKJ 1X10/10 1 KV                500 16/06/2003 01C           67
    E0100440   EKKJ 2X2,5/2,5 1 KV                1 16/06/2003 01C         37,2
    E0100445   EKKJ 2X2,5/2,5 1 KV              500 16/06/2003 01C         33,2
    E0100450   EKKJ 2X4/4 1 KV                    1 16/06/2003 01C           53
    E0100455   EKKJ 2X4/4 1 KV                  500 16/06/2003 01C         47,1
    6 rows selected.
    SQL>
    The control file is exactly the same as yours; I just put replace instead of insert.
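    An alternative that avoids touching the environment, if changing it is not an option: TO_NUMBER accepts the NLS setting as a third parameter, so it can go directly into the control file (a sketch; note the doubled single quotes inside the double-quoted expression):

    pris char "to_number(:pris, '999999D99', 'NLS_NUMERIC_CHARACTERS='',.''')"

    This scopes the decimal-comma handling to that one column instead of the whole loader session.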

  • Imported source file name changed for the external table

    Hi,
    I have an external table built from a flat file. Now the source file name has changed. I can change the file location to point to the new file name, but I can't update the source for the flat file property (as seen under flat file properties, Structure tab, the column "Sampled From"). Is there a way to make this update without having to import the file using the new file name again?

    In the Files section in the object navigator find the flat file you want to change the source of.
    Double click on the actual file, and on the General tab change the old file name to the new one.
    The fact that it has been sampled from a different file should not 'disturb' the working of the external table.
    However, if the structure of the new file is different, I'd suggest resampling.
    Good luck, Patrick

  • How to import csv-file in Numbers 3.2.2.

    I started using Numbers instead of Excel. I would like to import csv-files from my bank, but when I open the csv-file in Numbers, everything is imported into the same cell. I composed a test file in Pages: 01/08/2014,”text”,”more text”,”even more text”, exported it to a text file, and changed the extension from .txt to .csv. It did not help; everything was in the same cell. What must be changed to import csv-files successfully? I am using Numbers 3.2.2 on an iMac with a 2,8 GHz Intel Core i7 processor and 8 GB 1067 MHz DDR3 memory, running OS X 10.9.4.
    Thanks, Joan Voormolen

    You can do this using Pages. Without using outside scripts or functions. The Pages Find/Replace function will let you change the delimiter on the data in your file.
    Open the file in Pages. Click Show Invisibles. (this will show you the delimiter used in the file)
    If you see a * as the delimiter, that is a space. Some data files are space delimited. This is a really poor way to delimit numerical data files.
    If you see a fat arrow to the right, the file is Tab delimited
    Obviously, a comma is not a hidden character. Some files are comma delimited
    Whatever else might have been used as a delimiter (for example, a semicolon is sometimes used) will be apparent.
    The delimiter should be something that is not used anywhere else in the "data"... text, numbers, etc., you want to delimit. Numbers considers a comma a valid delimiter for files with the suffix .csv. It considers a tab a valid delimiter for files with the suffix .txt. It does not consider spaces a valid delimiter with any file suffix. But some programs use odd delimiters (semicolon, colon, double spaces, etc.).
    Use the Find command, then Find/Replace, as needed, to create the delimiter Numbers recognizes. Let's say a semicolon was used as a delimiter. Enter the current delimiter (semicolon) into the Find box. Pages should highlight all the instances of your entry. Enter a comma (to create a comma-delimited data file) in the Replace box. You should now see a comma as the delimiter.
    Important: don't forget, any other comma used in the file will also be considered a delimiter (a comma in 1,000 for example). So check the data. If you see a comma used another way, you will want to eliminate that BEFORE you do the "comma as delimiter" replacement. If you have 1,000, do a find/replace with comma as the find and nothing as the replace first. THEN do the replacement of the semicolon.
    Now comes the "tricky" part from what I could see. You want to save this new file with a suffix of .csv. (Export the file) Numbers will only open a comma delimited file with separated data (by comma) if it's suffix is .csv. Pages only gives you limited export options and puts the file suffix on for you automatically. CSV is not one of the options!
    Choose Text. Pages will name the file .txt. Quit Pages. Go to the file on your desktop (or wherever you saved it). Change the file suffix from .txt to .csv.
    That's it. Open the file with Numbers. Numbers will create a separate column for everything between the commas.
    You can use this same method to alter your data file before you import it into Numbers. For example, one file I wanted to import had time=xxx . I only wanted the actual time, not the text attached to it, in my spreadsheet. I did a find/replace with "time=" as the find. A comma as the replace. Even though "time=xxx" is one "word", Pages identified the "time=" within the word to allow the replacement.
    Numbers does not provide a "choose delimiter" function when opening a file. Instead it automatically uses the standard delimiter based on the file suffix. CSV means Comma, so if the file is named .csv it will only look for and use a comma as the delimiter to put the data into separate columns. I believe .txt uses only a tab as the delimiter. In the above example you could find/replace to a Tab. Then Export to Text. And numbers will open the data into columns the way you want, without the extra step of renaming the file on your desktop.
    While some files use a second space (ie two in a row) as a delimiter that's a nasty way to delimit. You always want a specific delimiter that is not used within the data element.
    The above is to import numerical data into separate columns. You could use the same method to manipulate a file that contains text. Let's say you had a file with the suffix .txt. In the file are names and addresses: John Smith 246 Rose Road. You want Name in one column, Address in another. Look at all the spaces: which ones should be delimiters and which not? Are there any delimiters in the file?
    If you open with Pages and choose show invisibles you can see. You might see John Smith --> 246 Rose Road. (the --> will look like a fat arrow in Pages). Numbers will open this file, IF it has .txt as the suffix, based on the Tab,  with name in one column, Address in another.
    Or you might see John*Smith**246*Rose*Road. Even though the creator of this intended two spaces to be a delimiter Numbers does not recognize that. Numbers will put everything into one column. The fix? In Pages, put a tab between name and address. Find/replace two spaces with Tab. Export, as Text.
    Based on what you see (with show invisible active) in Pages, you can use the Find/Replace function to create the specific delimiter you want (tab or comma). You can use that function to manipulate the file easily so the data you want shows up in separate columns. You may need to get clever to accomplish the unique delimiters. You might even need to do two passes with Find/Replace.
    In the instance above, if there was only one space between each element (not two as a pseudo-delimiter), you could replace all spaces with a tab in Pages and export as Text. Numbers will open that file with a column for each word (one for John, one for Smith). Then "Merge" the two cells (columns) you want to put back together.

  • How to import data from CSV file with columns separated by semicolon?

    I am migrating a database from MS SQL 2008 to Oracle 11g.
    I exported the data to CSV files from MS SQL, then tried to import them into Oracle.
    Several tables went fine using the Import Data option in SQL Developer.
    Standard CSV files with data separated by commas were imported.
    Chars, dates (with a format string), and integer data are imported via the import wizard without problems.
    The problems came when I tried to import a table with non-integer numbers whose decimal part is separated by a comma, not by a dot.
    The comma is the standard column separator in a CSV file, so I had to change the column separator to a semicolon.
    Then the import wizard had trouble recognizing the column data correctly, because it uses only the standard CSV comma separator :-/
    In SQL Developer 1.5.3, Tools -> Preferences -> Migration -> Data Move Options, I changed "End of Column Delimiter" to ';' but it doesn't work.
    Is it possible to change the standard column separator for the Import Data wizard in SQL Developer 1.5.3?
    Or maybe someone knows how to import data in SQL Developer 1.5.3 from a CSV file whose column separator is a semicolon?

    A new preference has been added to customize the import delimiter in the main code line. This should be available as part of a future release.
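    Until that release, one workaround outside the wizard is an external table over the semicolon-separated file, converting the decimal commas per column. A sketch (table, directory, and file names are made up, reusing the artikel example from the SQL*Loader thread above):

    CREATE TABLE artikel_ext (
      artnr VARCHAR2(10),
      pris  VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dump_dir
      ACCESS PARAMETERS
      (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ';'
      )
      LOCATION ('artikel.csv')
    )
    REJECT LIMIT UNLIMITED;

    SELECT artnr,
           to_number(pris, '999999D99', 'NLS_NUMERIC_CHARACTERS='',.''') AS pris
      FROM artikel_ext;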

  • Get 'Generic Error' when attempting to import .mov files with transparency?

    I get the 'Generic Error' when attempting to import .mov files with transparency. I'm trying to use mattes with an alpha channel, but I cannot import them. I have had this same issue since CS6; now I have it with CC 2014. Any clues would be most appreciated.
    I have uninstalled the DVCPro Codec, but it didn't help.
    I have tried converting the .mov file (which is now automatic in Yosemite) and this creates a ProRes4444 Codec, Again no luck with import.
    I've seen this issue on the web, but never a solution. Anyone?

    Hi Terence,
    I tried iPhoto Library Manager but it could not solve my problem. Opening the iPhoto library I can see that the references to the pictures are pointing to my old location, not to the new one. I considered running a script that would change all pointers, since this is basically what is needed (in my case from /Volumes/RAID1/Fotos/XXX to /Volumes/Fotos/XXX). Instead I inserted a soft unix link to make the connection, but that did not work. It is referring to the airport disk by the airport name rather than the mounted disk name. Very strange indeed. The problem is maybe not in iPhoto but in OSX? Anyway, maybe the only way out is to take some hours and manually run through all linked photos... cumbersome and annoying!
    Regards,
    Søren

  • Problem importing a CSV file with forward slashes in a column

    I have an Excel csv file of a product database (containing about 6500 products) that contains product codes with forward slashes such as 499/1, 499/3, 499/5.
    These are different sizes of a product and as such have different prices etc.
    When I import the file into Numbers these numbers appear as 499, 166 1/3, and 99 4/5 respectively.
    What seems to be happening is that Numbers is interpreting the forward slash (/) as a divide command and performing a calculation on that number on import, totally changing the value of the cell, so that it is impossible to look up the price related to a product because the product code no longer exists.
    Excel can import these files with no problems; why can't Numbers treat each cell as text and leave it alone on import?
    Is there any way round this, or do I have to revert to using Excel for the import of csv files?
    Thanks
    Steve

    I know I'm a bit(!) late (a year) coming to this party, but there is a simple solution that worked for me: enclose the field in double quotes, and add a single quote before the number:
    Instead of
    499/1,"Super Widget 3",12.34
    do
    "'499/1","Super Widget 3",12.34
    In fact the single (unclosed) quote without double-quotes works as well:
    '499/1,"Super Widget 3",12.34
    I've always found it better to enclose strings with double quotes. This works when loading the file into Mac Numbers, and should work with Excel too, if that helps. It opens with OpenOffice 4 on the Mac too, if you select "comma" in the "Separated by" checkbox.
    I can't remember where I picked this info up....
    Hope this helps someone, albeit late.
    Andy

  • Where to import CSV file using HANA STUDIO version 1.0.26

    Hi Experts,
    In this version of HANA Studio, I have no idea how to import a CSV file, as it looks different from the HANA Academy video. I hope someone can show me the method. Thanks!

    Hi Krishna Tangudu,
    How do I use the IMPORT command you mentioned?
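    For reference, a sketch of the server-side SQL variant of that command (path, schema, and table names are assumptions; the file must be readable by the HANA server itself):

    IMPORT FROM CSV FILE '/usr/sap/HDB/HDB00/work/data.csv'
    INTO "MYSCHEMA"."MYTABLE"
    WITH RECORD DELIMITED BY '\n'
         FIELD DELIMITED BY ',';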

  • Uploading csv file with number type data to database using apex

    Hi,
    I am trying to upload a CSV file to an Oracle database using APEX: I select the file using the file browser and click on a button.
    My table looks like:
    column  type
    col1    number(2)
    col2    number(2)
    col3    number(2)
    col4    number(2)
    Please tell me the steps I need to follow.
    Urgent requirement.

    This thread should help - Load CSV file into a table when a button is clicked by the user
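    The usual pattern behind that thread, as a heavily hedged sketch (the upload table name varies by APEX release, and :P1_FILE is a hypothetical page item): the browser upload lands in an APEX table as a BLOB, which is then parsed much like parse_csv_to_imp_table earlier on this page.

    DECLARE
      v_blob BLOB;
    BEGIN
      -- on older APEX releases the upload lands in WWV_FLOW_FILES
      SELECT blob_content
        INTO v_blob
        FROM wwv_flow_files
       WHERE name = :P1_FILE;
      -- parse v_blob line by line and INSERT the four NUMBER(2) columns
    END;
    /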

  • Import CSV file to SP List via Powershell

    Hi there,
    I am having trouble writing a correct function in the script to achieve what I want.
    I export and import a CSV file on a regular basis, and that way I keep 2 SharePoint lists (external and native) in sync.
    When I import the values, I want PowerShell to:
    1. See if the item exists; if not, add a new item - THAT CURRENTLY WORKS
    2. If the item exists, compare its columns, and if there is a new value in the CSV, update the SP list with the updated values just for the items that have changed - DO NOT KNOW HOW TO DO IT
    Editing just the items that have changed would help me keep version history under control.
    See my script so far. How can I modify it to accomplish point 2 above?
    $csvVariable = Import-CSV -path "\\fileshare\folder\export.csv"
    # Destination site collection
    $WebURL = "https://intranet.contoso.com"
    # Destination list name
    $listName = "SP Native List"
    # Get the SPWeb object and save it to a variable
    $webDestination = Get-SPWeb -identity $WebURL
    # Get the SPList object to retrieve the list
    $list = $webDestination.Lists[$listName]
    # Get all items in this list and save them to a variable
    $items = $list.items
    # loop through csv file
    foreach ($row in $csvVariable) {
        $updated = 0
        # loop through SharePoint list
        foreach ($item in $items) {
            if ($item["EquipmentID"] -eq $row."EquipmentID") {
                $updated++
            }
        }
        # add new item if an update wasn't made
        if ($updated -eq 0) {
            $newItem = $list.items.Add()
            $newItem["UniqueRef"] = $row."UniqueRef"
            $newItem["Safety"] = $row."Safety"
            $newItem["Comment"] = $row."Comment"
            $newItem["Serial"] = $row."Serial"
            $newItem["Vendor"] = $row."Vendor"
            $newItem["Active"] = $row."Active"
            $newItem["Model"] = $row."Model"
            $newItem["Description"] = $row."Description"
            $newItem["EquipmentID"] = $row."EquipmentID"
            $newItem["Code"] = $row."Code"
            $newItem.Update()
        }
    }
    $webDestination.Dispose()
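    For point 2, a hedged sketch (column names taken from the script above; untested): when the item is found, compare each field and call Update() only if something actually differs, so you get at most one new version per changed item. This would replace the inner foreach loop in the original script:

    foreach ($item in $items) {
        if ($item["EquipmentID"] -eq $row."EquipmentID") {
            $updated++
            $changed = $false
            foreach ($col in "UniqueRef","Safety","Comment","Serial","Vendor","Active","Model","Description","Code") {
                # compare as strings, since CSV values are always strings
                if ([string]$item[$col] -ne [string]$row.$col) {
                    $item[$col] = $row.$col
                    $changed = $true
                }
            }
            if ($changed) { $item.Update() }
        }
    }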

    The problem with Word is that it needs the same version of Outlook (2010) or you'll get that or similar errors. Import should work because it doesn't need to find Outlook; it should know where it is. That points to a problem with the import function.
    See if this helps -
    Open the CSV file in Notepad and use File, Save As, then select ANSI from the Encoding dropdown and Save using the existing file name. Also, if you haven't repaired Outlook 2013's install, do that in Control Panel, Programs and Features.
    Diane Poremsky [MVP - Outlook]
    Outlook & Exchange Solutions Center
    Outlook Tips
    Subscribe to Exchange Messaging Outlook weekly newsletter

Maybe you are looking for

  • "Display ATI Radeon Family driver stopped responding and has recovered"

    For the past few days my laptop Model HP G61-409CA has been displaying broken black and white horizontal lines and black, white, and coloured random squares on my screen. When it freezes the screen and mouse stops moving and the screen goes blank for

  • FLV plays on every server except the one I need

    I have a flash video, with its swf, html, flv, playback controls swf, and AC_RunActiveContent.js, all referenced with relative links so it shouldn't matter where I put it. I tried it on three different servers and locally. It works just fine on two o

  • Numbers 3.2 can't open a document

    I can't open a numbers document. I only get a message telling there's a new version of Numbers, but there isn't. I've got the latest one, Numbers 3.2. The document was created in March 2014 with the current version of Numbers at that moment, and I ha

  • Message output type

    Hi while I am checking client system  for Message schema/procedure, output type XX1 exists, but this output type not exists in F4. Also it is not available in table T685. I think output type XX1 not defined/created, than How it is available in proced

  • Benefits of using vendor partner function

    Hello Experts: I understand the general use of vendor partner function. But what are the benefits of using vendor partner function? Is it for ease of reporting? Is there a functionality down the line that will not be possible if a vendor partner func