Reading a csv file with a large number of columns

Hello
I have been attempting to read data from large CSV files with 38 columns by reading a line using readline and scanning the line buffer using scan.
The file size can be up to 100 MB.
Scan does not seem to support that large a number of fields.
Any suggestions on reading the 38 comma-separated fields? There is one header line in the file.
Thanks

See if strtok() is useful: http://www.elook.org/programming/c/strtok.html
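A minimal sketch of that approach in plain C (the file name, buffer size, and conversion step are only examples, not taken from the original post): read each line with fgets(), skip the single header line, and split on commas with strtok(). Note that strtok() collapses consecutive delimiters, so empty fields (",,") are silently skipped; if the data can contain empty or quoted fields, a strchr()-based scan is safer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_FIELDS 38          /* the 38 comma-separated columns */
    #define LINE_BUF   4096        /* assumed to be longer than any line */

    int main(void)
    {
        char line[LINE_BUF];
        char *fields[MAX_FIELDS];
        FILE *fp = fopen("data.csv", "r");   /* example file name */
        if (fp == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }
        /* skip the single header line */
        if (fgets(line, sizeof line, fp) == NULL) {
            fclose(fp);
            return EXIT_FAILURE;
        }
        while (fgets(line, sizeof line, fp) != NULL) {
            int n = 0;
            line[strcspn(line, "\r\n")] = '\0';   /* strip the line ending */
            for (char *tok = strtok(line, ","); tok != NULL && n < MAX_FIELDS;
                 tok = strtok(NULL, ","))
                fields[n++] = tok;
            /* fields[0] .. fields[n-1] now point into line[];
               convert with strtod()/atoi() as needed */
        }
        fclose(fp);
        return EXIT_SUCCESS;
    }

Reading line by line this way also keeps memory use flat even for a 100 MB file, since only one line is held at a time.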

Similar Messages

  • Old files with a large number of comments no longer work properly in either Adobe Acrobat DC or Acrobat Reader

    I made a series of interactive ebooks with Adobe Acrobat Pro. They worked fine in the old Adobe Reader. But now the commenting feature does not work properly in either Adobe Reader DC or Adobe Acrobat DC. When I open the comments, everything slows down or else I get blank pages. The tools are not usable. There seems to be some compatibility issue between what was done in these books with the commenting tools and the new applications. Has anybody experienced the same?

    Hi Anubha,
    Thanks for your reply. With the new Acrobat Reader DC, I have the same problem on two computers, one with Windows 7 and one with Windows 8.1. I downloaded a trial version of Adobe Acrobat DC on the computer with 8.1. Then I found out that I had the same problem with Acrobat DC as I did with Reader DC.
    I made these e-books with Adobe Acrobat Pro 11. I never had a problem with Pro 11 or with previous Readers. Also, I have sent these e-books out to other people who had Reader 11 and they had no problems. Now it seems that the books are not usable, at least not in the fullest way, for me or for anyone else using the latest version of Reader.
    Any help would be appreciated. I have put a lot of work into these e-books.

  • Reading a CSV file with PL/SQL

    Hello:
    I have a CSV file provided to me by a client. We want to insert the data from this file into a table and perform other processing.
    Is it possible to call a CSV file within PL/SQL? If so, how do I begin?
    Thanks.

    Nawneet wrote:
    A CSV file can be loaded using SQL*Loader; go through the link below:
    http://www.orafaq.com/wiki/SQL*Loader_FAQ
    SQL*Loader doesn't work very well from PL/SQL, though. A better fit is an external table:
    http://www.morganslibrary.org/reference/externaltab.html

  • Problem importing a CSV file with forward slashes in a column

    I have an Excel csv file of a product database (contains about 6500 products) that contains product codes with forward slashes such as 499/1, 499/3, 499/5.
    These are different sizes of a product and as such have different prices etc.
    When I import the file into Numbers these numbers appear as 499, 166 1/3, and 99 4/5 respectively.
    What seems to be happening is that Numbers is interpreting the forward slash (/) as a divide command and performing a calculation on that number on import, totally changing the value of the cell, so that it is impossible to look up the price related to a product because the product code no longer exists.
    Excel can import these files with no problems, so why can't Numbers treat each cell as text and leave it alone on import?
    Is there any way round this, or do I have to revert to using Excel for the import of csv files?
    Thanks
    Steve

    I know I'm a bit(!) late (a year) coming to this party, but there is a simple solution that worked for me: enclose the field in double quotes and add a single quote before the number:
    Instead of
    499/1,"Super Widget 3",12.34
    do
    "'499/1","Super Widget 3",12.34
    In fact the single (unclosed) quote without double-quotes works as well:
    '499/1,"Super Widget 3",12.34
    I've always found it better to enclose strings with double quotes. This works on loading the file into Mac Numbers and should work with Excel too, if that helps. It also opens with OpenOffice 4 on the Mac if you select "comma" in the "Separated by" checkbox.
    I can't remember where I picked this info up....
    Hope this helps someone, albeit late.
    Andy

  • Create external table on a CSV file with a variable number of delimiters

    Hi experts,
    I was wondering what the best approach for the following issue.
    I'm trying to create an external table on a file which has in each record (on a new line, 'RECORDS DELIMITED BY NEWLINE') a variable number of delimiters. For example, my delimiter is a comma: in the first record I have 100 commas, in the second only 60, in the next 80, etc.
    Is there a way to create an external table on this file?
    Thanks in advance.

    Altering the source is not an option, unfortunately. But it was suggested there would be a sensible way to handle it. To solve it I want to pivot it, but first I think I need to get the file into Oracle through an external table.
    Edited by: Jonathan Wisgerhof on Mar 31, 2009 12:21 PM

  • Pivot table with very large number of columns

    Hello,
    here is the situation:
    One table contains raw data; from this table I feed another with extracted information (3 fields); I have to turn the content into a pivot table:
    Ro --- Co --- Va
    A A 1
    A B 1
    A C 2
    B A 11
    Turned in
    A B C...
    A 1 1 2
    B 11 null null
    To do this I do a query like:
    select r, sum(decode(c,'A',Va)) COLA, sum(decode(c,'B',Va)) COLB, sum(decode(c,'C',Va)) COLC, .... sum(decode(c,'XYZ',Va)) COLXYZ from table group by r
    The statement is generated by a script (cfmx) and it works until I reach a query that tries to have 672 values for c, which means 672 columns...
    Oracle doesn't like that: ORA-01467: sort key too long
    I like this approach as it gets the result fast.
    I have tried a different solution at the CFMX level for that specific query, but I got a timeout (querying the table with a loop on Co within a loop on Ro).
    Is there any work around?
    I am using Oracle 9i.
    Thank you!

    insert into extracted_data select c, r, v, p from full_data where <specific_clause>
    The values for C are from a query: select distinct c from extracted_data
    and it is the same for R
    R and C are varchar2(3999)
    I suppose that I can split on the first letter of the C column as:
    SELECT r, low.cola, low.colb, . . ., low.colm,
    high.coln, high.colo, . . ., high.colz
    FROM (SELECT r, SUM(DECODE(c, 'A', va)) cola, . . .
    SUM(DECODE(c, 'M', va)) colm
    FROM table
    WHERE c like 'A%'
    GROUP BY r) Alpha_A,
    (SELECT r, SUM(DECODE(c, 'N', va)) coln, . . .
    SUM(DECODE(c, 'Z', va)) colz
    FROM table
    WHERE c like 'B%'
    GROUP BY r) Alpha_B,
    (SELECT r, SUM(DECODE(c, 'N', va)) coln, . . .
    SUM(DECODE(c, 'Z', va)) colz
    FROM table
    WHERE c like 'C%'
    GROUP BY r) Alpha_C
    (SELECT r, SUM(DECODE(c, 'zN', va)) coln, . . .
    SUM(DECODE(c, 'zZ', va)) colz
    FROM table
    WHERE c like 'Z%'
    GROUP BY r) Alpha_Z
    WHERE alpha_A.r = alpha_B.r and alpha_A.r = alpha_C.r ... and alpha_A.r = alpha_Z.r
    I will have 27 select statements joined... I have to check whether even like that I will not reach the limit within one of the select statements
    "in real life"
    select GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1 from
    (select r, sum(decode(C, 'Wall, unspecified',cases)) W0 from tmp_maqueje where upper(C) like 'W%' group by r) GRPW,
    (select r,
    sum(decode(C, 'Ceramic tiles, indoors',cases)) C0,
    sum(decode(C, 'Cement surface, outdoors (Concrete/cement block, see Structural element, A11)',cases)) C1
    from tmp_maqueje where upper(C) like 'C%' group by r) GRPC
    where GRPW.r = GRPC.r
    order by GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1
    Message was edited by:
    maquejp

  • How to read/write .CSV file into CLOB column in a table of Oracle 10g

    I have a requirement which is simply a table with two columns:
    create table emp_data (empid number, report clob)
    Here the REPORT column is of CLOB data type and is used to hold the data loaded from the .csv file.
    The requirement here is
    1) How to load data from the .CSV file into the CLOB column, along with empid, using the DBMS_LOB utility?
    2) How to read the report column so that it returns all the columns present in the .CSV file (dynamically, because every csv file may have a different number of columns), along with the primary key empid?
    eg: empid report_field1 report_field2
    1 x y
    Any help would be appreciated.

    If I understand you right, you want each row in your table to contain an emp_id and the complete text of a multi-record .csv file.
    It's not clear how you relate emp_id to the appropriate file to be read. Is the emp_id stored in the csv file?
    To read the file, you can use functions from [UTL_FILE|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#BABGGEDF] (as long as the file is in a directory accessible to the Oracle server):
    declare
        lt_report_clob CLOB;
        l_max_line_length integer := 1024;   -- set as high as the longest line in your file
        l_infile UTL_FILE.file_type;
        l_buffer varchar2(1024);
        l_emp_id report_table.emp_id%type := 123; -- not clear where emp_id comes from
        l_filename varchar2(200) := 'my_file_name.csv';   -- get this from somewhere
    begin
       -- open the file; we assume an Oracle directory has already been created
        l_infile := utl_file.fopen('CSV_DIRECTORY', l_filename, 'r', l_max_line_length);
        -- initialise the empty clob
        dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
        loop
          begin
             utl_file.get_line(l_infile, l_buffer);
             dbms_lob.append(lt_report_clob, l_buffer);
          exception
             when no_data_found then
                 exit;
          end;
        end loop;
        insert into report_table (emp_id, report)
        values (l_emp_id, lt_report_clob);
        -- free the temporary lob
        dbms_lob.freetemporary(lt_report_clob);
       -- close the file
       UTL_FILE.fclose(l_infile);
    end;
    This simple line-by-line approach is easy to understand, and gives you an opportunity (if you want) to take each line in the file and transform it (for example, you could transform it into a nested table, or into XML). However it can be rather slow if there are many records in the csv file - the lob_append operation is not particularly efficient. I was able to improve the efficiency by caching the lines in a VARCHAR2 up to a maximum cache size, and only then appending to the LOB - see [three posts on my blog|http://preferisco.blogspot.com/search/label/lob].
    There is at least one other possibility:
    - you could use [DBMS_LOB.loadclobfromfile|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i998978]. I've not tried this before myself, but I think the procedure is described [here in the 9i docs|http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl12bfl.htm#879711]. This is likely to be faster than UTL_FILE (because it is all happening in the underlying DBMS_LOB package, possibly in a native way).
    That's all for now. I haven't yet answered your question on how to report data back out of the CLOB. I would like to know how you associate employees with files; what happens if there is > 1 file per employee, etc.
    HTH
    Regards Nigel
    Edited by: nthomas on Mar 2, 2009 11:22 AM - don't forget to fclose the file...

  • Data formatting and reading a CSV file without using Sqlloader

    I am reading a csv file to an Oracle table called sps_dataload. The table is structured based on the record type of the data at the beginning of
    each record in the csv file. But the first two lines of the file are not going to be loaded to the table due to the format.
    Question # 1:
    How can I skip reading the first two lines from my csv file?
    Question # 2:
    There are more fields in the csv file than there are columns in my table. I know I can add FILLER as an option, but there are about 150-odd fields which are comma-separated in the file and my table has 8 columns to load from the file. So, do I really have to use FILLER 140 times in my script, or is there a better way to do this?
    Question # 3:
    This is more of an extension of my question above. The csv file has fields wrapped in double quotes - I know this can be achieved in SQL*Loader with OPTIONALLY ENCLOSED BY '"'.
    But is this doable in the insert as written in the code below?
    I am trying to find the "wrap code" button in my post, but do not see it.
    Here's my file layout -
    PROSPACE SCHEMATIC FILE
    ; Version 2007.7.1
    Project,abc xyz Project,,1,,7,1.5,1.5,1,1,0,,0,1,0,0,0,0,3,1,1,0,1,0,0,0,0,2,3,1,0,1,0,0,0,0,3,3,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    Subproject,25580-1005303.pst,,102,192,42,12632256,1,1,102,192,42,1,12632256,0,6,1,0,32896,1,0,0,0,0,,,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,0,1,1,,,,,,1
    Segment, , , 0, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, , , , , , , , , , , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, , , 1
    Product,00093097000459,26007,2X4 MF SF SD SOLR,,28.25,9.5,52.3, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.60,0,0,0,0,00,-1,0
    Product,00093097000329,75556,"22""X22"" BZ CM DD 1548",,27,7,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,345.32
    Product,00093097000336,75557,"22""X46"" BZ CM XD 48133",,27,7.5,51, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,0
    Product,00093097134833,75621,"22""X22"" BZ CM/YT DD 12828",,27,9,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,1
    This is my table structure -
    desc sps_dataload;
    File_Name     Varchar2 (50) Not Null,
    Record_Layer Varchar2 (20) Not Null,     
    Level_Id     Varchar2 (20),
    Desc1          Varchar2 (50),
    Desc2          Varchar2 (50),
    Desc3          Varchar2 (50),
    Desc4          Varchar2 (50)
    Here's my code to do this -
    create or replace procedure insert_spsdataloader(p_filepath IN varchar2,
    p_filename IN varchar2,
    p_Totalinserted IN OUT number) as
    v_filename varchar2(30) := p_filename;
    v_filehandle UTL_FILE.FILE_TYPE;
    v_startPos number; --starting position of a field
    v_Pos number; --position of string
    v_lenstring number; --length of string
    v_record_layer varchar2(20);
    v_level_id varchar2(20) := 0;
    v_desc1 varchar2(50);
    v_desc2 varchar2(50);
    v_desc3 varchar2(50);
    v_desc4 varchar2(50);
    v_input_buffer varchar2(1200);
    v_delChar varchar2(1) := ',';
    v_str varchar2(255);
    BEGIN
    v_Filehandle :=utl_file.fopen(p_filepath, p_filename, 'r');
    p_Totalinserted := 0;
    LOOP
    BEGIN
    UTL_FILE.GET_LINE(v_filehandle,v_input_buffer);
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    EXIT;
    END;
    -- this will read the 1st field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,1);
    v_lenString := v_Pos - 1;
    v_record_layer := substr(v_input_buffer,1,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 2nd field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,2);
    v_lenString := v_Pos - v_startPos;
    v_desc1 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 3rd field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,3);
    v_lenString := v_Pos - v_startPos;
    v_desc2 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 4th field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,4);
    v_lenString := v_Pos - v_startPos;
    v_desc3 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 5th field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,5);
    v_lenString := v_Pos - v_startPos;
    v_desc4 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    v_str := 'insert into sps_dataload values ('''||v_filename||''','''||v_record_layer||''','''||v_level_id||''','''||v_desc1||''','''||v_desc2||''','''||v_desc3||''','''||v_desc4||''')';
    Execute immediate v_str;
    p_Totalinserted := p_Totalinserted + 1;
    commit;
    END LOOP;
    UTL_FILE.FCLOSE(v_filehandle);
    EXCEPTION
    WHEN UTL_FILE.INVALID_OPERATION THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20051, 'sps_dataload: Invalid Operation');
    WHEN UTL_FILE.INVALID_FILEHANDLE THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20052, 'sps_dataload: Invalid File Handle');
    WHEN UTL_FILE.READ_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20053, 'sps_dataload: Read Error');
    WHEN UTL_FILE.INVALID_PATH THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20054, 'sps_dataload: Invalid Path');
    WHEN UTL_FILE.INVALID_MODE THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20055, 'sps_dataload: Invalid Mode');
    WHEN UTL_FILE.INTERNAL_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20056, 'sps_dataload: Internal Error');
    WHEN VALUE_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20057, 'sps_dataload: Value Error');
    WHEN OTHERS THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE;
    END insert_spsdataloader;
    /

    Justin, thanks. I did change my PL/SQL procedure to use utl_file.get_line and to adjust the instr calls based on the position of ',' in the file, but my procedure is getting really big and too complex to debug. So I got motivated to use external tables or SQL*Loader as plan B.
    As I read more, creating an external table looks like an efficient way to do this, and I believe I can perhaps build an external table with my varying selection from the file. But I am still unclear whether I can construct my external table by choosing different fields in a record based on a record identifier string value (which is the first field of any record). I guess I can, but I am looking for the construct for selecting the right fields from the file while creating the table.
    PROSPACE SCHEMATIC FILE
    ; Version 2007.7.1
    Project,abc xyz Project,,1,,7,1.5,1.5,1,1,0,,0,1,0,0,0,0,3,1,1,0,1,0,0,0,0,2,3,1,0,1,0,0,0,0,3,3,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    Subproject,25580-1005303.pst,,102,192,42,12632256,1,1,102,192,42,1,12632256,0,6,1,0,32896,1,0,0,0,0,,,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,0,1,1,,,,,,1
    Segment, , , 0, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, , , , , , , , , , , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, , , 1
    Product,00093097000459,26007,2X4 MF SF SD SOLR,,28.25,9.5,52.3, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.60,0,0,0,0,00,-1,0
    Product,00093097000329,75556,"22""X22"" BZ CM DD 1548",,27,7,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,345.32
    Product,00093097000336,75557,"22""X46"" BZ CM XD 48133",,27,7.5,51, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,0
    Product,00093097134833,75621,"22""X22"" BZ CM/YT DD 12828",,27,9,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,1
    For example, if I want to create an external table like this -
    CREATE TABLE extern_sps_dataload
    ( record_layer            VARCHAR2(20),
      attr1                   VARCHAR2(20),
      attr2                   VARCHAR2(20),
      attr3                   VARCHAR2(20),
      attr4                   VARCHAR2(20)
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dataload
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE
        BADFILE     dataload:'sps_dataload.bad'
        LOGFILE     dataload:'sps_dataload.log'
        DISCARDFILE dataload:'sps_dataload.dis'
        SKIP 2
        VARIABLE 2 FIELDS TERMINATED BY ',' 
        OPTIONALLY ENCLOSED BY '"' LRTRIM
        MISSING FIELD VALUES ARE NULL
        +LOAD WHEN RECORD_LAYER = 'PROJECT' (FIELD2, FIELD3,FIELD7,FIELD9)+
        +LOAD WHEN RECORD_LAYER= 'PRODUCT' (FIELD3,FIELD4,FIELD8,FIELD9)+
        +LOAD WHEN RECORD_LAYER= 'SEGMENT' (FIELD1,FIELD2,FIELD4,FIELD5)+
        LOCATION ('sps_dataload.csv')
    REJECT LIMIT UNLIMITED;
    While I was reading the external table documentation, I thought I could achieve similar things by using the position_spec option, but I cannot quite get my head around its parameters. I have highlighted in italics the part of the code above (from LOAD WHEN ... FIELDS ...) that I think I am going to use, but I am not sure of its construct.
    Thank you for your help!! Appreciate your thoughts on this..
    Sanders.

  • How to read a .csv file (Excel format) using Java.

    Hi Everybody,
    I need to read a .csv file (Excel) and store all the columns and rows in 2D arrays. Then I can do the rest of the coding myself. I would appreciate it if somebody could post their code to read .csv files here. The .csv file can have a different number of columns and a different number of rows every time it is run. The .csv file is in Excel format, so I don't know if that affects the code or not. I would also appreciate it if the classes imported are posted too. I would also like to know if there is a way I can find out how many rows and columns the .csv file has. I need this urgently, so I would be very grateful to anybody who has the solution. Thanks.
    Sincerely Taufiq.

    I used this:
    BufferedReader in = new BufferedReader(new FileReader("test.csv"));
    // and
    StringTokenizer parser = new StringTokenizer(str, ", ");
    while (parser.hasMoreTokens()) {
        // handle each token here
    }
    works like a charm!

  • How to read columns of a Microsoft Excel .csv file with LabVIEW?

    This should be simple.  Any ideas on how to read an Excel file with csv extension using LabVIEW?
    Thanks!

    Use the Read From Spreadsheet File VI, which you can find on the File I/O palette.
    "Read From Spreadsheet File
    Reads a specified number of lines
    or rows from a numeric text file beginning at a specified character offset and
    converts the data to a 2D, single-precision array of numbers. You optionally can transpose the
    array. The VI opens the file before reading from it and closes it afterwards.
    You can use this VI to read a spreadsheet file saved in text format. This VI
    calls the Spreadsheet String
    to Array function to convert the data."
    The CSV file is essentially a text file which uses commas as delimiters in the file's structure.

  • Read a csv file in a PL/SQL routine with the path as a parameter

    hello oracle community,
    System Windows 2008, Oracle 11.1 Database
    I have created a directory in the database and there is no problem reading a csv file from there. Since the files can be in different places, my Java developer is asking me if he can give me the path of the file as a parameter. As far as I know, I will need a directory to open the file with the UTL_FILE package, but I cannot create directories for folders that will only be created in the future. Is there any way to solve that problem, so I can use the path as a parameter in my routine?
    Ikrischer

    BluShadow wrote:
    Ikrischer wrote:
    as I said before, they are dynamic. As usual with most other companies, you get new partners and you lose partners, so new folders will be added and old ones will be deleted. If you use different directories for different folders, you have more work to handle them.
    I wouldn't call that dynamic, I would call that business maintenance. Dynamic would mean that the same company is putting their files into different folders each time or on a regular basis. If it's one folder per company then that's not dynamic.
    The folder structure is changing; I call that dynamic, it does not look static to me. But that's just a point of view, and it is not important what we call it. Let's just agree that the folder structure will change in the future, no matter how many folders one partner will have.
    BluShadow wrote:
    So you're receiving files and you don't even know the format of them? Some of them are not "correct"? Specify that the company must provide the files in the correct format or they will be rejected.
    Computers work on known logic in most cases, not some random guessing game.
    We know the correct format, but we also know that the partner won't always send the correct format, period. Because of that we want to deliver an error log: record 11, 15 and 20 is wrong because of... That is a very useful service.
    BluShadow wrote:
    The folders should be known.
    Impossible, we do not know the future.
    BluShadow wrote:
    The file format should be known.
    See above: it has a fixed format, but that's no guarantee you get the correct format. For me that's a normal situation; it happens every day that you have fixed rules and they will be broken.
    BluShadow wrote:
    The moment you allow external forces to start specifying things how they want it then you're shooting yourself in the foot. That's not good design and not good business practice.
    Maybe we are talking about different things. We do not allow data with a wrong format to be loaded into our system, but we do allow partners to send us a wrong format, and even to send us wrong content, and still we don't load it into our system. Format and content must both be correct to be loaded into the real system, and we check both. And with external tables you cannot check a wrong format with a detailed error log.
    Ikrischer

  • Problem reading csv file with special character

    Hi all,
    I have the following problem reading a csv file.
    442050-Operations Tilburg algemeen     Huis in  t Veld, EAM (Lisette)     Gebruikersaccount     461041     Peildatum: 4-5-2010     AA461041                    1     85,92
    When reading this line with FM GUI_UPLOAD, this line is split up into two lines in data_tab of the FM;
    it is split up at this character 
    Line 1
    442050-Operations Tilburg algemeen     Huis in
    Line 2
    t Veld, EAM (Lisette)     Gebruikersaccount     461041     Peildatum: 4-5-2010     AA461041                    1     85,92
    Does anyone have an idea how to get this into one line in my internal table?
    Greetz Richard

    Hi Greetz Richard
      Probably the character contains the same binary code as line feed + carriage return.
      You can use the statement below as a workaround.
    OPEN DATASET file FOR INPUT IN TEXT MODE ENCODING UNICODE
    In this case your system must support Unicode encoding
    Kind regards

  • Reading a CSV file from server

    Hi All,
    I am reading a CSV file from the server and my internal table has only one field, with length 200. In the input CSV file there is more than one column, and when splitting the file my internal table should have the same number of rows as there are columns in the input record.
    But when I do that, the last field in the internal table is appended with #.
    Can somebody tell me the solution for this?
    You can see my code below.
    data: begin of itab_infile occurs 0,
             input(3000),
          end of itab_infile.
    data: begin of itab_rec occurs 0,
             record(200),
          end of itab_rec.
    data: c_comma(1) value ','.
    open dataset p_ipath for input in text mode encoding default.
    if sy-subrc <> 0.
      write: /, 'FILE NOT FOUND'.
      exit.
    endif.
    do.
      read dataset p_ipath into itab_infile-input.
      if sy-subrc <> 0.
        exit.
      endif.
      split itab_infile-input at c_comma into table itab_rec.
    enddo.
    Thanks in advance.
    Sunil

    Sunil,
    You do not mention the platform on which the CSV file was created and the platform on which it is read.
    A common problem with CSV files created on MS/Windows platforms and read on unix is the end-of-record (EOR) characters:
    MS/Windows uses <CR><LF> as the EOR
    Unix uses just <LF>
    If on unix, open the file using vi in a telnet session to confirm the EOR type.
    The fix options:
    1) Before opening the file in your ABAP program, run the unix command dos2unix.
    2) Transfer the file from the MS/Windows platform to unix using FTP in ascii mode, not bin. This does the dos2unix conversion on the fly.
    3) Install SAMBA and share the load directory to the Windows platforms. SAMBA also handles the dos2unix and unix2dos conversions on the fly.
    Hope this helps
    David Cooper
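    Not specific to ABAP, but the same end-of-record issue bites when you parse such a file yourself in C, as in the question at the top of this thread. A small sketch of the code-side equivalent of dos2unix (the file name and buffer size are just examples): trim any trailing <CR> and <LF> from each line before splitting it, so the last field never carries a stray carriage return.

    #include <stdio.h>
    #include <string.h>

    /* Remove a trailing "\r\n", "\n" or "\r" in place. */
    static void chomp(char *line)
    {
        size_t len = strlen(line);
        while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
            line[--len] = '\0';
    }

    int main(void)
    {
        char line[4096];
        FILE *fp = fopen("input.csv", "r");   /* example file name */
        if (fp == NULL)
            return 1;
        while (fgets(line, sizeof line, fp) != NULL) {
            chomp(line);
            /* split on ',' here, e.g. with strtok() as in the first reply above */
        }
        fclose(fp);
        return 0;
    }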

  • Read a csv file, fill a dynamic table and insert into a standard table

    Hi everybody,
    I have a problem here and I need your help:
    I have to read a csv file and insert the data of it into a standard table.
    1 - On the parameter screen I have to indicate the standard table and the csv file.
    2 - I need to create a dynamic table. The same type of the one I choose at parameter screen.
    3 - Then I need to read the csv and put the data into this dynamic table.
    4 - Later I need to insert the data from the dynamic table into the standard table (the one on the parameter screen).
    How do I do this job? Do you have an example? Thanks.

    Here is an example program which shows how to upload a csv file from the frontend to a dynamic internal table. You can of course modify this to update your database table.
    report zrich_0002.
    type-pools: slis.
    field-symbols: <dyn_table> type standard table,
                   <dyn_wa>,
                   <dyn_field>.
    data: it_fldcat type lvc_t_fcat,
          wa_it_fldcat type lvc_s_fcat.
    type-pools : abap.
    data: new_table type ref to data,
          new_line  type ref to data.
    data: xcel type table of alsmex_tabline with header line.
    selection-screen begin of block b1 with frame title text .
    parameters: p_file type  rlgrap-filename default 'c:\Test.csv'.
    parameters: p_flds type i.
    selection-screen end of block b1.
    start-of-selection.
    * Add X number of fields to the dynamic itab catalog
      do p_flds times.
        clear wa_it_fldcat.
        wa_it_fldcat-fieldname = sy-index.
        wa_it_fldcat-datatype = 'C'.
        wa_it_fldcat-inttype = 'C'.
        wa_it_fldcat-intlen = 10.
        append wa_it_fldcat to it_fldcat .
      enddo.
    * Create dynamic internal table and assign to FS
      call method cl_alv_table_create=>create_dynamic_table
                   exporting
                      it_fieldcatalog = it_fldcat
                   importing
                      ep_table        = new_table.
      assign new_table->* to <dyn_table>.
    * Create dynamic work area and assign to FS
      create data new_line like line of <dyn_table>.
      assign new_line->* to <dyn_wa>.
    * Upload the excel
      call function 'ALSM_EXCEL_TO_INTERNAL_TABLE'
           exporting
                filename                = p_file
                i_begin_col             = '1'
                i_begin_row             = '1'
                i_end_col               = '200'
                i_end_row               = '5000'
           tables
                intern                  = xcel
           exceptions
                inconsistent_parameters = 1
                upload_ole              = 2
                others                  = 3.
    * Reformat to dynamic internal table
      loop at xcel.
        assign component xcel-col of structure <dyn_wa> to <dyn_field>.
        if sy-subrc = 0.
         <dyn_field> = xcel-value.
        endif.
        at end of row.
          append <dyn_wa> to <dyn_table>.
          clear <dyn_wa>.
        endat.
      endloop.
    * Write out data from table.
      loop at <dyn_table> into <dyn_wa>.
        do.
          assign component  sy-index  of structure <dyn_wa> to <dyn_field>.
          if sy-subrc <> 0.
            exit.
          endif.
          if sy-index = 1.
            write:/ <dyn_field>.
          else.
            write: <dyn_field>.
          endif.
        enddo.
      endloop.
    Regards,
    Rich Heilman

  • Read a csv file and read the fiscal yr in the 4th pos?

    Hello ABAP Experts,
    How do I write code to read a csv file and read the fiscal year in the 4th position?
    Any suggestions or code are highly appreciated.
    Thanks,
    BWer

    Hi Bwer,
    Declare table itab with the required fields...
    Use GUI_UPLOAD to get the contents of the file (say abc.csv) in case the file is on the presentation server...
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename                        = 'c:\abc.csv'
        FILETYPE                        = 'ASC'
        HAS_FIELD_SEPARATOR             = 'X'
      tables
        data_tab                        = itab
    EXCEPTIONS
       FILE_OPEN_ERROR                 = 1
       FILE_READ_ERROR                 = 2
       OTHERS                          = 3.
    IF sy-subrc <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
             WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    Use OPEN DATASET in case if the file is on the application server..
    After that, use the SPLIT command at the comma to get the contents of the 4th field...
    Regards,
    Tanveer.
    Please mark helpful answers
