Servlet reads a .csv file into IE, but why the extra quotes?

Hi, friends.
The following is the key method in my servlet, used to send a .csv file to IE. After the file arrives on the client side, I found that a quote (") had been added at both ends of every line, so the csv file can have only one column (the many columns were merged into one). That is not what I want. Can anyone spot the problem? Thanks for your enthusiasm!
void serveRemoteFile(String sFileName, String sContentType,
                     HttpServletRequest req, HttpServletResponse res,
                     StringBuffer sbLog, Runtime rt) {
    FileInputStream in = null;
    ServletOutputStream out = null;
    byte[] bBuf = null;
    int nLen;
    if (isBlank(sContentType) || isBlank(sFileName))
        return;
    res.setStatus(HttpServletResponse.SC_OK);
    res.setContentType(sContentType);
    res.setHeader("Content-Disposition", "inline;filename=temp.csv");
    try {
        in = new FileInputStream(sFileName);
        out = res.getOutputStream();
        bBuf = new byte[1024];
        while ((nLen = in.read(bBuf, 0, 1024)) != -1) {
            out.write(bBuf, 0, nLen);
        }
        // flush and close only after the whole file has been copied
        out.flush();
        out.close();
    } catch (FileNotFoundException e) {
        errorPage(req, res, sbLog);
    } catch (IOException e) {
        errorPage(req, res, sbLog);
    } finally {
        if (in != null) {
            try { in.close(); } catch (IOException ignored) { }
        }
    }
}

Excel uses a weird CSV file format. You can find more information about it and a Java library to read data from it here:
http://ostermiller.org/utils/ExcelCSV.html
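If the quotes are being added when the file is generated (for example by writing each whole row as one quoted value), the usual fix is to quote only the individual fields that need it and to double any embedded quotes. Here is a minimal sketch of Excel-style field quoting; the helper names are illustrative, not from any particular library:

// Quote a single CSV field the way Excel expects: wrap it in double
// quotes only when it contains a comma, a quote, or a line break,
// and double any embedded quotes.
static String csvField(String value) {
    if (value.contains(",") || value.contains("\"")
            || value.contains("\n") || value.contains("\r")) {
        return "\"" + value.replace("\"", "\"\"") + "\"";
    }
    return value;
}

// Join one row: csvRow("a", "b,c") yields a,"b,c"
static String csvRow(String... fields) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < fields.length; i++) {
        if (i > 0) sb.append(',');
        sb.append(csvField(fields[i]));
    }
    return sb.toString();
}

Written this way, a line such as a,"b,c",d opens in Excel as three columns instead of one.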

Similar Messages

  • LabVIEW not reading csv file properly.

    I'm trying to use the Read From Spreadsheet File VI to read the attached csv.
    It does a few things incorrectly:
    1) The first cell of the resulting array has ÿþ appended to the front of it; it seems like LabVIEW is grabbing some special character it shouldn't. (Notepad, Notepad++ and Excel can all open the file without issue, and none of them show ÿþ or anything like it.)
    2) It's mistreating the EOL characters as two line feeds, resulting in blank rows between each data row. (Looking at the csv in Notepad++ confirms that the lines end with a CR and an LF, same as EOL.)
    3) I'm reading it in as a string, because one of the columns is non-numeric, but then I strip that column out and attempt to convert the remaining string array into a numeric array, and this fails. It also fails if I pull out an individual element of the array and attempt to convert it to a number (in both cases using Fract/Exp String To Number). Below I have an example of trying to read the 2nd column of the 1st row (index 0,1), which is formatted as a string, "1.37238168716431"; converting that into a number gives 0, as you can see in the snapshot of the front panel.
    Solved!
    Go to Solution.
    Attachments:
    REPORT02.CSV 4 KB

    I guess the file is in Unicode; that doesn't help me very much.
    I don't use the "Read from Text File Function" - at least not directly, although it is called inside Read from Spreadsheet File. Wherever it is called, though, right-click doesn't show an option for Convert EOL; instead it's a terminal whose default is F, and as the terminal is disconnected, I'm assuming it's not converting EOL. So it's already "unchecked" in a sense.
    No idea how to convert the Unicode string to ASCII (no functions show up when I search for unicode or ascii in the manager). It displays just fine in the text box, so I'm not sure why it can be displayed without issue but not read as a number.
    Using LabVIEW 8.6, by the way.
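    For what it's worth, ÿþ is the UTF-16 little-endian byte-order mark (bytes 0xFF 0xFE) displayed as ANSI text, which would also explain the apparent doubled line endings: in UTF-16 every ASCII character is followed by a NUL byte, so an 8-bit reader sees the CR and the LF as two separate line breaks. Outside LabVIEW, a minimal Java sketch of reading such a file correctly (the attachment name from this thread stands in for the real path):

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    public class ReadUtf16Csv {
        public static void main(String[] args) throws IOException {
            // The "UTF-16" charset (unlike "UTF-16LE") detects and consumes
            // the BOM, so the first cell no longer starts with 0xFF 0xFE.
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream("REPORT02.CSV"), StandardCharsets.UTF_16))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line.split(",").length + " cells: " + line);
                }
            }
        }
    }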

  • Drivers to read CSV file

    Hi,
    In one of my applications I need to read a CSV file as a database and query its columns. Is there any driver that accepts a CSV file as a data source so that I can run queries against it?
    I know this works in Windows, but I am working on UNIX and need drivers that can be installed on Unix.
    Does anyone know how to do this, or can you provide any links? That would be very helpful for me.
    Please help me in resolving this problem.

    Hi
    You need to use a JDBC driver of Type 3 to do your task.
    Swaraj
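    Another option, offered here only as a sketch and not as advice from the thread, is a pure-Java CSV JDBC driver such as the open-source CsvJdbc project, which runs wherever a JVM does. The driver class and URL prefix below are CsvJdbc's as best I recall, so verify them against the version you download:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CsvQuery {
        public static void main(String[] args) throws Exception {
            Class.forName("org.relique.jdbc.csv.CsvDriver");
            // The "database" is just the directory holding the .csv files;
            // each file becomes a table named after the file minus ".csv".
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:relique:csv:/home/user/data");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM employees")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }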

  • Read CSV file in background

    I use OPEN DATASET to upload a CSV file in background, but the values I get still contain the quote (") and comma (,) characters.
    What am I doing wrong? Can I get an example?
    Thanks,
    Rebeka

    Hi,
    If I remember correctly, when using OPEN DATASET the file name cannot contain spaces. Try renaming your file to remove the spaces and retry.
    For your code, you'll need to break the CSV file down by its delimiter.
    This code writes an archive file while reading the main file:
    OPEN DATASET filname FOR INPUT IN TEXT MODE MESSAGE t_mesg.
    IF sy-subrc NE 0.
      MOVE 'X' TO t_error.
      MESSAGE e100(z0) WITH 'Error reading file:' t_mesg.
      EXIT.
    ENDIF.
    IF p_load = 'X'.
      OPEN DATASET archfilname FOR OUTPUT IN TEXT MODE MESSAGE t_mesg.
      IF sy-subrc NE 0.
        MOVE 'X' TO t_error.
        MESSAGE e100(z0) WITH 'Error opening archive file:' t_mesg.
        EXIT.
      ENDIF.
    ENDIF.
    DO.
      READ DATASET filname INTO my_rec.
      IF sy-subrc NE 0.
        EXIT.
      ENDIF.
      IF p_load = 'X'.
        TRANSFER my_rec TO archfilname.
      ENDIF.
      SPLIT my_rec AT c_tab   " my delimiter was a tab; change to ',' for comma
        INTO in_rec-id
             in_rec-fname
             in_rec-lname
             in_rec-addr
             in_rec-apt
             in_rec-city
             in_rec-state
             in_rec-zip
             in_rec-branch
             in_rec-phone1
             in_rec-phone2
             in_rec-phone3
             in_rec-phone3_ext
             in_rec-email
             in_rec-hear
             in_rec-prefcont
             in_rec-ownland
             in_rec-build
             in_rec-ownhome
             in_rec-get_promo
             in_rec-cmt1
             in_rec-subdate.
      APPEND in_rec TO it_input.
    ENDDO.
    CLOSE DATASET filname.
    IF p_load = 'X'.
      CLOSE DATASET archfilname.
      DELETE DATASET filname.
    ENDIF.
    Refer This:
    Upload CSV file from Application server
    How to Upload/Download '.CSV' file from application Server (AL11) in BI

  • Question about reading csv file into internal table

    Someone (thanks, those nice guys!) in this forum suggested I use FM KCD_CSV_FILE_TO_INTERN_CONVERT to read a csv file into an internal table. However, it can only read a local file.
    I would like to ask how I can read a CSV file into an internal table from files on the application server.
    I can't simply use SPLIT, as there may be commas inside the content, e.g.
    "abc","aaa,ab",10,"bbc"
    My expected output:
    abc
    aaa,ab
    10
    bbc
    Thanks again for your help.

    Hi Gundam,
    Try this code. I wrote a custom parser that reads the fields in the record and splits them accordingly. I have also tested it with your provided test cases and it works fine.
    OPEN DATASET dsn FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    DO.
      READ DATASET dsn INTO record.
      IF sy-subrc NE 0. "stop at end of file; without this the DO loops forever
        EXIT.
      ENDIF.
      PERFORM parser USING record.
    ENDDO.
    CLOSE DATASET dsn.
    *DATA str(32) VALUE '"abc",10,"aaa,ab","bbc"'.
    *DATA str(32) VALUE '"abc","aaa,ab",10,"bbc"'.
    *DATA str(32) VALUE '"a,bc","aaaab",10,"bbc"'.
    *DATA str(32) VALUE '"abc","aaa,ab",10,"b,bc"'.
    *DATA str(32) VALUE '"abc","aaaab",10,"bbc"'.
    FORM parser USING str.
      DATA field(12).
      DATA field1(12).
      DATA field2(12).
      DATA field3(12).
      DATA field4(12).
      DATA cnt TYPE i.
      DATA len TYPE i.
      DATA temp TYPE i.
      DATA start TYPE i.
      DATA quote TYPE i.
      DATA rec_cnt TYPE i.
      len = strlen( str ).
      cnt = 0.
      temp = 0.
      rec_cnt = 0.
      DO.
    *   Start at the beginning of a new field
        IF start EQ 0.
          "previous field just ended; start a new one
          start = 1.
          quote = 0.
          CLEAR field.
        ENDIF.
        IF str+cnt(1) EQ '"'. "check for quotes
          "check whether the quote flag is already set
          IF quote = 1.
            "closing quote: finish the field and start a new one
            start = 0.
            quote = 0.
            CONCATENATE field '"' INTO field.
            IF field IS NOT INITIAL.
              rec_cnt = rec_cnt + 1.
              CONDENSE field.
              IF rec_cnt EQ 1.
                field1 = field.
              ELSEIF rec_cnt EQ 2.
                field2 = field.
              ELSEIF rec_cnt EQ 3.
                field3 = field.
              ELSEIF rec_cnt EQ 4.
                field4 = field.
              ENDIF.
            ENDIF.
    *       WRITE field.
          ELSE.
            "this is the opening quote
            quote = 1.
          ENDIF.
        ENDIF.
        IF str+cnt(1) EQ ','. "check for end of field
          IF quote EQ 0. "not inside quotes: this comma ends the field
            start = 0.
            quote = 0.
            CONDENSE field.
    *       WRITE field.
            IF field IS NOT INITIAL.
              rec_cnt = rec_cnt + 1.
              IF rec_cnt EQ 1.
                field1 = field.
              ELSEIF rec_cnt EQ 2.
                field2 = field.
              ELSEIF rec_cnt EQ 3.
                field3 = field.
              ELSEIF rec_cnt EQ 4.
                field4 = field.
              ENDIF.
            ENDIF.
          ENDIF.
        ENDIF.
        CONCATENATE field str+cnt(1) INTO field.
        cnt = cnt + 1.
        IF cnt GE len.
          EXIT.
        ENDIF.
      ENDDO.
      WRITE: field1, field2, field3, field4.
    ENDFORM.
    Regards,
    Wenceslaus.

  • How to synchronize if one servlet reads a file and another servlet updates the file

    How do I synchronize when one servlet reads a file and another servlet updates the same file at the same time?

    Create a class that holds the reference to the file and does all file manipulation in that class, then synchronize the read and write methods. A reference to this file-handler class can be stored in the servlet context by one servlet and read out of the servlet context by the other servlet.
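    A minimal sketch of that idea (all names are made up for illustration):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // One shared gatekeeper per file; both servlets must use the SAME
    // instance, e.g. stored under a ServletContext attribute.
    public class SharedFileHandler {
        private final Path file;

        public SharedFileHandler(Path file) {
            this.file = file;
        }

        // synchronized on this instance, so a reader never sees a half-written file
        public synchronized String read() throws IOException {
            return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        }

        public synchronized void write(String content) throws IOException {
            Files.write(file, content.getBytes(StandardCharsets.UTF_8));
        }
    }

    One servlet publishes the instance and the other looks it up:
    getServletContext().setAttribute("fileHandler", new SharedFileHandler(path));
    SharedFileHandler h = (SharedFileHandler) getServletContext().getAttribute("fileHandler");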

  • What do you use to read .stn files on mavericks?  Genuine fractals doesn't work any longer?

    What do you use to read .stn files on Mavericks? Genuine Fractals doesn't work any longer. I get an error message that Adobe Photoshop 5 can't open it because the modules(?) aren't right. I assume that it is because I now have an Intel machine. I know other people have used this file format, so how do you open your photo files?

    When I try to open it in Adobe Reader it says, "Adobe Reader could not open 'Blank-AF 77-LOE.xfdl' because it is either not a supported file type or because the file has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded)." Now what? I didn't email it; I transferred it from an external hard drive. The file opens just fine on a Windows PC.

  • Problem in reading csv file in servlet

    Hi everyone,
    I'm getting a ClassCastException while importing a .csv file to the database.
    It works fine in the case of .xls.
    I'm using jxl from Jakarta, and the code is as follows.
    Thanks in advance.
    FileItemFactory factory1 = new DiskFileItemFactory();
    ServletFileUpload upload1 = new ServletFileUpload(factory1);
    items1 = upload1.parseRequest(request);
    Iterator iter = items1.iterator();
    while (iter.hasNext()) {
        FileItem item = (FileItem) iter.next();
        InputStream uploadedStream = item.getInputStream();
        importCSV(uploadedStream);
    }
    and in the import method:
    Workbook w;
    w = Workbook.getWorkbook((FileInputStream) is);
    In the above line I'm getting the ClassCastException.
    Please help.

    Thanks gimbal2.
    w = Workbook.getWorkbook(is); throws a FileNotFoundException.
    I solved it by tokenizing the input stream on ',' (comma) and then writing the tokens to the corresponding columns in the database.
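    For the record, the cast fails because the stream returned by item.getInputStream() is simply not a FileInputStream, and jxl cannot parse CSV in any case: it reads Excel binary files. The tokenizing approach might look like this sketch (importCSV is the method named in the post; everything else is illustrative):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    // Reads the uploaded stream line by line and splits each line on commas.
    // A plain split breaks on quoted fields that contain commas; see the
    // "Reading CSV files" thread below for the robust parsing rules.
    static void importCSV(InputStream is) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(is));
        String line;
        while ((line = reader.readLine()) != null) {
            String[] tokens = line.split(",", -1); // -1 keeps trailing empty columns
            // ... insert the tokens into the corresponding database columns ...
        }
    }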

  • Reading csv file using file adapter

    Hi,
    I am working on SOA 11g. I am reading a csv file using a file adapter. Below are the file contents, and the xsd which gets generated by the Jdev.
    .csv file:
    empid,empname,empsal
    100,Ram,20000
    101,Shyam,25000
    xsd generated by the Jdev:
    <?xml version="1.0" encoding="UTF-8" ?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/EmpRead" targetNamespace="http://TargetNamespace.com/EmpRead" elementFormDefault="qualified" attributeFormDefault="unqualified"
    nxsd:version="NXSD"
    nxsd:stream="chars"
    nxsd:encoding="ASCII"
    nxsd:hasHeader="true"
    nxsd:headerLines="1"
    nxsd:headerLinesTerminatedBy="${eol}">
    <xsd:element name="Root-Element">
    <xsd:complexType>
    <xsd:sequence>
    <xsd:element name="Child-Element" minOccurs="1" maxOccurs="unbounded">
    <xsd:complexType>
    <xsd:sequence>
    <xsd:element name="empid" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="," nxsd:quotedBy="&quot;" />
    <xsd:element name="empname" minOccurs="1" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="," nxsd:quotedBy="&quot;" />
    <xsd:element name="empsal" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;" />
    </xsd:sequence>
    </xsd:complexType>
    </xsd:element>
    </xsd:sequence>
    </xsd:complexType>
    </xsd:element>
    </xsd:schema>
    For empname I have added minOccurs="1". Now when I remove the empname column, the csv file still gets read from the server, without giving any error.
    Now I created the following xml file and read it through the file adapter:
    <?xml version="1.0" encoding="UTF-8" ?>
    <Root-Element xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://TargetNamespace.com/EmpRead xsd/EmpXML.xsd" xmlns="http://TargetNamespace.com/EmpRead">
    <Child-Element>
    <empid>100</empid>
    <empname></empname>
    <empsal>20000</empsal>
    </Child-Element>
    <Child-Element>
    <empid>101</empid>
    <empname>Shyam</empname>
    <empsal>25000</empsal>
    </Child-Element>
    </Root-Element>
    When I removed the value of empname, it threw the proper error for the above xml.
    Please tell me why the behaviour of the file adapter is different for the csv file and the xml file in the above case.
    Thanks


  • SQL server 2014 and VS 2013 - Dataflow task, read CSV file and insert data to SQL table

    Hello everyone,
    I was assigned a work item wherein I have a dataflow task in a For Each Loop container in the control flow of an SSIS package. This For Each Loop container reads the CSV files from the specified location one by one and populates a variable with the current file name. Note that the tables where I would like to push the data from each CSV file have the same names as the CSV files.
    On the dataflow task I have a Flat File component as a source; this component uses the above variable to read the data of a particular file. Now here my question comes: how can I move the data to the destination SQL table using the same variable name?
    I've tried to set up the OLE DB destination component dynamically, but it executes well only the first time. It does not change the mappings as per the columns of the second CSV file. There are around 50 CSV files, each with a different set of columns. These files need to be migrated to SQL tables in the optimum way.
    Does anybody know the best way to set up the dataflow task for this requirement?
    Also, I cannot use the Bulk Insert task here, as we would like to keep a log of corrupted rows.
    Any help would be much appreciated. It's very urgent.
    Thanks, Ankit Shah | Inkey Solutions, India | Microsoft Certified Business Management Solutions Professional | http://ankit.inkeysolutions.com

    The standard Data Flow Task supports only static metadata defined at design time. I would recommend you check the commercial COZYROC Data Flow Task Plus. It is an extension of the standard Data Flow Task that supports dynamic metadata at runtime, so you can process all your input CSV files using a single Data Flow Task Plus. No programming skills are required.
    SSIS Tasks Components Scripts Services | http://www.cozyroc.com/

  • Reading csv file: how to get the column name

    Hi,
    I am trying to read a csv file and then save the data to Oracle.
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    Connection c = DriverManager.getConnection(
        "jdbc:odbc:;Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=.;Extensions=csv,txt");
    Statement stmt = c.createStatement();
    ResultSet rs = stmt.executeQuery("select * from filename.csv");
    while (rs.next())
        System.out.println(rs.getString("Num"));
    My csv file looks like this:
    "CHAM-23","COMPANY NAME","Test","12",20031213,15,16
    Number,Environ,Envel,Date,Time
    "1","2",3,"4",5
    "6","7",8,"9",9
    Now, is there any way, using the above code, that I can start processing the file from the second row, which holds the names of the columns, and skip the first row? And can I also get the name of a column from the ResultSet, something like:
    if columnName.equals("Number")
    Because I may have a csv file that could have more columns:
    "CHAM-24","COMPANY NAME","Test","12",20031213,16,76
    Number,Environ,Envel,Date,Time,Total,Count
    "1","2","3","4","5",3,9
    "6","7","8","9",9",,2
    So I want to get the column name and then based on that column I do some other processing.
    Once I read the value of each row I want to save the data to an Oracle Table. How do I connect to Oracle from my Application. As the database is on the server. Any help is really appreciated. Thanks

    The only thing I could think of (and this is a kludge) would be to attempt to parse the first element of each row as a number. If it fails, you do not have a column-name row. You are counting on the fact that you will never have a column name that is a number in the first position.
    However, I agree that not always placing the headers in the same location is asking for trouble. If you have control over the file, format it how you want. If someone else has control over the file, find out why they are doing it that way. Maybe there is a "magic" number in the first row telling you where to jump.
    Also, I would not use ODBC simply to parse a CSV file. If the file is formatted identically to Microsoft's format (headers in the first row, all subsequent rows with the same number of columns), then it's fine to take a shortcut and not write your own parser. But if the file does not adhere to that format, don't bother using the M$ ODBC driver.
    - Saish
    "My karma ran over your dogma." - Anon

  • Renaming extracted single InDesign CS5 documents using a Windows CSV file

    I'm not a scripter, and I need a script that will rename extracted single pages, using a column from a Windows CSV file to name each InDesign file accordingly. I have searched the forums, but all I can find is a script for renaming extracted single PDF pages, as in this post: http://forums.adobe.com/message/4633533. I've been told that I could adapt such a script, but I don't know how. I would like to accomplish the same thing, but exporting single InDesign pages. Does anyone have a script that can be shared for doing such a thing, either through InDesign or using Bridge?

    The link highlighted in your post splits a large PDF into single-page PDFs based on a csv/txt file within Acrobat - it isn't an InDesign script.
    There is an InDesign script you can try, written by Loic Aigon: go to http://www.loicaigon.com/en/pdf-exports-properly-named/, read the article and see if it is appropriate for you.
    There is another InDesign script you can try, but it is in German. See the Adobe forum post http://forums.adobe.com/thread/1014766 for more information.
    Hope this helps
    Colin

  • Address Book can't read .csv files exported from Numbers (Solved)

    Hello,
    I can hardly believe this myself, but please try. Numbers separates fields with semicolons when exporting my address data to a .csv file, which seems completely reasonable to me. When trying to import this into Address Book, the application complains that it can't import the file because it is not valid.
    Do programmers from separate groups at Apple ever test whether their apps work together? I just expect them to do so, and I can't imagine a simpler case than this one. This is really poor. The reason why I put this complaint in the Numbers discussion is that there is no forum for Address Book (whereas iCal got its own - why?).
    And the reason why I pick up this topic again in a new thread is that the other thread I found related to transferring data from Numbers to Address Book took a weird twist when they resorted to writing a complicated and elaborate AppleScript for this basic task. The script provided there messed up valuable address data in its first version, so I am not going to try it here.
    I just used the search-and-replace functionality of my text editor to solve it for me. But I still believe that this should work without headaches right out of the box. So Address Book programmers, please hurry up, go, fix it. It is just too distressing. Imagine if I did this in public with Windows guys around; I don't want to experience that again.
    Yours, Christian Völker
    Yours, Christian Völker

    Hello Yvan,
    yes, I agree that Numbers is doing fine, and thanks to your explanation I even know why it does it the way it does, but Address Book remains broken.
    Regarding your script, I did not even bother to read it, and I just don't trust it, because I can't sue you in case something goes wrong. I prefer to make my own mistakes, and in this case it was an even better and faster solution for me to use my text editor with search and replace.
    To make the story complete: after the data was accepted, Address Book wasn't able to recognize the field labeled "Email" as an email address, or "Street" as a street name. After telling Address Book about the meaning of the fields, it consistently crashed while importing. No, this is no good. A script won't heal these flaws.
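    For anyone fixing the same delimiter mismatch programmatically rather than in a text editor, note that a blind search and replace corrupts any field that legitimately contains a semicolon inside quotes. A small sketch that only swaps delimiters outside quoted fields (pure illustration, not from the thread):

    // Convert one semicolon-separated line to comma-separated,
    // leaving semicolons inside double-quoted fields untouched.
    static String semicolonsToCommas(String line) {
        StringBuilder out = new StringBuilder(line.length());
        boolean inQuotes = false;
        for (char c : line.toCharArray()) {
            if (c == '"') {
                inQuotes = !inQuotes; // a doubled quote toggles twice, ending where it began
            }
            if (c == ';' && !inQuotes) {
                out.append(',');
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }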

  • Read csv file to graph?

    Hi! there.
    Can anyone help me?
    I made a csv file from DAQmx.
    The file format is:
    date(ex: 2007-09-16 11:40:32),data1,data2....
    date(ex: 2007-09-16 11:40:32),data1,data2....
    I want to draw this file on an X-Y Graph or Chart.
    But I can't convert it into a 2D array. When I convert the date using Read From Spreadsheet File.vi, the date comes out like this: 2007.000.
    How can I do that?
    Could anyone inform me about a solution?
    regards.
    Kevin.

    smercurio_fc wrote:
    Also, you should not be basing the location of the path to read on the location of the VI. You should use a file path control because:
        (a) You cannot assume the file to read is in the same location as the VI.
        (b) If this VI is built into an application, the path will be different.
    Yes, if you build an application, the relative path will be different. Also, if the application is properly installed in the "Program Files" folder, there will be potential problems in Windows Vista if you want to write there.
    Still, in a simple development environment it is often convenient to keep data next to the VI in order to test code, so relative paths make sense. Here's the (near) equivalent of the code you constructed with the while loop.
    It is very important never (!!!) to use Path To String or String To Path when manipulating paths, because it will immediately break the code on a different platform (Windows, Mac, Linux). You should use Strip Path and Build Path exclusively! Paths should never exist as strings, just as paths.
    Small differences:
    In this particular case you need to ensure that the "sub folder" control is not an empty string, or you get <not a path> as output. Your code would just ignore the subfolder.
    Your code also requires the sub folder control to end in a "\", while my alternative only needs the name. If you don't end it in a "\", the subfolder will be merged with the filename.
    Exception:
    The only place where Path To String and String To Path should be used is for OS-specific tasks, such as forming a command line for System Exec.
    Message Edited by altenbach on 09-17-2007 09:04 AM
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    changePath.png 4 KB
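    As a side note for non-LabVIEW readers, turning lines of the form date,data1,data2 into something plottable is mainly a matter of parsing the timestamp into a number for the X axis. A Java sketch (the file name is assumed):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    public class DaqCsvToXy {
        public static void main(String[] args) throws IOException, ParseException {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            try (BufferedReader in = new BufferedReader(new FileReader("daq.csv"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cells = line.split(",");
                    long x = fmt.parse(cells[0]).getTime();   // epoch millis for the X axis
                    double y = Double.parseDouble(cells[1]);  // first data column
                    System.out.println(x + " -> " + y);
                }
            }
        }
    }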

  • Reading CSV files

    Hi,
    I want to read in a CSV file, which I think I can do using the FileReader and tokenizer classes. The complications are:
    1) The first line of the csv file is a load of headings - is there an easy way to ignore the first line?
    2) I am only interested in one column of the file: a column of exam marks ranging from 0-99. I think I can set up a loop to tokenise until I get to the specified marks token (i.e. the 10th column along), but this is complicated by no. 3...
    3) I need to be able to deal with nulls; at the moment, as soon as it reaches a null cell, the application stops and throws an exception. There is a complete line with blank cells part way through the file. I need to get past this and continue reading the rest of the file, but I can't quite get my head around how to catch the exception and continue reading until the true end of the file.
    Can anyone help?!!
    Thanks
    Guy

    String[] tokens = line.split(",");
    It's not quite that simple. Consider the following line of CSV data:
    "This is one cell, with a comma in the text","This one has ""double quotes"" around part of it",,The previous cell was empty
    If you can parse that as four cells, I think you can parse anything Excel produces.
    So are commas and double quotes the only metacharacters? If so, you could tokenize on both characters, using the StringTokenizer constructor that returns the separators as tokens, and then write a simple parser. (Relatively simple, that is.)
    Are the only rules:
    * if the first part of a field is a double quote, then it's a quoted field
    * if two adjacent quotes appear in a quoted field, then they represent a single double quote char
    * if a comma appears in a quoted field, it's a comma in the field, not a field separator
    * a single double quote char ends a quoted field
    * commas are field separators, unless quoted as above
    If so, then you have the basis for your simple parser.
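    Those five rules translate almost directly into a small state machine over the characters of a line. A sketch (hand-rolled for illustration; a real project might prefer an existing CSV library):

    import java.util.ArrayList;
    import java.util.List;

    public class SimpleCsvParser {
        // Splits one CSV line into cells using the five rules above.
        static List<String> parseLine(String line) {
            List<String> cells = new ArrayList<>();
            StringBuilder cell = new StringBuilder();
            boolean inQuotes = false;
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (inQuotes) {
                    if (c == '"') {
                        if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                            cell.append('"');  // two adjacent quotes = one literal quote
                            i++;
                        } else {
                            inQuotes = false;  // a single quote ends the quoted field
                        }
                    } else {
                        cell.append(c);        // commas in here are data, not separators
                    }
                } else if (c == '"' && cell.length() == 0) {
                    inQuotes = true;           // quote at field start opens a quoted field
                } else if (c == ',') {
                    cells.add(cell.toString()); // unquoted comma is a field separator
                    cell.setLength(0);
                } else {
                    cell.append(c);
                }
            }
            cells.add(cell.toString());
            return cells;
        }

        public static void main(String[] args) {
            // Parses the four-cell example line quoted above.
            System.out.println(parseLine(
                "\"This is one cell, with a comma in the text\","
                + "\"This one has \"\"double quotes\"\" around part of it\","
                + ",The previous cell was empty"));
        }
    }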
