Counting lines in a text/csv file.

Can you advise me how I can get the total number of lines in a text/CSV file using Java code?
I will get the text/CSV file content in a String variable, not as a file.
Ex: String var = "123\n234\n123\n3456\nsdfsd\n"; (here each \n is a newline).
For this I have to get a total line count of 5.
Please advise.
Thanks.

Kayaman wrote:
user12679330 wrote:
Oh ok.. ignore the last one. Any idea on this please?
Count the newline characters like morgarl told you. You're being very difficult to help for such a simple problem.
That's because nobody has posted the code yet, dude ;)
I'm pretty sure that once the OP finally takes the hint and writes the two lines of code needed to solve this question, he'll come up with something that sometimes gives the correct answer and other times gives a count that is off by one.
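For what it's worth, here is a minimal sketch of the newline-counting approach in Java (the class and method names are my own; the point is to count '\n' characters while still counting a final line that lacks a trailing newline, which is exactly the off-by-one trap mentioned above):

public class LineCounter {
    // Counts lines in a string: every '\n' terminates a line, and a final
    // unterminated line still counts as one.
    public static int countLines(String s) {
        if (s == null || s.isEmpty()) {
            return 0;
        }
        int lines = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == '\n') {
                lines++;
            }
        }
        if (s.charAt(s.length() - 1) != '\n') {
            lines++; // last line has no trailing '\n' but still counts
        }
        return lines;
    }

    public static void main(String[] args) {
        // Both print 5 for the example above, with or without the trailing \n
        System.out.println(countLines("123\n234\n123\n3456\nsdfsd\n"));
        System.out.println(countLines("123\n234\n123\n3456\nsdfsd"));
    }
}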

Similar Messages

  • Counting number of lines in a text/csv file using xquery/xslt.

    Hi,
I have a CSV file and I need to count the total number of lines in it. This has to be done in OSB: my requirement is to count the total number of lines in the $body file (CSV/flat file) and subtract the header and footer lines from it.
    EX:
    header,1, @total_no_of_detal@
    detail,1
    detail,2
    detail,3
    detail,n
    footer, 1
Suppose I have 10 detail lines and I am getting the body of the file as shown above;
then in the final file, I have to change the body of the file to:
    header,1, *10*
    detail,1
    detail,2
    detail,3
    detail,n
    footer, 1
Please advise how to do this in OSB.

I would suggest using MFL to convert the file into XML when you read it; you can then count the detail elements within the XML and update the value in the header element before writing the file out again in flat-file format, again using MFL.
    You can read the documentation of Format Builder utility which is used to define MFL here:
    http://download.oracle.com/docs/cd/E21764_01/doc.1111/e15866/part_fb.htm
Alternatively, you can read the whole flat file as a text string and then do the manipulation needed to get what you want. If you want to do it this way, you will need to consider the following:
1. Is the file in DOS format or Unix format? (to know which end-of-line character is in use)
2. Does the file contain an end-of-line at the end of the file? (i.e. whether there is an end-of-line character after the data of the footer record)
Once you know the above, you can try an XQuery like the following:
(: $in holds the file content as a string; EOL stands for the end-of-line token :)
let $intermediateXML :=
  <Root>{
    for $a in tokenize($in, 'EOL')
    return
      if (string-length($a) > 0)
      then <record>{$a}</record>
      else ()
  }</Root>
let $count := count($intermediateXML/record)
let $outXML :=
  <OutRoot>{
    for $i in (1 to $count)
    return
      (: the first record is the header; append the detail count to it.
         $count - 2 excludes the header and footer lines. :)
      if ($i = 1)
      then <Record>{concat($intermediateXML/record[$i]/text(), ',Count=', $count - 2)}</Record>
      else <Record>{$intermediateXML/record[$i]/text()}</Record>
  }</OutRoot>
return op:concatenate($outXML/Record/text(), 'EOL')
In the above XQuery, replace EOL with the end-of-line character in use, which for Unix is &#xA; and for DOS is &#xD;&#xA;.

  • How to save and open Text/csv file over APS

    Hi,
I want to save (and open later) data from a database block in text/CSV file format on the application server. I did a lot of searching here on Forms but didn't find a reasonable solution. Can someone please help me?
    Thanks and Regards, Khawar.

    As long as you are using the Report Generation Toolkit, there is no need to use ActiveX for the purpose of saving the Excel file -- Save Report will do that for you.  You can then export (as text) all of the Excel data and use the Write Spreadsheet File to make the .CSV file.  Here is how I would rewrite the final bits of your code --
    I cleaned up the wires around Excel Easy Table a bit.  You'll see I made a VI Icon for Replace Filename, which does (I think) a nicer job of self-documenting than showing its Label.  To make the filename, add the correct extension (.xlsx for Excel, .csv for CSV), use Build Path to combine Folder and Filename, and wire to Save Report.
    The next two functions add the Extract Everything and Write to Spreadsheet, using a Comma as a separator, which makes a CSV copy for you.
    Bob Schor

  • BULK INSERT from a text (.csv) file - read only specific columns.

I am using Microsoft SQL 2005 and I need to do a BULK INSERT from a .csv file I just downloaded from PayPal. I can't edit some of the columns that are given in the report, so I am trying to load only specific columns from the file.
BULK INSERT Orders
FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
WITH
(
    FIELDTERMINATOR = ',',
    FIRSTROW = 2,
    ROWTERMINATOR = '\n'
);
So where would I state which column names (from row #1 of the .csv file) map to which columns in the table?
I saw this on one of the sites, which seemed to guide me towards the answer, but I failed.
Here you go, it might help you:
    FORMATFILE [ = 'format_file_path' ]
    Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
    The data file contains greater or fewer columns than the table or view.
    The columns are in a different order.
    The column delimiters vary.
    There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.
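To make that concrete, here is a hedged sketch of a non-XML format file (the field lengths, column names, and the orders.fmt path are invented for illustration; a real file needs one row per source field). The trick for skipping a source field is to set its server column order to 0:

9.0
4
1   SQLCHAR   0   12    ","      1   Date             ""
2   SQLCHAR   0   50    ","      0   Skip_Time        ""
3   SQLCHAR   0   50    ","      2   ToEmailAddress   ""
4   SQLCHAR   0   100   "\r\n"   3   TransactionID    ""

It would then be referenced from the load like this:

BULK INSERT Orders
FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
WITH (FORMATFILE = 'C:\orders.fmt', FIRSTROW = 2);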

    Date, Time, Time Zone, Name, Type, Status, Currency, Gross, Fee, Net, From Email Address, To Email Address, Transaction ID, Item Title, Item ID, Buyer ID, Item URL, Closing Date, Reference Txn ID, Receipt ID,
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392302", "jdal32", "http://ddd.com", "04/22/03", "", "",
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392932930302", "jejsl32", "http://ddd.com", "04/22/03", "", "",
Do you need more than 2 rows? I did not include all the columns from the actual CSV file, but most of them. I am planning on taking these specific columns into the first table: date, to email address, transaction ID, item title, item ID, buyer ID, item URL.
For the other table I don't have any values listed here, but if you show me how to do this I can probably figure the other table out.
    Thank you very much.

  • How to filter a get-adcomputer command using a text/csv file and output to a CSV

    OK, so I'm in the process of trying to get a Windows 7 refresh completed for a large client... I had the bright idea to use AD to find what computers are listed as XP, and determine our progress using that.  Well, I run get-adcomputer, and it returns
    the results... a whole lot of results that really aren't active anymore.  I obtained a list of systems that have been disposed of, holding for disposal, etc... now my problem is I can't quite figure out how to get that list to be used to exclude those
    computers from the results... I execute my script, it runs, but it seems to get stuck in a loop... I'm sure I'm missing something VERY basic, but I can't figure it out.  This is the first time I've ever attempted to use the get-content commandlet, so
    again, I'm sure I'm doing something stupid. 
    Any help will be greatly appreciated!
Import-Module ActiveDirectory
$exclude = Get-Content c:\scripts\excludes.csv
# Query AD once, then drop anything on the exclude list
# (add an OperatingSystem filter here if you only want XP machines)
Get-ADComputer -Filter * -Properties OperatingSystem |
    Where-Object { $exclude -notcontains $_.Name } |
    Select-Object Name, OperatingSystem |
    Export-Csv c:\scripts\xpfil.csv -NoTypeInformation

OK, this one with a little tweaking of my excludes.txt worked... I'd really like to know what
?{$a -notcontains $_.name}
is doing... step through it if you can, so that next time I'm not beating my head against this same rather solid wall!
    ? is an alias for Where-Object. I avoid using aliases in posts, as they usually confuse people.
    The concept is that you're reading your exclude list into a variable and then verifying that the variable doesn't contain the computer name that you're currently processing in the pipeline. In my case, I'm running each object through a ForEach-Object loop.
    Make sense?

  • Data loading from csv  files

    Hello,
I designed a quite simple characteristic with language-dependent texts, two numeric time-dependent attributes, and a compounding characteristic.
I then built a set of InfoSource, DataSource and InfoPackage for the texts, and a similar one for the attributes. The DataSources read CSV flat files stored on the local machine (folder c:\tmp).
The file with the texts stores the texts for two languages, each row containing the language key field.
When I run the InfoPackage in preview mode, everything looks OK.
When I run it in schedule mode, only the English texts are stored in the characteristic; the texts in the other language are left blank.
The line structure of the CSV file that contains the attributes is as follows:
1;1;99991231;10000101;0.99;0.00
The fields are, in order:
-compound;
-key_value;
-ValidTo;
-ValidFrom;
-Attribute1;
-Attribute2.
When I run the InfoPackage in preview mode, it also looks OK.
When I schedule it, the ValidTo and ValidFrom are left blank.
In both situations the transfer rules are based on direct field-to-InfoObject mapping.
Also, no error or warning message is displayed.
Can anyone let me know what I am missing?
Any other suggestion is very welcome.
    Regards!

As you mentioned in your post, the file should be an Excel "save as CSV" file, so why does it use semicolons instead of commas? Check that separator setting and analyze the line:
1;1;99991231;10000101;0.99;0.00

  • Performance issue with big CSV files as data source

    Hi,
We are creating Crystal Reports for a large banking corporation with CSV files as the data source. For some reports, we need to join 2 CSV files. The problem we have met is that when the 2 CSV files are both very large (>200 MB each), performance is very bad and it takes an hour or so to refresh the data in the Crystal Reports designer. The same happens in both CR 11.5 and CR 2008.
My question is: is there any way to improve performance in such situations? For example, can we create an index on the CSV files? If you have ever created reports connecting to CSV, your suggestions will be highly appreciated.
    Thanks,
    Ray

    Certainly a reasonable concern...
    The question at this point is, How are the reports going to be used and deployed once they are in production?
    I'd look at it from that direction.
    For example... They may be able to dump the data directly to another database on a separate server that would insulate the main enterprise server. This would allow the main server to run the necessary queries during off peak hours and would isolate any reporting activity to a "reporting database".
    This would also keep the data secure and encrypted (it would continue to enjoy the security provided by an RDBMS). Text & csv files can be copied, emailed, altered & deleted by anyone who sees them. Placing them in encrypted .zip folders prevents them from being read by external applications.
    <Hope you liked the sales pitch I wrote for you to give to the client... =^)
If all else fails and you're stuck using the CSV files, at least see if they can get it all into one file. Joining the 2 files is hurting your performance far more than using 1 massive file would.
    Jason

  • Parsing Excel's CSV file in PL/SQL

Someone may need this...
This function parses a line of an Excel CSV file into the elements of a PL/SQL table.
FUNCTION func_split_csv (p_string IN VARCHAR2)
RETURN gt_tbltyp_strings
IS
/*
* NAME:    func_split_csv
* PURPOSE: Split the passed Excel CSV string into tokens.
*          Returns a PL/SQL table of strings.
* REVISIONS:
* Author     Date       Description
* S.Stadnik  12/12/07   Initial
* PARAMETERS
* INPUT  : p_string - String to be tokenized
* OUTPUT : None.
* RETURNS: PL/SQL table of strings.
* NOTE: the return type must be declared in the package specification:
*   TYPE gt_tbltyp_strings IS TABLE OF VARCHAR2(32000) INDEX BY BINARY_INTEGER;
*/
-- Local constants
c_str_proc_nme CONSTANT VARCHAR2(100) := gc_str_package_nme||'.func_split_csv';
-- Local variables
v_int_location PLS_INTEGER;
v_tab_tokens gt_tbltyp_strings;
v_str_token VARCHAR2(32000);
v_str_char VARCHAR2(1);
v_int_table_pos PLS_INTEGER := 1;
v_int_idx PLS_INTEGER := 1;
b_bln_open_quote BOOLEAN := FALSE; -- Flag indicating that a double quote is open
BEGIN
v_int_location := 0;
-- If the string is empty, return an empty table
IF p_string IS NULL THEN
RETURN v_tab_tokens;
END IF;
-- If the string does not contain delimiters, return the whole string
IF InStr(p_string, ',', 1) = 0 THEN
v_tab_tokens(v_int_table_pos) := p_string;
RETURN v_tab_tokens;
END IF;
-- Loop through all the characters
WHILE v_int_idx <= Length(p_string) LOOP
v_str_char := SubStr(p_string, v_int_idx, 1);
CASE v_str_char
-- Double quote encountered (special case)
WHEN '"' THEN
-- If no double quote is open, this is an opening quote
IF b_bln_open_quote = FALSE THEN
b_bln_open_quote := TRUE;
-- '""' translates to '"'
ELSIF SubStr(p_string, v_int_idx + 1, 1) = '"' THEN
v_str_token := v_str_token || '"';
v_int_idx := v_int_idx + 1;
-- If a double quote is open, this is a closing quote
ELSIF b_bln_open_quote = TRUE THEN
b_bln_open_quote := FALSE;
END IF;
-- Comma encountered (special case)
WHEN ',' THEN
-- If a double quote is open, the comma is part of the token
IF b_bln_open_quote = TRUE THEN
v_str_token := v_str_token || ',';
-- If no double quote is open, this is a delimiter: save the token
ELSE
v_tab_tokens(v_int_table_pos) := v_str_token;
v_str_token := '';
v_int_table_pos := v_int_table_pos + 1;
END IF;
-- Any other character is appended to the current token
ELSE
v_str_token := v_str_token || v_str_char;
END CASE;
v_int_idx := v_int_idx + 1;
END LOOP;
-- If the final token length is non-zero after we processed all the characters,
-- OR the last character of the source string is ',',
-- add the final token to the table
IF Length(v_str_token) != 0
OR SubStr(p_string, -1, 1) = ',' THEN
v_tab_tokens(v_int_table_pos) := v_str_token;
END IF;
RETURN v_tab_tokens;
EXCEPTION
WHEN OTHERS THEN
-- Put your favourite error handler in here; at minimum, re-raise
RAISE;
END func_split_csv;

Yes, I agree.
That was written for a particular case:
- when the CSV file is an Excel CSV file, and
- when it was impossible to put the file onto the DB server, so we couldn't use external tables and the whole contents had to be passed in as a CLOB parameter.

  • Appending multiple *.csv files while retaining the original file name in the first column

Hi guys, it's been a while.
I'm trying to append multiple *.csv files while retaining the original file name in the first column; the actual data set is about 40 files.
    file a.csv contains:
    1, line one in a.csv
    2, line two in a.csv
    file b.csv contains:
    1, line one in b.csv
    2, line two in b.csv
output.csv currently contains the appended rows without the source file names.
I would like this:
    a.csv, 1, line one in a.csv
    a.csv, 2, line two in a.csv
    b.csv, 1, line one in b.csv
    b.csv, 2, line two in b.csv
Any suggestions to speed up my hobbling attempts would be appreciated.
    Thanks,
    -SS

    What you could do is given in the attachment.
    Started with 2 files :
    a.csv
    copy of a.csv
    Both with data :
    1;1.123
    2;2.234
    3;3.345
    Output :
    a.csv;1;1.123
    a.csv;2;2.234
    a.csv;3;3.345
    Copy of a.csv;1;1.123
    Copy of a.csv;2;2.234
    Copy of a.csv;3;3.345
    If you have more questions, just shoot
    Kind regards,
    - Bjorn -
    Have fun using LabVIEW... and if you like my answer, please pay me back in Kudo's
    LabVIEW 5.1 - LabVIEW 2012
    Attachments:
AppendingCSV.JPG 73 KB

  • Huge CSV File

I'm trying to read the first double value of every line in a gigantic CSV file in an efficient manner. Using readLine() creates a gigantic String that is discarded right after the first value is parsed out. This seems incredibly inefficient, and it's taking about 30 minutes just to complete an analysis of this file. Is there any way to just grab the values out?
I've tried reading one byte at a time, grabbing the first value until I reach a comma, and then reading byte by byte until I reach the end of the line. But this has the obvious disadvantage of reading byte by byte, and an inherent slowness to it.
    Any solutions to this? Anything in NIO?
    -Jason Thomas.

    this:
    http://ostermiller.org/utils/CSVLexer.html
    Works nicely.
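If a third-party lexer isn't an option, here is a rough sketch of the skip-the-rest-of-the-line idea in plain Java (the file name is made up; reading char by char through a large BufferedReader buffer avoids the per-byte disk access the poster was worried about, though a tuned lexer will still be faster):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class FirstColumnReader {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("huge.csv"), 1 << 16)) {
            StringBuilder field = new StringBuilder();
            int c;
            while ((c = in.read()) != -1) {
                if (c == ',') {
                    // First field is complete: parse it, then skip the rest of the line.
                    // (A malformed first field would throw NumberFormatException.)
                    double value = Double.parseDouble(field.toString().trim());
                    // ... use value here ...
                    field.setLength(0);
                    while ((c = in.read()) != -1 && c != '\n') { /* skip */ }
                } else if (c == '\n') {
                    field.setLength(0); // line without a comma; ignore it
                } else {
                    field.append((char) c);
                }
            }
        }
    }
}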

  • Working with CSV files -Powershell

    Hello all,
Here is a scenario where I have been stuck for the past 2 days, and I thought someone here could help.
I am trying to get unique lines out of a CSV file based on one particular column. Let's take a rough example; below is a sample CSV where I need to get only the last row for each category:
Problem,Severity,Category
1,Warning,Event
2,critical,Alert
3,Warning,Alert
4,Warning,Event
5,Warning,Alert
So the end result should contain the complete rows of problems 4 and 5, as those are the last appearances of each unique category. Just to add: my CSV has 25 columns and is about 11 MB of data. Any thoughts would be greatly appreciated.
    S.Arun Prasath HP ARDE TEAM

    If I understand your scenario, this does the job...
$testdata = Import-Csv 'c:\temp\test.csv'
$categories = $testdata | Select-Object Category -Unique
foreach ($cat in $categories) {
    $testdata | Where-Object { $_.Category -eq $cat.Category } | Select-Object -Last 1
}
Result:
Problem  Severity  Category
-------  --------  --------
4        Warning   Event
5        Warning   Alert
    Just a humble SysAdmin

  • Download csv file

    I use this line to download a csv file.
    <CFLOCATION URL="c:\file.csv" Addtoken="No">
It works as intended in the production environment: the save-or-open box is shown, and if Open is clicked, Excel opens the CSV file.
But when this is run locally (development), no box is shown and the file opens in the browser like:
EmployeeId;Name;Firstname;Lastname;Infix
"876478";"Albert Vergeer";"Albert";"Vergeer";""
"7";"Ingrid Mol";"Ingrid";"Mol";""
"4";"Stephanie Harmsen-Kuhl";"Stephanie";"Harmsen-Kuhl";""
"6";"Susanne Kimkes";"Susanne";"Kimkes";""
"8";"Taco Morelisse";"Taco";"Morelisse";""
Why is that? Is it because I'm running the built-in web server locally and MS IIS in production?
Can't the built-in web server handle this?
How can I make it work locally?

It is not clear how this can work at all: CFLOCATION accepts a URL (a virtual path). Did you mean CFCONTENT?
In general, how the client's browser treats the content is defined by the "Content-Disposition" HTTP header, which you can add with CFHEADER. By default it is "inline", meaning the content should be displayed in the browser rather than saved.

  • Dashboard for csv file monitoring

Can someone explain to me how to add a widget to a dashboard for only those alerts generated by a specific rule, or for text/CSV file monitoring?

Here's a post about custom alert views which should help you set up the view you want:
    http://social.technet.microsoft.com/Forums/systemcenter/en-US/3c391854-ec0c-4114-8e63-b7511cb79913/scom-2012-custom-alert-view-or-condition?forum=operationsmanagergeneral#90577c8c-a80c-40cf-81b8-bce3b1a7ae9a
    Cheers,
    Martin
    Blog:
    http://sustaslog.wordpress.com 

  • Count lines in text file

    Folks:
    What's the best way to count the number of lines in a text file?
    thanks,
    Kevin

You can also skip the Open/Create; you'll still get a file dialog. I guess EOL conversion is also not needed.
So the most minimalistic code is as follows.
Has anyone done any benchmarks? Somehow I have the feeling that temporarily creating that big array of strings might not be the most efficient approach, compared to reading everything as one string and counting linefeeds. Who knows...? How big are your files?
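For comparison outside LabVIEW, the same two strategies can be sketched in Java (a rough sketch assuming Java 11+ and that the file fits in memory; the file name is made up):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CountLinesComparison {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("input.txt");

        // Strategy 1: build the big array (list) of lines, then take its size.
        long viaLines = Files.readAllLines(file).size();

        // Strategy 2: read everything as one string and count linefeeds.
        String all = Files.readString(file);
        long viaChars = all.chars().filter(ch -> ch == '\n').count();
        if (!all.isEmpty() && all.charAt(all.length() - 1) != '\n') {
            viaChars++; // final unterminated line still counts
        }

        System.out.println(viaLines + " lines / " + viaChars + " lines");
    }
}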
LabVIEW Champion. Do more with less code and in less time.
    Attachments:
CountLines.png 1 KB

  • CSV file with text qualifiers around each field causing error on Import

    Hi
I have a CSV file which I am trying to import; a one-line extract is shown below. It is delimited by semicolons and each field has a text qualifier around it.
XXX Drinks Ltd;"BR01";"1";"001.2008";"2008";"Distribution";"-186";"-186";"-186"
When importing I get the following issues:
1) BPC doesn't seem to handle the text qualifier on the fields. For example, the "BR01" field above requires me to add a conversion as follows: ""BR01"", i.e. I have to double the quotes because BPC adds its own.
2) Even after the required conversion, BPC does not like the double quotes around the amounts. Even though validating the transformation gives no error message, when running the import package I get the following message:
    Record Count: 1
    Accept Count: 1
    Reject Count: 0
    Skip Count  
    The number of failing rows exceeds the maximum specified. (Microsoft Data Transformation Services (DTS) Data Pump (8004202c): TransformCopy 'DTSTransformation__9' conversion error:  General conversion failure on column pair 1 (source column 'SIGNEDDATA' (DBTYPE_STR), destination column 'SIGNEDDATA' (DBTYPE_NUMERIC)).)
Does this mean my source file can't have double quotes as a text qualifier?
    thanks in advance
    Scott Farrington

James, thanks for your reply.
Does that mean that BPC can't deal with the double quotes? I understand about removing them and using a comma as the delimiter, but this is the file format I have been given.
What I really need to know is: given this format, can I import the data the way it is, using a transformation and/or mapping function?
And I still need an answer to my second point, about the error message received when running the import package.
Thanks
    Scott
