Loop through a csv file and return the number of rows in it?
What would be simplest way to loop through a csv file and
return the number of rows in it?
<cffile action="read" file="#filename#" variable="csvstr">
<LOOP THROUGH AND COUNT ROWS>
Use ListLen() with chr(13) as your delimiter: ListLen(csvstr, chr(13))
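The same idea sketched generically in Python (the sample text is invented for illustration): split the file's contents on line breaks and count the non-empty lines, which is what the ListLen()/chr(13) approach above does.

```python
# Count the rows in a CSV file's text by splitting on line breaks,
# mirroring the ListLen()/chr(13) approach above.
def count_csv_rows(text: str) -> int:
    # Normalize Windows/old-Mac line endings, then skip blank lines.
    lines = text.replace("\r\n", "\n").replace("\r", "\n").split("\n")
    return len([line for line in lines if line.strip()])

csv_text = "name,age\nAnn,34\nBob,41\n"
print(count_csv_rows(csv_text))  # header + 2 data rows -> 3
```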
Similar Messages
-
Loading a CSV file and accessing the variables
Hi guys,
I'm new to AS3 and dealt with AS2 before (I was just getting the grasp of it when they changed it).
Is it possible in AS3 to load an Excel .csv file into Flash using the URLLoader (or ???) and use the data as variables?
I can get the .csv to load and trace the values (cell1,cell2,cell3....) but I'm not sure how to collect the data and place it into variables.
Can I just create an array and access it like so.... myArray[0], myArray[1]? If so, I'm not sure why it's not working.
I must be on the completely wrong path. Here's what I have so far....
var loader:URLLoader = new URLLoader();
loader.dataFormat = URLLoaderDataFormat.VARIABLES;
loader.addEventListener(Event.COMPLETE, dataLoaded);
var request:URLRequest = new URLRequest("population.csv");
loader.load(request);
function dataLoaded(evt:Event):void {
var myData:Array = new Array(loader.data);
trace(myData[i]);
}
Thanks for any help,
Sky

Just load your csv file and use the Flash string methods to allocate those values to an array:
var myData:Array = loader.data.split(","); -
Read a csv file and read the fiscal yr in the 4th pos?
Hello ABAP Experts,
How do I write code to read a csv file and read the fiscal year in the 4th position?
Any suggestions or code are highly appreciated.
Thanks,
BWer

Hi Bwer,
Declare table itab with the required fields...
Use GUI_UPLOAD to get the contents of the file (say abc.csv) if the file is on the presentation server...
CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename            = 'c:\abc.csv'
    filetype            = 'ASC'
    has_field_separator = 'X'
  TABLES
    data_tab            = itab
  EXCEPTIONS
    file_open_error     = 1
    file_read_error     = 2
    OTHERS              = 22.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
Use OPEN DATASET if the file is on the application server.
After that, use the SPLIT command at the comma to get the contents of the 4th field...
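The split-at-comma step described above, sketched generically in Python (the sample line is invented; the real field positions depend on your file layout):

```python
# Split a CSV line at commas and pick the 4th field (index 3),
# mirroring the SPLIT-at-comma advice above.
def fourth_field(line: str) -> str:
    parts = line.split(",")
    return parts[3].strip() if len(parts) > 3 else ""

print(fourth_field("1000,ACME,USD,2007,closed"))  # -> 2007
```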
Regards,
Tanveer.
<b>Please mark helpful answers</b> -
// Code Help need .. in Reading CSV file and display the Output.
Hi All,
I am a newbie in coding and have just started learning; I have started with a console application and need your advice and suggestions.
I want to write code which reads input from a CSV file and displays, in the console application, the combination of first name and last name appended with the name of the college in the village.
The example of CSV file is
Firstname,LastName
Happy,Coding
Learn,C#
I want to display the output as
HappyCodingXYZCollage
LearnC#XYZCollage
Below is the code I have tried so far.
// Reading a CSV
var reader = new StreamReader(File.OpenRead(@"D:\Users\RajaVill\Desktop\C#\input.csv"));
List<string> listA = new List<string>();
while (!reader.EndOfStream)
{
    var line = reader.ReadLine();
    string[] values = line.Split(',');
    listA.Add(values[0]);
    listA.Add(values[1]);
    listA.Add(values[2]);
    // listB.Add(values[1]);
}
foreach (string str in listA)
{
    //StreamWriter writer = new StreamWriter(File.OpenWrite(@"D:\\suman.txt"));
    Console.WriteLine("the value is {0}", str);
}
Console.ReadLine();
Kindly advise and let me know how to read the column header of the CSV file, so I can apply my logic to display the combination of first name, last name, and the name of the college.
Best Regards,
Raja Village Sync
Beginner Coder

Very simple example:
var column1 = new List<string>();
var column2 = new List<string>();
using (var rd = new StreamReader("filename.csv"))
{
    while (!rd.EndOfStream)
    {
        var splits = rd.ReadLine().Split(',');
        column1.Add(splits[0]);
        column2.Add(splits[1]);
    }
}
// print column1
Console.WriteLine("Column 1:");
foreach (var element in column1)
    Console.WriteLine(element);
// print column2
Console.WriteLine("Column 2:");
foreach (var element in column2)
    Console.WriteLine(element);
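To the poster's follow-up question about reading the column header: the general approach is to read the first line as the header and then process the remaining rows. A minimal sketch in Python, with the file contents inlined for illustration ("XYZCollage" is taken from the poster's example output):

```python
import csv
import io

# Read the header row first, then combine FirstName + LastName with a
# college name appended, as in the poster's expected output.
def combined_names(csv_text, college="XYZCollage"):
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)                     # e.g. ['Firstname', 'LastName']
    return [row[0] + row[1] + college for row in reader if row]

data = "Firstname,LastName\nHappy,Coding\nLearn,C#\n"
for name in combined_names(data):
    print(name)  # HappyCodingXYZCollage, then LearnC#XYZCollage
```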
Mark as answer or vote as helpful if you find it useful | Ammar Zaied [MCP] -
How do you return the number of Rows in a ResultSet??
How do you return the number of Rows in a ResultSet? It's easy enough to do in the SQL query using COUNT(*) but surely JDBC provides a method to return the number of rows.
The ResultSetMetaData interface provides a method for counting the number of columns but nothing for the rows.
Thanks

No good way before JDBC 2.0. You can use JDBC 2.0's CachedRowSet.size() to retrieve the number of rows fetched by a ResultSet.
-
How to Compare 2 CSV file and store the result to 3rd csv file using PowerShell script?
I want to do the below task using powershell script only.
I have 2 csv files and I want to compare those two files and store the comparison result in a 3rd csv file. Please look at the following snap:
The image is of the csv file only.
Could anyone please help me?
Thanks in advance.
By
A Path finder
JoSwa
If a post answers your question, please click "Mark As Answer" on that post and "Mark as Helpful"
Best Online Journal

Not certain this is what you're after, but this:
#import the contents of both csv files
$dbexcel=import-csv c:\dbexcel.csv
$liveexcel=import-csv C:\liveexcel.csv
#prepare the output csv and create the headers
$outputexcel="c:\outputexcel.csv"
$outputline="Name,Connection Status,Version,DbExcel,LiveExcel"
$outputline | out-file $outputexcel
#Loop through each record based on the number of records (assuming an equal number in both files)
for ($i=0; $i -le $dbexcel.Length-1; $i++)
{
    # Assign the yes / null values to equal the word equivalent
    if ($dbexcel.isavail[$i] -eq "yes") {$dbavail="Available"} else {$dbavail="Unavailable"}
    if ($liveexcel.isavail[$i] -eq "yes") {$liveavail="Available"} else {$liveavail="Unavailable"}
    #create the line of csv content from the two input csv files
    $outputline=$dbexcel.name[$i] + "," + $liveexcel.'connection status'[$i] + "," + $dbexcel.version[$i] + "," + $dbavail + "," + $liveavail
    #output that line to the csv file
    $outputline | out-file $outputexcel -Append
}
That should do what you're looking for, or give you enough to edit it to your exact needs.
I've assumed that the dbexcel.csv and liveexcel.csv files live in the root of c:\ for this, that they include the header information, and that the outputexcel.csv file will be saved to the same place (including headers). -
Read two CSV files and remove the duplicate values within them.
Hi,
I want to read two CSV files (each containing more than 100 rows and 100 columns), remove the duplicate values within the two files, merge all the unique values, and display the result as a single file.
Can anyone help me out.
Thanks in advance.

kirthi wrote:
Can you help me....

Yeah, I've just finished... Here's a skeleton of my solution.
The first thing I think you should do is write a line-parser which splits your input data up into fields, and test it.
Then fill out the below parse method, and test it with that debugPrint method.
Then go to work on the print method.
I can help a bit along the way, but if you want to do this then you have to do it yourself. I'm not going to do it for you.
Cheers. Keith.
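As a language-agnostic sketch of the parse-and-merge idea in the skeleton below (Python here, with the file contents inlined for illustration; the poster's real inputs are input1.csv and input2.csv): read each file's header, collect every column's values into a set, and union the sets per column name.

```python
import csv
import io

# Merge the distinct values of each column across several CSV sources,
# following the numbered steps in the skeleton: read the column names,
# collect each column's values into a set, build a map columnName -> values.
def distinct_columns(csv_texts):
    data = {}                                    # column name -> set of values
    for text in csv_texts:
        reader = csv.reader(io.StringIO(text))
        header = next(reader)
        for row in reader:
            for name, value in zip(header, row):
                data.setdefault(name, set()).add(value.strip())
    return data

a = "id,color\n1,red\n2,blue\n"
b = "id,color\n2,blue\n3,green\n"
merged = distinct_columns([a, b])
print(sorted(merged["color"]))  # ['blue', 'green', 'red']
```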
package forums.kirthi;
import java.util.*;
import java.io.PrintStream;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import krc.utilz.io.ParseException;
import krc.utilz.io.Filez.LineParser;
import krc.utilz.io.Filez.CsvLineParser;
public class DistinctColumnValuesFromCsvFiles
public static void main(String[] args) {
if (args.length==0) args = new String[] {"input1.csv", "input2.csv"};
try {
// data is a Map of ColumnNames to Sets-Of-Values
Map<String,Set<String>> data = new HashMap<String,Set<String>>();
// add the contents of each file to the data
for ( String filename : args ) {
data.putAll(parse(filename));
// print the data to output.csv
print(data);
} catch (Exception e) {
e.printStackTrace();
private static Map<String,Set<String>> parse(String filename) throws IOException, ParseException {
BufferedReader reader = null;
try {
reader = new BufferedReader(new FileReader(filename));
CsvLineParser.squeeze = true; // field.trim().replaceAll("\\s+"," ")
LineParser<String[]> parser = new CsvLineParser();
int lineNumber = 1;
// 1. read the column names (first line of file) into a List
// 2. read the column values (subsequent lines of file) into a List of Set's of String's
// 3. build a Map of columnName --> columnValues and return it
} finally {
if(reader!=null)reader.close();
private static void debugPrint(Map<String,Set<String>> data) {
for ( Map.Entry<String,Set<String>> entry : data.entrySet() ) {
System.out.println("DEBUG: "+entry.getKey()+" "+Arrays.toString(entry.getValue().toArray(new String[0])));
private static void print(Map<String,Set<String>> data) {
// 1. get the column names from the table.
// 2. create a List of List's of String's called matrix; logically [COL][ROW]
// 3. print the column names and add the List<String> for this col to the matrix
// 4. print the matrix by iterating columns and then rows
} -
Webi report - save as .csv is bringing double the number of rows.
Hello all,
I am running a webi report and then saving it as .csv file to my desktop but when I save the file it is giving me double the number of rows. In my report I have employee, employee key, attributes and Org unit and some measures.
The .csv file has one additional column at the end which does not have any header and that column has "1" in the rows which should be legitimate and "0" in the rows which are so called duplicates, in these duplicate rows there is all data except it is missing the employee key and org unit key.
If I save as excel I get the right number of rows.
Has anyone seen this issue before?
Thanks in advance,

Exporting to csv is different from exporting to Excel.
If you have any filter on your crosstab or table and you export it to excel it will show you your data according to this filter. Let's say in your table you have amount 0 and 1 and you filter that column so it will show only those records where amount = 1. If you export it to excel you'll get what you see in your WebI report, that's only records with amount = 1.
But if you export to csv the same Webi report with the same crosstab or table filtered by amount=1, the csv export will ignore this filter and your csv will include amount 0 and amount 1 records.
I don't know if that's a WebI's bug, but that's what has happened to me.
A workaround could be adding your filter directly in the query pane, not filtering your table columns.
Also check your query pane to see what object is bringing that extra column in your table. -
Displaying a message and fixing the number of rows in a table
Hi Experts,
I have a requirement like this
There is a table where I need to display a message when there are no records, and no rows should be visible in that table.
Also, when the records are populated from the context, I need to fix the number of rows in that table and display the records.
Please let me know how this can be achieved.
Also, in the table I have a Link to URL; please let me know how to handle this reference. When I set the reference property, it gives me an error stating that the file doesn't exist when the table gets loaded.
Thanks in Advance
Regards,
Palani

Hi
Oh!! You should have explained in the first thread itself that you want to display a JPG image or other WDWebResourceType taken from the backend.
1. So this is not a URL at all
2. You have to convert binary data to WDWebResource then display it in either Image UI element or other (like PDF , txt etc)
3.
try
{
    // Read the datasource of the FileUpload
    IWDResource res = wdContext.currentContextElement().getResource();
    InputStream in = res.read(false);
    ByteArrayOutputStream bOut = new ByteArrayOutputStream();
    int length;
    byte[] part = new byte[10 * 1024];
    while ((length = in.read(part)) != -1)
        bOut.write(part, 0, length);
    in.close();
    bOut.close();
    IPrivateUploadCompView.IImageTableElement ele = wdContext.nodeImageTable().createImageTableElement();
    ele.setImage(wdContext.currentContextElement().getResource().getUrl(0));
    ele.setText(res.getResourceName());
    wdContext.nodeImageTable().addElement(ele);
}
catch (Exception e)
{
    wdComponentAPI.getMessageManager().reportWarning(e.toString());
}
Here I assume that you convert that data to IWDResource type or
4.
WDWebResource.getWebResource(wdContext.currentContextElement().getresource(), type);
// getResource of type binary which u read from BAPI and set it in local context , type is MIMETYPE or hardcode it as "JPG"
5. Further help
[Help1|To Display an Image in Webdynpro Java from Standard Function Module]
[Help2|http://wiki.sdn.sap.com/wiki/display/KMC/GettinganimagefromKMDocumentstobeusedinWeb+Dynpro]
[Help3|http://wiki.sdn.sap.com/wiki/display/KMC/GettinganimagefromKMDocumenttobeusedinWeb+DynPro]
The code might look strange at first; please do some research in SDN. I did my best at this level.
Best Regards
Satish Kumar -
Automator - Loop through a text file and process data through a 3rd party software
Just stumbled on Automator the other day (I am a mac n00b) and would like to automate the processing of a text file, line by line, using a third party tool. I would like Automator to loop through the text file one line at a time, copy the string and keep it as a variable. Next, place the variable data (the copied string) into the text field of the 3rd party software for processing. Once the processing is complete, I would like Automator to fetch the next line/string for processing. I see items like "copy from clipboard" and "variables" within the menu, but I am not finding much documentation on how to utilize this tool; I just hear how potentially powerful it is.
The 3rd party software is not a brand name, just something made for me to process text. I may have to use mouse clicks or tabs + [return] to navigate with Automator. A term I heard on Bn Walldie's itunes video series was "scriptable software", which I don't think this 3rd party app would be.
Kind regards,
jw

Good news and bad news...
The good news is that it should be entirely possible to automate your workflow.
The bad news is that it will be a nightmare to implement via Automator, if it's even possible.
Automator is, essentially, a pretty interface on top of AppleScript/Apple Events, and with the pretty interface comes a certain stranglehold on features. Knowing how to boil rice might make you a cook, but understanding flavor profiles and ingredient combinations can make you a chef, and it's the same with AppleScript and Automator. Automator's good at getting you from point A to point B, but if there are any bumps in the road (e.g. the application you're using isn't scriptable) then it falls apart.
What I'm getting at is that your requirements are pretty simple to implement in AppleScript because you can get 'under the hood' and do exactly what you want, as opposed to Automator's restricted interface.
The tricky part is that if no one else can see this app it's going to be hard to tell you what to do.
I can give you the basics on reading a file and iterating through the lines of text in it, and I can show you how to 'type' text in any given application, but it may be up to you to put the pieces together.
Here's one way of reading a file and working through each line of text:
-- ask the user for a file:
set theFile to (choose file)
-- read the file contents:
set theFileContents to (read file theFile)
-- break out the lines/paragraphs of text:
set theLines to paragraphs of theFileContents
-- now iterate through those lines, one by one:
repeat with eachLine in theLines
-- code to perform for eachLine goes here
end repeat
Once you have a line of text (eachLine in the above example) you can 'type' that into another application via something like:
tell application "AppName" to activate
tell application "System Events"
tell process "AppName"
keystroke eachLine
end tell
end tell
Here the AppleScript is activating the application and using System Events to emulate typing the contents of the eachLine variable into the process. Put this inside your repeat loop and you're almost there. -
Stage tab delimited CSV file and load the data into a different table
Hi,
I'm pretty new to writing PL/SQL packages.
We are using Application Express for our development. We get CSV files which are stored as BLOB content in a table. I need to write a trigger that gets executed once the user uploads the file, parses through the Blob content, and uploads or stages the data in a different table.
I would like to see if there is any tutorial or article that explains the above process, with an example or sample code. Any help in this regard will be highly appreciated.

Hi,
This is slightly unusual but at the same time easy to solve. You can read through a blob using the dbms_lob package, which is one of the Oracle supplied packages. This is presumably the bit you are missing, as once you know how you read a lob the rest is programming 101.
Alternatively, you could write the lob out to a file on the server using another built in package called utl_file. This file can be parsed using an appropriately defined external table. External tables are the easiest way of reading data from flat files, including csv.
I say unusual because why are you loading a csv file into a blob? A clob is almost understandable but if you can load into a column in a table why not skip this bit and just load the data as it comes in straight into the right table?
All of what I have described is documented functionality, assuming you are on 9i or greater. But you didn't provide a version so I can't provide a link to the documentation ;)
HTH
Chris -
Read csv file and insert the data to a list in sharePoint
Hi everyone,
I wrote code that reads all the data from a csv file, but I also need to insert all that data into a new list in SharePoint.
How can I do this?
Plus, I need to read from the csv file once a day at a specific hour. How can I do this? Thank you so much!!

Did you look at the example I posted above?
ClientContext in CSOM will allow you to get a handle on the SharePoint objects;
http://msdn.microsoft.com/en-us/library/office/ee539976(v=office.14).aspx
w: http://www.the-north.com/sharepoint | t: @JMcAllisterCH | YouTube: http://www.youtube.com/user/JamieMcAllisterMVP -
Calling a file and counting the number of words in it-please help!!
* @(#)WordCounterTwo.java
* WordCounterTwo application
* @author
* @version 1.00 2007/11/17
import java.util.Scanner;
public class WordCounterTwo {
public static void main(String[] args) {
Scanner keyboard = new Scanner(System.in);
String fileName;
int countWords;
System.out.println("Please enter the name of the file: ");
fileName = keyboard.nextLine();
System.out.println(countWords.lastIndexOf());
}

I am getting an error message as follows:
cannot find symbol constructor StringTokenizer() on line 17
I am asking the user to enter the name of a file, and the output is supposed to display the number of words in the chosen file. I'm not sure if I am going about this the right way, and I'm not sure why I am getting the error messages.
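As a minimal sketch of the intended logic (counting whitespace-separated words, which is what StringTokenizer's default delimiters do in Java), here it is in Python; the sample sentence is invented:

```python
# Count the words in a piece of text by splitting on whitespace,
# mirroring StringTokenizer's default delimiters.
def count_words(text: str) -> int:
    return len(text.split())

sample = "Please enter the name of the file"
print(count_words(sample))  # -> 7
```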
* @(#)WordCounter.java
* WordCounter application
* @author
* @version 1.00 2007/11/17
import java.util.Scanner;
import java.util.StringTokenizer;
public class WordCounter {
public static void main(String[] args) {
String sentence;
Scanner keyboard = new Scanner(System.in);
StringTokenizer words = new StringTokenizer(); //line 17
int numberWords;
System.out.println("Please enter a sentence");
sentence = keyboard.nextLine();
sentence = words.nextToken();
while (words.hasMoreTokens())
numberWords++;
System.out.println(numberWords);
} -
CLR Procedure and returning large number of rows
I have a CLR stored procedure coded in C# that retrieves data from a web service, and returns that data using SendResultsStart/SendResultsRow/SendResultsEnd. This all works fine, except when the data from the web service is tens of thousands of records or more.
The code itself takes about 3 minutes on average to do all it's work with around 50000-60000 records, but the procedure does not return in SSMS for about another 10-15 minutes, during which time the CPU and memory usage go up significantly.
To rule out any of the CLR code as the culprit, I created a very simple CLR procedure that just loops to return 100000 records with int and nvarchar(256) fields with the current count, and "ABC" followed by the count. Here is the code:
[Microsoft.SqlServer.Server.SqlProcedure]
public static void ABC()
{
    System.Diagnostics.Stopwatch ExecuteTimer = System.Diagnostics.Stopwatch.StartNew();
    SqlMetaData[] ResultMetaData = new SqlMetaData[2];
    ResultMetaData[0] = new SqlMetaData("count", SqlDbType.Int);
    ResultMetaData[1] = new SqlMetaData("text", SqlDbType.NVarChar, 256);
    SqlContext.Pipe.SendResultsStart(new SqlDataRecord(ResultMetaData));
    for (int x = 0; x < 100000; x++)
    {
        SqlDataRecord ResultItem = new SqlDataRecord(ResultMetaData);
        ResultItem.SetValue(0, x);
        ResultItem.SetValue(1, "ABC" + x.ToString());
        SqlContext.Pipe.SendResultsRow(ResultItem);
    }
    SqlContext.Pipe.SendResultsEnd();
    TimeSpan ExecTime = ExecuteTimer.Elapsed;
    SqlContext.Pipe.Send("Elapsed Time: " + ExecTime.Minutes.ToString() + ":" + ExecTime.Seconds.ToString() + "." + ExecTime.Milliseconds.ToString());
}
I then executed procedure ABC in SSMS, and it took 21 minutes to return. All of the data rows were visible after just a couple of seconds, but the query continued to run as the CPU and memory went up again.
Is this really how long it should take to return 100000 rows, or am I missing something? Is there a better approach than using SendResultsStart/SendResultsRow/SendResultsEnd?
I've googled this to death and haven't found anything that helped or even explained why this is.
I would greatly appreciate any suggestions or alternate methods to achieve this faster.
Thanks!
Alex

When you create a new object, space on the garbage-collected heap is allocated for that object, and the address is stored in a reference. Some time later, there will no longer be any references that hold the address of the allocated object. It doesn't matter whether the reference count went to 0 because the reference was set to null, or because the reference was on the stack and is no longer in lexical scope; the end result is the same: the garbage collector will, at some point, have to perform the book-keeping operations necessary to identify that the space allocated for the now-unreferenced object can be re-used. When, on the other hand, you only create a single SqlDataRecord object and hold onto the reference, all of the book-keeping operations associated with creating 100,000 objects are eliminated. This is why the documentation for the SqlDataRecord class advises that:
When writing common language runtime (CLR) applications, you should re-use existing SqlDataRecord objects instead of creating new ones every time. Creating many new SqlDataRecord objects could severely deplete memory and adversely affect performance. -
Universe object that returns the number of rows in table?
Is it possible to create a Universe object to support the following SQL query:
SELECT * FROM (SELECT ROWNUM rownum1 FROM TABLE) WHERE rownum1 = (SELECT (MAX(ROWNUM)) FROM TABLE)

Amr,
Hey dude, welcome back, long time no hear from you.
You were on the right track with "number of rows", but here is the correct syntax:
=NumberOfRows([Query 1])
Thanks,
John