JTable and large CSV files

I have been looking for a clear answer (with source code) to a problem commonly asked in the forums: "How do I display a large comma-delimited (CSV) file in a JTable without encountering a Java heap space error?" One solution is described at http://forum.java.sun.com/thread.jspa?forumID=57&threadID=741313 but no source code is provided. Can anyone provide some example code showing how this solution works? It is not clear to me how the getValueAt(r, c) method can be used to get the (r+1)th row if only r rows are in the TableModel. I greatly appreciate any help.

Perhaps if I post my code, I might get a little help. First, my class that extends AbstractTableModel:
import javax.swing.table.AbstractTableModel;

public class DataTableModel extends AbstractTableModel {
     private static final long serialVersionUID = 1L;
     Object[][] data;
     DataColumnFormat[] colFormat;
     int nrows, ncols, totalRows;

     public DataTableModel(Object[][] aData, DataColumnFormat[] aColFormat, int aTotalNumberOfRows) {
          data = aData;
          colFormat = aColFormat;
          nrows = data.length;
          ncols = data[0].length;
          // Number of rows in the entire data file.
          // This will be larger than nrows if the data file has more than 1000 rows.
          totalRows = aTotalNumberOfRows;
     }

     public int getRowCount() {
          return nrows;
     }

     public int getColumnCount() {
          return ncols;
     }

     public String getColumnName(int aColumnIndex) {
          return colFormat[aColumnIndex].getName();
     }

     public Object getValueAt(int r, int c) {
          // Per-column formatting (e.g. when colFormat[c].isDouble()) could be applied here.
          return data[r][c];
     }

     public boolean isCellEditable(int nRow, int nCol) {
          return true;
     }

     public Class<?> getColumnClass(int c) {
          return getValueAt(0, c).getClass();
     }

     protected void updateData() {
          // Replace values in data[][] with the next block of rows from the large data file.
     }
}

Suppose data = new Object[1000][100] but my CSV file has 5,000,000,000 lines (to exaggerate). As I understand it, getValueAt(r, c) could never go beyond r=1000 and c=100. So how would I update data with the next 1000 lines by way of the getValueAt(r, c) method? Moreover, how would I implement this so that the table appears to scroll continuously? I know someone has a solution to this problem. Thanks.
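From the linked thread, the usual approach is paging: getRowCount() reports the total number of rows in the file so the scrollbar covers the whole file, while getValueAt(r, c) translates the absolute row index r into an index inside a small in-memory buffer and reloads that buffer from disk whenever r falls outside it. Below is a minimal sketch of that idea (not code from the linked post); the page size, the plain split-on-comma parsing, and the assumption that the total row count and column count are known up front (for example from one initial pass over the file) are all mine.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import javax.swing.table.AbstractTableModel;

public class PagedCsvTableModel extends AbstractTableModel {
     private static final int PAGE_SIZE = 1000;
     private final String csvPath;
     private final int totalRows;
     private final int columnCount;
     private String[][] page = new String[0][];
     private int pageStart = -1;

     public PagedCsvTableModel(String csvPath, int totalRows, int columnCount) {
          this.csvPath = csvPath;
          this.totalRows = totalRows;
          this.columnCount = columnCount;
     }

     public int getRowCount() {
          return totalRows;                          // size of the whole file, not of the buffer
     }

     public int getColumnCount() {
          return columnCount;
     }

     public Object getValueAt(int r, int c) {
          if (r < pageStart || r >= pageStart + page.length) {
               loadPage(r - (r % PAGE_SIZE));        // requested row is outside the buffer
          }
          return page[r - pageStart][c];
     }

     // Re-reads the file and keeps only one page of rows in memory.
     // Skipping lines this way gets slow near the end of a huge file; a real
     // implementation would remember byte offsets per page (e.g. with a RandomAccessFile).
     private void loadPage(int firstRow) {
          String[][] rows = new String[Math.min(PAGE_SIZE, totalRows - firstRow)][];
          try (BufferedReader in = new BufferedReader(new FileReader(csvPath))) {
               for (int i = 0; i < firstRow; i++) {
                    in.readLine();                   // skip everything before the page
               }
               for (int i = 0; i < rows.length; i++) {
                    rows[i] = in.readLine().split(",", -1);
               }
          } catch (IOException e) {
               throw new RuntimeException("Could not read " + csvPath, e);
          }
          page = rows;
          pageStart = firstRow;
     }
}

Because the model claims the full row count, the JScrollPane scrollbar is sized for the entire file and scrolling appears continuous, yet only the rows JTable actually paints are ever requested through getValueAt(r, c).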

Similar Messages

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. Also a method for printing back out to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading everything into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file that has to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
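    As a rough sanity check on the memory question, a sketch like the one below (the file name and naive comma-split parsing are assumptions) simply reads every row of a roughly 10 MB CSV into a List and prints how much heap that ended up using:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.ArrayList;
        import java.util.List;

        public class CsvInMemory {
             public static void main(String[] args) throws Exception {
                  List<String[]> rows = new ArrayList<String[]>();
                  try (BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
                       String line;
                       while ((line = in.readLine()) != null) {
                            rows.add(line.split(",", -1));   // naive parsing: quoted fields not handled
                       }
                  }
                  Runtime rt = Runtime.getRuntime();
                  long usedMB = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                  System.out.println(rows.size() + " rows loaded, roughly " + usedMB + " MB of heap in use");
             }
        }

    If even that turns out to be too heavy, the relational database suggestion above is the natural next step.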

  • Cannot load large CSV files in SignalExpress ("Not enough memory to complete this operation" error)

    Hi guys,
    I'm new here and have browsed some of the related topics regarding my problem, but could not seem to find anything that helps me fix it, so I decided to post this.
    I currently have a waveform saved from an oscilloscope that is quite big (around 700 MB, CSV file format) and I want to view it on my PC using SignalExpress. Unfortunately, when I try to load the file using "Load/Save Signals -> Load From ASCII", I always get the "Not enough memory to complete this operation" error. How can we view and analyze large waveform files in SignalExpress? Is there a workaround for this?
    Thanks,
    Louie
    P.S. I'm very new to SignalExpress and haven't modified any settings on it.

    Hi Louie,
    Are you encountering a read-only message when you try to save the boot.ini file? If so, you can try this method: right-click on My Computer >> select "Properties", and go to the "Advanced" tab. Select "Settings", and on the next screen there is a button called Edit. If you click on Edit you should be able to modify the "/3GB" tag in boot.ini. Are you able to change it in this manner? After a reboot, you can reload the file to see if it helps.
    To open a file in SignalExpress, a contiguous chunk of memory is required. If SignalExpress cannot find a contiguous memory chunk that is large enough to hold this file, then an error will be generated. This can happen when fragmentation occurs in memory, and fragmentation (memory management) is managed by Windows, so unfortunately this is a limitation that we have.
    As an alternative, have you looked at NI DIAdem before? It is a software tool that allows users to manage and analyze large volumes of data, and has some unique memory management method that lets it open and work on large amounts of data. There is an evaluation version which is available for download; you can try it out and see if it is suitable for your application. 
    Best regards,
    Victor
    NI ASEAN

  • Drag and Drop CSV file onto a table in Number

    Hi everyone !
    I really love the new version of iWork. However, I can't find a feature I was heavily using, which is dragging and dropping a CSV file into Numbers to create the table associated with that file.
    Has anyone found a way to re-activate this feature? Or will I have to open a new Numbers spreadsheet each time I want to import a CSV?
    Thanks for your help !
    Vincent.

    Jerrold Green1 wrote:
    I have very few photos in my Contacts, so I saw the problem on my first try.
    Exactly what happens here too when dragging from Contacts; I typically don't have photos there either. I submitted a "bug" report via Provide Numbers Feedback.
    Do you still get misalignment of column headers when dragging or pasting csv or tsv (i.e. not from Contacts)?  I've had pretty good results with csv and tsv here in v. 3.2, a welcome change from the earlier releases.
    SG

  • Uploading Large csv file from Local File

    I have a 6 GB CSV file which I created on my local machine, named TrainDF.csv. I can't upload it directly as it exceeds the 1.95 GB size limit for uncompressed files. However, I tried saving it as an RData file (as well as a zip of that RData file) and uploading that; my R code then throws an "unable to open connection" error when I try to load the data in an R Script with this code.
    load("src/TrainDF.RData"); maml.mapOutputPort("TrainDF");
    I have even tried:
    load("/src/TrainDF.RData");
    load("./src/TrainDF.RData");
    load("~/src/TrainDF.RData");
    TrainDF <- load("/src/TrainDF.RData");
    TrainDF <- load("./src/TrainDF.RData");
    TrainDF <- load("~/src/TrainDF.RData");
    Same error for everything. What would be the simplest, most straightforward way to get a 6 GB CSV file usable as an input for an experiment from my local machine? I may need step-by-step instructions on this one depending on the answer.
    Also, if a complete step by step answer to this question can be found somewhere please post a link to it.
    Thanks in advance,
    Bob
    P.S. I know the data is actually there; I am able to download it.

    Yes, all appears to work now. I have a bad habit of using "CSV file" and "data frame" interchangeably when I talk about R, because they are so trivial to convert between. I meant a saved (RData) object containing a data frame that was originally loaded from a CSV file. My problems were not knowing specifically how to write the path to the src folder, and I didn't even think about using a different input. Hopefully documentation will be coming out in the future that clarifies these things.
    In my opinion, information on how to get data in and out of Azure ML is somewhat lacking. Although, I must say this is truly impressive work you guys are doing, and not only that, you are getting it done at a mind-boggling pace. So good job on that.
    Thanks for your great help and fast response!
    Thanks for your great help and fast response!

  • HTMLDB_tools, processing a very large csv file in a collection

    Hi Guys,
    I'm new to APEX and trying to load approximately 1,000,000 rows into a table from a CSV file. Does anyone have a better way of doing it than htmldb_tools.parse_file (which is very slow)?
    Cheers

    It's not Apex, but you could use SQL*Loader. It's really very fast!
    greets, Dik
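    For anyone who hasn't used SQL*Loader before, a control file for a comma-separated load typically looks something like this sketch (the table and column names here are hypothetical and would need to match your own schema):

        LOAD DATA
        INFILE 'large_file.csv'
        APPEND
        INTO TABLE my_staging_table
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        TRAILING NULLCOLS
        (col1, col2, col3)

    It is run from the command line with something like sqlldr userid=scott/tiger control=load.ctl, and direct path loading (DIRECT=TRUE) is a large part of why it is so much faster than row-by-row parsing in PL/SQL.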

  • Import and view CSV Files into OTM

    Hi All,
    Can you please help me with importing CSV files into Oracle Test Manager and viewing the imported files in OTM?

    Hi,
    I just googled and found this link :)
    http://oracleats.com/blog/?p=785

  • Quicktime and large mpeg2 files

    Can QuickTime export large (5 or 6 gigabyte) MPEG-2 files?
    I put a DVD (non-commercial, non-copyrighted) onto my hard drive and it plays back fine with other apps like Media Player, but QuickTime can't play or export it. It works for about the first 15 minutes of the video, then the picture freezes though the sound continues. Also, the data size is messed up and is listed as a negative number (such as "-2.34257934359").
    thanks
    dell   Windows XP  

    QT doesn't export MPEG-2 and has some limit on file length. The freeware MPEG Streamclip will play large MPEG-2 files as long as you have purchased the Apple MPEG-2 Playback Component ($20 from the Apple Store). The component doesn't play the longer files by itself, but this freeware apparently does if you have the component installed.
    With the component and Streamclip you can export both sound and video to other formats, but you can't export to MPEG-2. Streamclip also allows some editing of MPEG-2 files.

  • LV 8.2 Vs. LV 7.1 and large VI file size

    Hi,
    I upgraded a large piece of software developed in LV 7.1 to LV 8.2. After some error corrections, I saw that a large VI (a VI template, .vit) that was 18 MB in LV 7.1 is 2.6 MB in LV 8.2!
    Editing this VI is very, very slow, and it's unacceptable. When I save it I can take a coffee break!
    Also, opening a reference to this VI takes 35-40 s, but in LV 7.1 two seconds were enough.
    Finally, I also see a reduction in file size for some VIs that had large files (5-8 MB) in LV 7.1.
    Any suggestions about this big problem?
    Is it possible that LV 8.2 compresses (like ZIP) large VIs?
    Thanks and regards

    The extreme difference in file size indicates that LabVIEW is able to "squeeze" your VI down much more than the typical 50% that ZLIB is capable of. LabVIEW 8.0 introduced compression of the VI; see this LAVA thread. If your VI contains items that are highly compressible (BMPs, large strings of repeating text?), then the CPU must work harder to "inflate" and "deflate" the file when it is loaded or saved, making the development environment slower. Your reported values lead me to believe there is SOMETHING in your VI that could be improved.
    The other solution is to improve the computer's efficiency: start by cleaning up the hard disk and optimizing it, then upgrade memory, then as a last resort upgrade the CPU. LV 8.2 takes more memory. If your physical RAM is not up to par and your disk drive is full, you might be paging to disk a lot, and that would slow you down.

  • How do I split a Large CSV file into Multiple CSV's Using Powershell

    I am a novice at PowerShell but it looks to be the best tool for this task. I have a CSV file that looks like this:
    Date,Policy,Application
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    Is it possible to split this CSV into multiple CSVs based on "Application"?
    Let's say the output might look like:
    None.csv
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    AppBiz.csv
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    PeopleBiz.csv
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    Any help would be greatly appreciated

    I think this might be what you want:
    Import-Csv applications.csv |
        Group-Object Application |
        ForEach-Object {
            $_.Group | Export-Csv "$($_.Name).csv" -NoTypeInformation
        }
    Very nice! 4x faster..
    I doubt the OP will get what you just did there..

  • How to add dimension in dimension_values.csv and schema.csv file

    How do I add one dimension in the dimension_values.csv and schema.csv files?

    I'm not sure on the equation Ventzo, so please tell me if I've got this wrong.
    This should create a text file on the desktop with a name of the folder you are in plus the time.
    #target bridge
    if (BridgeTalk.appName == "bridge") {
        var quadDetails = MenuElement.create("command", "Get Quadrature", "at the end of Tools", "QuadDetails");
        quadDetails.onSelect = function () {
            var file = new File(Folder.desktop + "/" + decodeURI(Folder(app.document.presentationPath).name) + time() + ".txt");
            file.open("w", "TEXT", "????");
            $.os.search(/windows/i) != -1 ? file.lineFeed = 'windows' : file.lineFeed = 'macintosh';
            file.open('w');
            var sels = app.document.selections;
            for (var a in sels) {
                var myThumb = new Thumbnail(sels[a]);
                var Resolution = myThumb.core.quickMetadata.xResolution;
                if (Resolution == 0) Resolution = 72;
                var CM = Resolution / 2.54;
                var m = CM * 100;
                var Height = (myThumb.core.quickMetadata.height / m);
                var Width = (myThumb.core.quickMetadata.width / m);
                file.writeln(decodeURI(sels[a].spec.name) + " - " + (Height * Width).toFixed(4));
            }
            file.close();
            alert(decodeURI(file.name) + " has been created on the Desktop");
        };
    }
    // Builds the "-HH_MM_SSAM" style suffix used in the output file name.
    function time() {
        var date = new Date();
        var d = date.getDate();
        var day = (d < 10) ? '0' + d : d;
        var m = date.getMonth() + 1;
        var month = (m < 10) ? '0' + m : m;
        var yy = date.getYear();
        var year = (yy < 1000) ? yy + 1900 : yy;
        var digital = new Date();
        var hours = digital.getHours();
        var minutes = digital.getMinutes();
        var seconds = digital.getSeconds();
        var amOrPm = "AM";
        if (hours > 11) amOrPm = "PM";
        if (hours > 12) hours = hours - 12;
        if (hours == 0) hours = 12;
        if (minutes <= 9) minutes = "0" + minutes;
        if (seconds <= 9) seconds = "0" + seconds;
        var todaysDate = "-" + hours + "_" + minutes + "_" + seconds + amOrPm;
        return todaysDate.toString();
    }

  • How to validate and import csv file to mysql? (Concurrent update)

    Hi
    I have a CSV file with various columns of data which need to be validated and displayed in a JSP page. After validation, the data needs to be imported into a MySQL DB.
    May I ask how I can retrieve the data, validate it, and import it row by row into the DB?
    Also, the program needs to handle concurrent updates of the DB, i.e. user A can import data through the JSP page while user B uploads a CSV file to insert into the table.
    Thanks

    techissue2008 wrote:
    May I ask how can I retrieve those data?
    Google for "jsp file upload". I highly recommend the Apache Commons FileUpload API.
    validate them?
    Just write Java logic accordingly. You may find the java.lang.String API useful.
    and import them row by row to DB?
    You can use the JDBC API for this. There is a JDBC tutorial at Sun.com; Google can find it.
    Also, the program needs to handle concurrent update of DB, i.e. user A can import data through the JSP page while user B can upload a CSV file to insert into the table.
    Just write threadsafe code.
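    As a minimal sketch of the "import them row by row" part, plain JDBC with a PreparedStatement batch inside one transaction is usually enough (the connection URL, table name and three-column layout below are assumptions, not from the original question):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class CsvImporter {
             public static void main(String[] args) throws Exception {
                  try (Connection con = DriverManager.getConnection(
                            "jdbc:mysql://localhost:3306/testdb", "user", "password");
                       BufferedReader in = new BufferedReader(new FileReader("upload.csv"));
                       PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO uploads (col_a, col_b, col_c) VALUES (?, ?, ?)")) {
                       con.setAutoCommit(false);            // one transaction for the whole file
                       String line;
                       while ((line = in.readLine()) != null) {
                            String[] fields = line.split(",", -1);
                            if (fields.length != 3) {
                                 continue;                   // simplest possible validation: skip malformed rows
                            }
                            ps.setString(1, fields[0]);
                            ps.setString(2, fields[1]);
                            ps.setString(3, fields[2]);
                            ps.addBatch();
                       }
                       ps.executeBatch();
                       con.commit();                         // other users see the rows appear atomically
                  }
             }
        }

    Running each upload in its own transaction like this also keeps concurrent imports from interleaving half-finished data.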

  • Out of Memory Error and large video files

    I created a simple page that links a few large video files (2.5 GB total size). On Preview or FTP upload, Muse crashes and gives an Out of Memory error. How should we handle very large files like this?

    Upload the files to your host using an FTP client (e.g. FileZilla) and hyperlink to them from within your Muse site.
    Muse is currently not designed to upload files this large. The upload functionality takes the simple approach of reading an entire linked file into RAM and then uploading it. Given Muse is currently a 32-bit application, it's limited to using 2 GB of RAM (or less) at any given time, regardless of how much RAM you have physically installed. We should add a check to the "Link to File..." feature so it rejects files larger than a few hundred megs and puts up an explanatory alert. (We're also hard at work on the move to being a 64-bit app, but that's not a small change.)
    In general, your site visitors will have a much better experience viewing such videos if you upload them to a service like YouTube or Vimeo rather than hosting them yourself. Video hosting services provide a huge amount of optimization of video delivery that's not present on standard hosting: automatic resizing of the video to the appropriate resolution for the visitor's device (rather than potentially downloading a huge amount of unneeded data), transcoding to the video format required by the visitor's browser (rather than either having to do so yourself or having some visitors unable to view your video), automatic distribution of a highly viewed video to multiple data centers for better performance in multiple geographies, and no doubt tons of other things I'm not thinking of.

  • DRM and configuring CSV files for OBIEE apps

    How does DRM impact configuring the CSV files for Informatica on the OBIEE apps?
    Do we still need to configure the buckets in the CSV for chart of accounts, custom calendar etc?

    Hi,
    I am also trying to integrate DRM with OBIEE. Can you please provide me with more details on this? Also, I am trying to build a report in tree format to represent the DRM hierarchy. Can you throw some light on this?

  • Sharepoint Foundation 2010 and Large Media Files?

    I have already seen links giving instructions in how to raise the default 50MB upload limit to up to 2GB, but it doesn't seem like a good idea based on all the caveats and warnings about it.
    If we occasionally need to give external SharePoint users access to files much larger than 50 MB (software installation files, audio recordings and video recordings), even though most documents are much smaller than 50 MB (Office documents), what is the best solution that does not involve third-party external services such as OneDrive, Azure or Dropbox? We must host all of our files on premises.
    The SharePoint server is Internet accessible, but it requires AD authentication to log in and access files.
    Some have recommended file server shares for the larger files, but the Internet users only have AD accounts used to access the SharePoint document libraries; they do not have the VPN that would be needed to reach an internal file share.
    I have heard of FTP and SFTP, but the users need something more user-friendly that doesn't require any application other than their browser and that will use their existing AD credentials, and we need auditing of who is uploading and downloading files.
    Is there any other solution other than just raising the file limit to 1 or 2GB just for a handful of large files in a document library full of mostly sub 50MB files?

    I had a previous post about performance impacts on the upload/download of large content on SharePoint.
    Shredded storage has got little to do with this case, as it handles Office documents shared on SharePoint, being edited on Office client, and saving back only the differences, therefore, lightening up the throughput.
    These huge files are not to be edited, they're uploaded once.
    It's a shame to expose this SharePoint Farm on the extranet, just because of a handful of huge files.
    Doesn't the company have a web server in the DMZ hosting an extranet portal? Can't that extranet portal feature a page that discovers the files intended to be downloaded from outside and then acts as a reverse proxy to stream them out?
    Or, it may be a chance to build a nice Portal site on SharePoint.
