Problem reading and plotting large .lvm files

Hi everyone,
I have some large .lvm files that I need to process offline; however, the files are quite large and I am regularly getting "out of memory" messages (7 channels sampled at 4k for 15 minutes or so, maybe longer). I have managed to reduce how often this message occurs by first converting the .lvm files to TDMS, then plotting the TDMS data, but I still get the "out of memory" error somewhat regularly. I also down-sample the data to 2k, but that doesn't help a great deal.
Any suggestions on how I can handle this data? I have read a number of online resources related to managing large data sets (e.g. http://www.ni.com/white-paper/3625/en/) but I am not sure how to implement these suggestions.
Basically, I want to view the content of the entire file, then use queues to extract selected data subsets into another while loop that handles the analysis/processing (producer/consumer). I do this regularly for smaller files, so the issue is mainly how to manage the large files. Decimating the data for the initial whole-data plot may not work, as I have spikes 10 ms wide in some channels that I need to see in the main plot.
Any help would be appreciated.
Many thanks,
Jack

jcannon, I did some quick math and I don't think you should be reaching the memory limit of LabVIEW. However, it is possible that you are running out of contiguous memory on your computer while the program is running. See this link for a quick brief about contiguous memory.
If I were you, I would try to reduce the number of times LabVIEW copies information in memory. Use Show Buffer Allocations to find out where in your code you are making copies of memory.
best of luck!
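
For anyone trying the chunked approach described in the question outside of LabVIEW, here is a minimal Java sketch of one common way to build the overview plot without holding the whole file in memory: read the tab-delimited .lvm body in blocks and keep only each block's per-channel minimum and maximum, so 10 ms spikes still survive the decimation. The file name, block size, and column layout are assumptions for illustration, as is the ***End_of_Header*** marker.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/** Rough sketch: block-wise min/max decimation of the data section of a tab-delimited .lvm file. */
public class LvmOverview {
    public static void main(String[] args) throws IOException {
        final int channels = 7;        // channel count from the question
        final int blockSize = 2000;    // samples per overview point (assumed)
        double[] min = new double[channels];
        double[] max = new double[channels];
        List<double[]> overview = new ArrayList<>();   // one min/max set per block
        int count = 0;

        try (BufferedReader br = new BufferedReader(new FileReader("capture.lvm"))) { // hypothetical file
            String line;
            boolean inData = false;
            while ((line = br.readLine()) != null) {
                if (!inData) {                                       // skip the LVM header section
                    inData = line.startsWith("***End_of_Header***");
                    continue;
                }
                String[] cols = line.trim().split("\t");
                if (cols.length < channels + 1) continue;            // column 0 is assumed to be time
                double[] vals = new double[channels];
                try {
                    for (int c = 0; c < channels; c++) {
                        vals[c] = Double.parseDouble(cols[c + 1]);
                    }
                } catch (NumberFormatException e) {
                    continue;                                        // skip column-name or comment rows
                }
                for (int c = 0; c < channels; c++) {
                    if (count == 0 || vals[c] < min[c]) min[c] = vals[c];
                    if (count == 0 || vals[c] > max[c]) max[c] = vals[c];
                }
                if (++count == blockSize) {                          // emit one overview point per block
                    double[] pair = new double[2 * channels];
                    for (int c = 0; c < channels; c++) {
                        pair[2 * c] = min[c];
                        pair[2 * c + 1] = max[c];
                    }
                    overview.add(pair);
                    count = 0;
                }
            }
        }
        System.out.println("Overview blocks kept: " + overview.size());
    }
}

Each overview point carries the block's minimum and maximum, so a narrow spike is never averaged away; the full-resolution data for a selected region can then be re-read from the file on demand in the consumer loop.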

Similar Messages

  • Reading and Writing large Excel file using JExcel API

    hi,
    I am using JExcelAPI for reading and writing Excel files. My problem is that when I read a file with 10000 records and 95 columns (file size about 14 MB), I get an out of memory error and the application crashes. Can anyone tell me if there is any way I can read a large file using JExcelAPI through streams or in any other way? Jakarta POI is also showing this behaviour.
    Thanks in advance

    Sorry, when the out of memory error occurs no stack trace is printed because the application crashes. But I will quote some lines taken from JProfiler where this problem occurs:
              reader = new FileInputStream(new File(filePath));
              workbook = Workbook.getWorkbook(reader);
              sheet = workbook.getSheet(0); // the out of memory error occurs here
    JProfiler tree:
    jxl.Workbook.getWorkBook
          jxl.read.biff.File
                 jxl.read.biff.CompoundFile.getStream
                       jxl.read.biff.CompoundFile.getBigBlockStream
    Thanks

  • Problems with displaying read data from a .lvm file

    Hi all.
    I acquire data with the PCMCIA card 6036E. I acquire online in LabVIEW 7 and store the data in a .lvm file. When I try to display the same data I acquired before with the "Read LVM File" Express VI, the waveform chart redraws itself after an undefined time; sometimes it redraws faster and sometimes it takes longer. WHY? I have only one header per acquisition, and if I restart the acquisition a new file is written! I tried almost everything. Is this a bug in LabVIEW 7?
    I really appreciate your help.
    best regards,
    Bernd

    Hi Khalid,
    Here is a simplified version of my VI and also 2 .lvm files from the logged data. Sorry for the size, but I acquire with a sample rate of 20 kS/s and my main frequency is only 0.3 Hz. In the 6 MB file I acquired a little more than 3 periods. When you run the VI with this file, at the beginning it redraws very often and very fast; then after some time it draws about 1 period, and then it redraws again and so on. I want the whole data from this file to be displayed at once. The 900 kB file was acquired at 10 kS/s and contains about 1.5 periods. This is the largest size the display still works with. Do you think it is possible that the VI only works up to 1 MB? But my data is usually much bigger than that. I hope you can use the data even though it is zipped; otherwise it wouldn't have been possible to post it.
    Thank you very much for your help, I really appreciate it!
    Best regards,
    Bernd
    Attachments:
    simplified_VI_for_offline_data_display.vi ‏79 KB
    2004-07-01_Messung9.zip ‏191 KB
    2004-07-01_Messung4.zip ‏680 KB

  • Loading, processing and transforming Large XML Files

    Hi all,
    I realize this may have been asked before, but searching the history of the forum isn't easy, considering it's not always a safe bet which words to use on the search.
    Here's the situation. We're trying to load and manipulate large XML files of up to 100MB in size.
    The difference between what we have on our hands and other related issues posted is that the XML isn't big because it has a largely branched tree of data, but rather because it includes large base64-encoded files in the XML itself. The size of the 'clean' XML is relatively small (a few hundred bytes to some kilobytes).
    We had to deal with transferring the xml to our application using a webservice, loading the xml to memory in order to read values from it, and now we also need to transform the xml to a different format.
    We solved the webservice issue using XFire.
    We solved the loading of the xml using JAXB. Nevertheless, we use string manipulations to 'cut' the xml before we load it to memory - otherwise we get OutOfMemory errors. We don't need to load the whole XML to memory, but I really hate this solution because of the 'unorthodox' manipulation of the xml (i.e. the cutting of it).
    Now we need to deal with the transformation of those XMLs, but obviously we can't cut them down this time. We have little experience writing XSL, and no experience using Java to apply the XSL files. We're looking for suggestions on how to do it most efficiently.
    The biggest problem we encounter is the OutOfMemory errors.
    So I ask several questions in one post:
    1. Is there a better way to transfer the large files using a webservice?
    2. Is there a better way to load and manipulate the large XML files?
    3. What's the best way for us to transform those large XMLs?
    4. Are we missing something in terms of memory management? Is there a better way to control it? We really are struggling there.
    I assume this is an important piece of information: We currently use JDK 1.4.2, and cannot upgrade to 1.5.
    Thanks for the help.

    I think there may be a way to do it.
    First, for low RAM needs, nothing beats SAX as the first processor of the data. With SAX, you control the memory use since SAX only processes one "chunk" of the file at a time. You supply a class with methods named startElement, endElement, and characters. It calls the startElement method when it finds a new element. It calls the characters method when it wants to pass you some or all of the text between the start and end tags. It calls endElement to signal that passing characters is over, and to let you get ready for the next element. So, if your characters method did nothing with the base-64 data, you could see the XML go by with low memory needs.
    Since we know in your case that the characters will process large chunks of data, you can expect many calls as SAX calls your code. The only workable solution is to use a StringBuffer to accumulate the data. When endElement is called, you can decode the base-64 data and keep it somewhere. The most efficient way to do this is to have one StringBuffer for the class handling the SAX calls. Instantiate it with a big enough size to hold the largest of your binary data streams. In startElement, you can set the length of the StringBuffer to zero and reuse it over and over (a rough skeleton of such a handler appears after this reply).
    You did not say what you wanted to do with the XML data once you have processed it. SAX is nice from a memory perspective, but it makes you do all the work of storing the data. Unless you build a structured set of classes "on the fly" nothing is kept. There is a way to pass the output of one SAX pass into a DOM processor (without the binary data, in this case) and then you would wind up with a nice tree object with the rest of your data and a group of binary data objects. I've never done the SAX/DOM combo, but it is called a SAXFilter, and you should be able to google an example.
    So, the bottom line is that it is very possible to do what you want, but it will take some careful design on your part.
    Dave Patterson
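
    A minimal sketch of the handler Dave describes, under a few assumptions: the base-64 payload is taken to live in a hypothetical <content> element, and java.util.Base64 (Java 8+) is used for brevity even though the original poster's JDK 1.4 would need a third-party decoder.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.Base64;
    import java.util.List;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    /** Streams the XML and decodes the base-64 payloads without holding the whole document in memory. */
    public class Base64StrippingHandler extends DefaultHandler {
        private final StringBuffer buffer = new StringBuffer(1024 * 1024); // sized for the largest payload
        private final List<byte[]> payloads = new ArrayList<byte[]>();
        private boolean inContent = false;

        @Override
        public void startElement(String uri, String localName, String qName, Attributes atts) {
            if ("content".equals(qName)) {       // hypothetical element holding the base-64 data
                inContent = true;
                buffer.setLength(0);             // reuse the same buffer for every element
            }
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (inContent) buffer.append(ch, start, length);   // called many times per large element
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            if ("content".equals(qName)) {
                payloads.add(Base64.getMimeDecoder().decode(buffer.toString()));
                inContent = false;
            }
        }

        public static void main(String[] args) throws Exception {
            Base64StrippingHandler handler = new Base64StrippingHandler();
            SAXParserFactory.newInstance().newSAXParser().parse(new File("large.xml"), handler);
            System.out.println("Decoded payloads: " + handler.payloads.size());
        }
    }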

  • Read specification cluster from LVM file

    Does anyone have a clever solution to programmatically read limit specification masks (for waveform limit tests) from an LVM file?
    I used the "Mask and Limit Testing" Express VI to create the masks, which I saved as LVM using the save-data feature.
    I don't want to use the Express VI to do the actual test because the testing is in a subVI that's called within a loop; each iteration of the loop needs to send different masks to the Limit Test.vi that's in my subVI.
    Thanks,
    -a
    Solved!

    Hi Andy,
    The LVM is a text file, so you can read it in using the Read File function.
    After that, I search the string to find where the data starts using Match Pattern.
    You then process the remainder of the file string using Spreadsheet String to Array with a tab as its delimiter.
    There are tabs at the start of each line and an extra line feed at the end, so I used Array Subset to avoid those areas. This might not work every time, but you can fix that when you run into a problem. (A rough sketch of the same steps follows the attachments below.)
    cheers
    David
    Message Edited by David Crawford on 11-05-2009 10:01 PM
    Attachments:
    Read Mask And Limit Checl LVM File.vi ‏13 KB
    Read Mask And Limit Checl LVM File.png ‏30 KB
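
    For readers outside LabVIEW, the steps David lists (find where the data starts, split the tab-delimited block, trim the leading tabs and trailing line feed) look roughly like the hedged Java sketch below; the file name is hypothetical and the usual ***End_of_Header*** marker is assumed.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    /** Sketch: skip the LVM header, then split each tab-delimited row into numbers. */
    public class ReadLvmMask {
        public static void main(String[] args) throws IOException {
            List<double[]> rows = new ArrayList<>();
            try (BufferedReader br = new BufferedReader(new FileReader("mask.lvm"))) { // hypothetical name
                String line;
                boolean inData = false;
                while ((line = br.readLine()) != null) {
                    if (!inData) {
                        inData = line.startsWith("***End_of_Header***");   // the "Match Pattern" step
                        continue;
                    }
                    String[] cells = line.split("\t");
                    List<Double> row = new ArrayList<>();
                    for (String cell : cells) {
                        if (cell.trim().isEmpty()) continue;               // the "Array Subset" step: drop leading tabs
                        try {
                            row.add(Double.valueOf(cell.trim()));
                        } catch (NumberFormatException e) {
                            // ignore column labels and comments
                        }
                    }
                    if (!row.isEmpty()) {
                        double[] r = new double[row.size()];
                        for (int i = 0; i < r.length; i++) r[i] = row.get(i);
                        rows.add(r);
                    }
                }
            }
            System.out.println("Mask rows read: " + rows.size());
        }
    }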

  • Reading and writing from a file

    I am trying to read and write to a file so that I may store an array of strings and a double in my application, but I have come upon a couple of problems. First, when I put my file path into the path constant it will say that the file is not there even though it is (so I have to force the issue, and when the application asks for a file I have to browse and add the files). Secondly, when I run the program and it reads from the file, it does not display the strings or doubles immediately; instead I have to hit the submit button and then they finally come up. Thirdly, when I reset my values in the .txt files and put in my string and double and hit submit, the array jumps one spot and starts at the index of 1 (array[1]). If anyone can help I would appreciate it a lot.
    here is my code:
    Harold Timmis
    [email protected]
    Orlando,Fl
    *Kudos always welcome
    Solved!
    Attachments:
    testingfileIO.vi ‏30 KB
    double.txt ‏1 KB
    string.txt ‏1 KB

    Harold Timmis wrote:
    I am trying to read and write to a file so that I may store an array of strings and a double in my application, but I have come upon a couple of problems. First, when I put my file path into the path constant it will say that the file is not there even though it is (so I have to force the issue, and when the application asks for a file I have to browse and add the files).
    I don't see that behavior at all about forcing the issue.  Why are you using Not a Path constants?  Why not turn those constants into controls you put on your front panel?
    Secondly, when I run the program and it reads from the file, it does not display the strings or doubles immediately; instead I have to hit the submit button and then they finally come up.
    Put your array indicators before your event structure rather than after. Think dataflow. The code pauses at the event structure waiting for an event to fire. Only when it does (such as hitting the submit button) does the data get written to the array indicators.
    Thirdly, when I reset my values in the .txt files and put in my string and double and hit submit, the array jumps one spot and starts at the index of 1 (array[1]). If anyone can help I would appreciate it a lot.
    I don't see this behavior either.  How come your double.txt file has names in it, and your string.txt file has doubles in it?

  • Problem reading GoPro Hero Black 4 files (1080p 120 fps) on my iMac with Adobe Premiere Elements

    Hello,
    I have a problem reading GoPro Hero Black 4 files (1080p 120 fps) on my iMac with Adobe Premiere Elements: the file is read, but the quality is quite bad. When I watch it with other software like VLC or MPC, the quality is perfect, but the video is sometimes slowed or there are some jumps during playback.
    What is the issue here?
    Thanks for your help!
    Olivier.

    This is easy. I do it all the time. FCPX ingests GoPro footage just fine. Optimize, and you're good to go.
    BTW: A common hiccup occurs when dealing with over-cranked footage. (60fps or 120fps.) Be sure to "conform speed" to actuate the slo-mo. 'Tis also important to designate your project / timeline at 24fps or 30fps to realize the speed difference.
    I've gotten burned in the past. I'll drag my over-cranked GoPro footage to the timeline. As it is the "first" clip laid into the timeline, the project assumes THAT speed as a default. (As per my preferences.) Thus, when I "conform speed," nothing seems to happen. That's because my project is displaying at the higher rate. A 60fps timeline showing footage shot at 60fps looks "normal speed."
    FYI. (And disregard if you know all this.)

  • Read and write a .CSV file containing Cyrillic characters

    Hi guys,
    I am a developer on a web application project which uses Oracle Fusion Middleware technologies. We use JDeveloper 11.1.1.4.0 as our development IDE.
    I have a requirement to get a .csv file from WLS to the machine running the application. I used a downloadActionListener in the front-end .jspx in order to do that.
    I use the OpenCSV library to read and write .csv files.
    Here is my code to read and write the .csv file:
    public void dwdFile(FacesContext facesContext, OutputStream out) {
        System.out.println("started");
        String[] nextLine;
        try {
            FileInputStream fstream1 = new FileInputStream("Downloads/filetoberead.CSV");
            DataInputStream in = new DataInputStream(fstream1);
            BufferedReader br = new BufferedReader(new InputStreamReader(in, "UTF-8"));
            CSVReader reader = new CSVReader(br, '\n');
            //CSVReader reader = new CSVReader(new FileReader("Downloads/ACTIVITY_LOG_22-JAN-13.csv"), '\n');
            List<String> list = new ArrayList();
            // collect every line of the input CSV into the list
            while ((nextLine = reader.readNext()) != null) {
                if (nextLine != null) {
                    for (String s : nextLine) {
                        list.add(s);
                    }
                }
            }
            System.out.println("list size ; " + list.size());
            // write the collected lines back out as UTF-8
            OutputStreamWriter w = new OutputStreamWriter(out, "UTF-8");
            CSVWriter writer = new CSVWriter(w, ',', '\u0000');
            for (int i = 0; i < list.size(); i++) {
                System.out.println("list items" + list.get(i));
                String[] entries = list.get(i).split(",");
                writer.writeNext(entries);
                //System.out.println("list items : " + list.get(i));
            }
            writer.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    Say filetoberead.CSV contains the following data:
    0,22012013,E,E,ASG,,O-0000,O,0000,100
    1,111211,LI,0,TABO,B,M002500003593,,,К /БЭ60072715/,КАРТЕНБАЙ
    2,07,Balance Free,3
    1,383708,LI,0,BDSC,B,НЭ63041374,,,Т /НЭ63041374/,ОТГОНБААТАР
    2,07,Balance Free,161
    It reads and writes the numbers and English characters correctly, but for all Cyrillic characters it prints "?", as follows:
    0,22012013,E,E,ASG,,O-0000,O,0000,100
    1,111211,LI,0,TABO,B,M002500003593,,,? /??60072715/,?????????
    2,07,Balance Free,3
    1,383708,LI,0,BDSC,B,??63041374,,,? /??63041374/,???????????
    2,07,Balance Free,161
    Can someone please help me resolve this problem?
    Regards !
    Sameera

    Are you sure that the input file (e.g. "Downloads/filetoberead.CSV") is in the UTF-8 character set? You can also check it using a text editor that has a hex view. If each Cyrillic character in your input file occupies a single byte (instead of two), then the file is not in UTF-8. Most probably it is in Cyrillic for Windows (CP1251).
    If this is the case, you should modify the line
    BufferedReader br = new BufferedReader(new InputStreamReader(in, "UTF-8"));
    to
    BufferedReader br = new BufferedReader(new InputStreamReader(in, "windows-1251"));
    Dimitar
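
    A quick way to do the hex check without an editor is a small throwaway class like the sketch below (the file path is taken from the code above; everything else is illustrative). In UTF-8 each Cyrillic letter takes two bytes starting with D0 or D1, while in windows-1251 it takes a single byte, mostly in the C0–FF range.

    import java.io.FileInputStream;
    import java.io.IOException;

    /** Prints the first bytes of the file in hex so you can see how the Cyrillic text is stored. */
    public class HexPeek {
        public static void main(String[] args) throws IOException {
            FileInputStream in = new FileInputStream("Downloads/filetoberead.CSV");
            try {
                byte[] buf = new byte[64];
                int n = in.read(buf);
                for (int i = 0; i < n; i++) {
                    // e.g. "К" prints as D0 9A if the file is UTF-8, but as a single CA if it is windows-1251
                    System.out.printf("%02X ", buf[i] & 0xFF);
                }
                System.out.println();
            } finally {
                in.close();
            }
        }
    }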

  • How can I hide the page thumbnails navigation bar at the startup of Adobe Reader when opening a PDF file?

    How can I hide the page thumbnails navigation bar when Adobe Reader starts up and opens a PDF file? I could not find this option under the Preferences tab. Thanks

    Hey there,
    Thanks for your reply. That works for the files where I do what you said. However, for files where I have not done that, it still shows the navigation bar. Any idea how to make it the default for all files?
    Thanks again

  • Programmatically reading and processing a spool file

    Hello,
    I would like to read and process a spool file generated by a background job in my ABAP program.
    Can anyone advise if this is possible in SAP and if so, the best way to do this?
    The only information I will have in my program is the job name and the date on which the job was run. I then need to find the corresponding spool and read this into an internal table.
    Any ideas on how to do this would be appreciated.
    Thanks,
    Ruby

    DATA : it_spool      TYPE STANDARD TABLE OF rsporq    .
    *&      Form  find_spool_request_id
    FORM find_spool_request_id.
      CALL FUNCTION 'RSPO_FIND_SPOOL_REQUESTS'
        EXPORTING
          allclients          = '320'
          datatype            = '*'
          has_output_requests = '*'
          rq0name             = nast-dsnam
          rq1name             = '*'
          rq2name             = '*'
          rqdest              = 'LOCL'
          rqowner             = sy-uname
        TABLES
          spoolrequests       = it_spool
        EXCEPTIONS
          no_permission       = 1
          OTHERS              = 2.
      IF sy-subrc <> 0.
        MESSAGE i000 DISPLAY LIKE 'E' WITH text-002.
        LEAVE LIST-PROCESSING.
      ENDIF.
    ENDFORM.                    " find_spool_request_id
    READ TABLE it_spool INTO wa_spool INDEX 1.
      v_spoolno = wa_spool-rqident.
    *Get Spool request attributes
      SELECT SINGLE *
        FROM tsp01
        INTO tsp01
        WHERE rqident EQ v_spoolno.
    Edited by: K.Manas on Jan 7, 2011 6:22 AM

  • How to read and upload microsoft word file into database using forms9i

    Hi,
    How can I read and upload a Microsoft Word file into an Oracle database using Forms 9i? I would appreciate it if anyone could send me an example or at least a suggestion.
    Thanks in advance
    Mahesh Ragineni

    The webutil package includes the ability to upload from the client to the database. See otn.oracle.com/products/forms and click on webutil for more details.
    Regards
    Grant Ronald
    Forms Product Management

  • Can I read and search a PDF file on my iPad

    Can I read and search a PDF file on my iPad?

    Several apps will allow you to work with PDFs. iBooks will handle them, but there have been a few issues reported about using it for that purpose. My guess is it depends on the size and complexity of the file. Adobe has a free reader in the App Store. GoodReader is an option, and provides functionality for searches.

  • What is the best way to read and manipulate large data in Excel files and show it in SharePoint?

    Hi ,
    I have a large Excel file that has 700,000 records in it. The Excel file has a few columns that change every day.
    What is the best way to read the data from the Excel file in the fastest and most efficient way?
    2nd problem:
    I have one Excel file that has many rows; each row contains some data with certain keywords.
    What I want is to segregate the rows into respective sheets (tabs) in the workbook.
    For example, the rows have the following data:
    1. alfa
    2. beta
    3. gama
    4. beta
    5. gama
    6. gama
    7. alfa
    I want there to be 3 tabs in Excel, one for each of the keywords alfa, beta, and gama.

    Hi,
    I don't really see any better options for SharePoint. SharePoint uses another product called 'Office Web Apps' to allow users to view/edit Microsoft Office documents (Word, Excel, etc.). But the web version of Excel doesn't support that many records, and there are size limitations (probably the default max size is 10MB).
    Regarding the second problem, I think you need a custom solution (like a SharePoint timer job/web part) to read and present the data.
    However, if you can reduce the Excel file to something near 16k records (the number of rows supported in the web version of Excel), then you can use SharePoint Excel Services to refresh the data in the SharePoint copy automatically from external sources.
    Thanks,
    Sohel Rana
    http://ranaictiu-technicalblog.blogspot.com

  • Problem reopening and/or refreshing PDF files using Acrobat Reader V. 8.0

    I upgraded to Adobe Acrobat Reader V. 8.0 and have encountered problems refreshing PDF files in the Safari 2.0.4 browser when I am logged into the secure area for Value Line Publishing. Some of the PDF Files in this area can be opened and/or refreshed multiple times. Other PDF files in this area allow me to open them once. I cannot reopen those previously viewed files (or refresh the screen) -- after refreshing, the window changes to a blank white screen, and the files do not reopen.
    I am perplexed because I do not have difficulty refreshing the PDF files (within Safari) using the native PDF viewer, I did not have problems refreshing files using Acrobat Reader V. 7.0 and I do not appear to have problems refreshing PDF files using Acrobat Reader V. 8.0 when I am outside the secure area or on other web sites.
    I have replaced the Acrobat Reader plug-in, manually removed all remnants of previous versions of Acrobat Reader, and reinstalled V. 8.0. I also upgraded from Mac OS 10.4.8 to 10.4.9, all to no avail.
    I can clear the Safari cache files as a temporary solution, but I encounter the same difficulties with the “problematic” files once they have been opened again.
    I can also download the PDF files to my hard drive and refresh them multiple times within Safari and also using Acrobat Reader V. 8.0
    The software personnel at Value Line Publishing have investigated their system and have referred me back to Apple. Has anyone heard of similar problems associated with Adobe Acrobat Reader V. 8.0 used in conjunction with Safari 2.0.4?
    PowerBook G4   Mac OS X (10.3.9)  

    Hi Rodney
    Welcome to Apple Discussions
    This sounds like one of those "oddities", contributed to by a few sources.
    I can clear the Safari cache files as a temporary solution, but I encounter the same difficulties with the “problematic” files once they have been opened again.
    Wondering if you disabled the Safari Cache would the refresh function work correctly? As a test you can disable the Safari Cache by Emptying the Cache first via the Safari menu, then Quit Safari. Now go to the Finder>Your User Library>Caches>Safari. Single click on the Safari folder, then Apple Key + I to open Info panel. There, check the "locked" box. This prevents further additions to the cache. The downside, you lose your ability to upload images etc. within Safari (my cache is disabled, so I use Firefox for the uploads).
    Then restart Safari. Try the PDF from within Safari.
    Post back

  • Problems with fast web view and really large PDF files.

    If "allow speculative downloads in background" is set then the Fast Web view becomes nonfunctional.  Reader is makeing the HTTP Byte range requests but will ask for almost the entire PDF file in one range (180 MB of the 182 MB file).
    If "allow speculative downloads in background" is not set then Fast Web view is working fine, Reader is making more HTTP Byte range requests but the larges ones are still small 20K or so.
    Is this behavior being experianced by anyone else?
    What are the disadvantages, if any, of having "allow speculative downloads in background" turned off?
    My environment:
    182 MB pdf file (Optimized) around 41,000 pages.  I have also seen this with 42 MB and 25 MB pdf files.
    IE 6
    Reader 9.2
    Apache webservers (On Solaris Apache 1.3.20 and Linux Apache 2.0)
    Thanks in advance
    Angus Laidlaw

    Hi,
    The signing tool uses Capicom (a library from .NET) to generate an attached digital signature. I also tried signing with the .NET class SignedCms, but the result is the same. What I don't understand is why some PDFs work and others do not.
    For example, I uploaded a signed PDF here which can be opened successfully: http://dl.dropbox.com/u/104591126/PDFs/Signed_Dummy1.pdf (it is the same PDF, but I didn't enable Fast Web View).
    Can you suggest other signing method?
    Thank you.
