Transforming large data arrays

Hi,
I believe this is quite a simple question, but I am trying to find the most efficient way of doing it. Currently I have acquired multi-channel binary data in files that can be up to and above 1 GB. The data is stored in one file in the order (say, when acquiring 3 channels) channel 1, channel 2, channel 3, channel 1... and so on. I need to convert this data into a spreadsheet file (.txt is fine), transform it into voltages, and reorient it so the new file is a 2D array in the form:
channel 1 channel 2 channel 3
channel 1 channel 2 channel 3.......... and so on
Currently I do this very simply by reading the I16 binary data, converting it to voltages by multiplying by 10/32768 (I work with a range of -10 V to 10 V and the binary is 16-bit), decimating the 1D array, building it into a 2D array, and saving this.
The problem is that when doing this with large files the system runs out of memory. I was wondering if there is a way to process just part of the file at a time instead of all of it at once, and just append each part to the saved file?
Thanks
Charlie

Hi raj,
here's a picture (worth a thousand words):
The functions look different in LV7.1 (from left to right): Open File, Read Text File, Write Text File, Close File.
And you should mention your LabVIEW version when you ask for example code!
Message Edited by GerdW on 01-24-2008 01:30 PM
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome
Attachments:
readwrite_ex.png ‏4 KB
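In text-language terms, the chunked approach Charlie asks about would look something like this (a minimal Python sketch rather than LabVIEW; the file names and chunk size are placeholders, and it assumes the file holds complete scans of all channels):

import numpy as np

CHANNELS = 3
SCALE = 10.0 / 32768            # +/-10 V range, 16-bit samples
CHUNK = 65536 * CHANNELS        # samples per read; keep it a multiple of CHANNELS

with open("acquisition.bin", "rb") as src, open("voltages.txt", "w") as dst:
    while True:
        raw = np.fromfile(src, dtype=np.int16, count=CHUNK)
        if raw.size == 0:
            break                               # end of file
        volts = raw.astype(np.float64) * SCALE  # counts -> volts
        rows = volts.reshape(-1, CHANNELS)      # one row per scan: ch1 ch2 ch3
        np.savetxt(dst, rows, fmt="%.6f", delimiter="\t")

Only one chunk is ever held in memory, and each converted block is appended to the output file before the next read.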

Similar Messages

  • [Bug?] X-Control Memory Leak with Large Data Array

    [LV2009]
    [Cross-posted to LAVA]
    I have found that if I pass a large data array (~4MB in this example) into an X-Control, it causes massive memory allocations (1 GB+).
    Is this a known issue?
    The X-Control in the video was created, then the Data.ctl was changed to 2D Array - it has not been edited in any other way.
    I also compare the allocations to that of a native 2D Array (which is only ~4MB).
    Note: I jiggled the Windows Task Manager about so that JING would update correctly; it's a bit slow, but it essentially just keeps rolling up and doesn't stop.
    Demo code attached.
    Cheers
    -JG
    Certified LabVIEW Architect * LabVIEW Champion
    Attachments:
    X Control Bug [LV2009].zip ‏42 KB

    Hi Jon (cool name) 
    Thank you very much for your reply. We came to this conclusion in the cross post and it is good to have it confirmed by LabVIEW R&D. Your response is also similar to that of my AE which I got this morning as well - see below:
    Note: Your reference number is included in the Subject field of this
    message. It is very important that you do not remove or modify this
    reference number, or your message may be returned to you.
    Hi Jon,
    You probably found some information from the forum. The US engineer has gotten back; he said that unfortunately this is expected behaviour, after they conducted some tests. This is what he replied:
    "XControls in the background use event structures. In particular, the Data Change event is called when the value of the XControl changes (writing to the terminal, a local variable, or the value change property). What is happening in this case is that the XControl is getting called so fast with a large set of data that the event structure keeps queuing the events and data, and a memory leak is produced. It is, unfortunately, expected behavior. The main workaround for the customer in this case is not to call the XControl as often. Another possibility is to use the Synchronous Display property to defer updates to the XControl; this might slow down the leak."
    He would also like to know if you can provide more details on how you are using the XControl; perhaps there is a better way. Please refer to the link below for synchronous display. Thank you.
    http://zone.ni.com/reference/en-XX/help/371361G-01/lvprop/control_synchronous_display/
    In my application I updated the X-Control at 1 Hz and it allocated at MB/s, up to 1+ GB before it crashed, all within a few hours. That is why I called it a leak. I am really worried that if this CAR gets killed, there will still be an issue lingering that makes using X-Controls a major problem under the above conditions. I have had to pull two sets of libraries from my code because of this - when they got replaced with native LabVIEW controls the leak went away (but I lost reuse and encapsulation etc...).
    Anyways, I really want to use X-Controls (now and in the future) as I like all other aspects of them. If you do not consider this a leak, can a different CAR be raised that may modify the existing behavior? I offered the suggestion (in the cross-post) that the data be ignored rather than queued - similar to Christian's idea, but for X-Controls. Maybe as an option?
    I look forward to discussing this with you further.
    Regards
    -Jon
    Certified LabVIEW Architect * LabVIEW Champion

  • How can I get the data array from SQL Server Database?

    Hi,
    I can write a 2D data array into a table in my SQL Server database. The data array was written to a column of image type. I know a data array is transformed into a binary string when written to the database, but I don't know how to get the data array back when I fetch the binary string from the database.
    My question is:
    How do I transform the binary string into a data array? Which VIs should I use? I have tried Unflatten From String but failed.
    Any response is appreciated.
    Red

    happyxh0518 wrote:
    > I can write a data array(2D)into a table of my SQL Server Database.
    > The data array was writen to a column with image type. I know a data
    > array is transformed a binary string when writing into database, but I
    > dont know how to get the data array when I fetch the binary string
    > from database.
    >
    > My question is:
    > How to transform the binary string into data array? which vi's should
    > I use? I have tried unflatten from string but failed.
    In order to use Unflatten From String you first need to flatten the data before writing it. Also, depending on the database driver, the returned data may actually not be binary but hexadecimal-encoded ASCII, which you would first have to decode to binary.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
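    As a rough illustration of Rolf's point in Python (the value is made up, and this is not LabVIEW's exact flatten format): if the driver returns hexadecimal-encoded ASCII, decode it to binary first, then unflatten.

    import numpy as np

    hex_text = "0001000200030004"            # hypothetical value fetched from the DB

    raw = bytes.fromhex(hex_text)            # decode hex ASCII to real binary
    # LabVIEW flattens numeric data big-endian by default, so parse accordingly
    data = np.frombuffer(raw, dtype=">i2")   # -> [1, 2, 3, 4]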

  • Passing a large 3D-array to a sub vi and back causes huge loss of performance

    I am passing a large 3D array of type I16 to a subVI of the same type, and back to the main VI. For test purposes I have removed all processing in between. Still, the subVI call requires many milliseconds to seconds, depending on array size. I was under the impression that variables are passed to subVIs by reference. Is this incorrect? Can it be fixed?
    Johannes

    Hi Johannes,
    (There may be exceptions, but) No, variables are not passed and returned by reference.
    "A fix" may be possible by restructuring your app so that the large 3D array never gets moved around. This can be done by using what I call an action engine. It uses a variation of the LV2 global to store the large array ONE TIME in a shift register. After that, all other operations are performed "in place" (see the sketch below).
    This technique has allowed me to process very large data sets "on-the-fly".
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
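    A loose text-language analogue of Ben's action engine, sketched in Python (names invented): the large array is stored once, and every action operates on it in place, so the data never travels across call boundaries.

    import numpy as np

    class ActionEngine:
        """Stands in for the LV2-global shift register: holds the array ONE TIME."""
        def __init__(self, rows, cols):
            self._data = np.zeros((rows, cols), dtype=np.int16)

        def write_column(self, index, column):
            self._data[:, index] = column        # in-place replace, no copy

        def add_offset(self, offset):
            self._data += offset                 # operation performed "in place"

        def read_rows(self, rows):
            return self._data[rows]              # returns a view, not a copy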

  • Large string array in 6.1 is extremely slow

    Good day all,
    While this is in with NI tech support, I wanted to see if anyone else has encountered it.
    I am upgrading from 6.0.2 to 6.1. Several large (2500 rows by 250 columns or larger) string arrays are used as inputs into subVIs. Under 6.0.2, these functions run in tenths of seconds, while the converted 6.1 VIs run in 20 seconds or more!
    Tracing back using probes, the problem is occurring at the point of the input. It appears that the array is taking many seconds to copy from the input to the wire on the diagram.
    Array controls generated in 6.1 (not converted from 6.0.2) seem to function just fine. Using a Save with Options... to convert back to 6.0.2, the VIs again function in tenths of seconds.
    Anyone have any ideas?
    Thanks!

    I hear what you're saying about legacy code...
    Something you might want to look at for the future is migrating to a structure where the data is stored in a 1D array, where each element is a cluster containing the data that's now in a single row. This would be the most straightforward change, but it could make getting at the data tricky, depending on how you need to be able to search it.
    Alternately, you could have a cluster containing arrays of each of the row values. In this structure, element 0 of all the arrays is the first "row", element 1 of the arrays is the second "row", and so on. This structure at first blush looks more complicated, but it's really not, plus it would allow you to use any value (or combination of values) to search for a specific row without a lot of parsing (see the sketch below).
    If the data in the example VIs you posted is typical, either of these changes would be advantageous because it looks like there is a lot of repetitive data that might be able to be encoded in an enum. Plus, storing numbers as numbers often reduces the memory required and produces a predictable memory footprint (an I32 will always take up 4 bytes per value regardless of how large or small the number is). My sense is that the variability of the string size is what's killing you.
    One thing that would make this sort of dramatic change somewhat easier is that because you are changing the basic datatype of the interface, you aren't going to have to worry about finding all the places the change will affect--the wires will be broken.
    If you ever decide to take this on, give a holler.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps
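    Here is the shape of the two structures Mike describes, sketched in Python with invented field names: a 1D array of row "clusters", versus a "cluster" of per-column arrays that can be searched on any field without parsing.

    import numpy as np

    # Option 1: 1D array where each element is a cluster (one record per row)
    rows_aos = [
        {"name": "DUT1", "limit": 5, "result": 4.2},
        {"name": "DUT2", "limit": 5, "result": 5.7},
    ]

    # Option 2: a cluster of column arrays; element i of every array is row i
    rows_soa = {
        "name":   np.array(["DUT1", "DUT2"]),
        "limit":  np.array([5, 5]),
        "result": np.array([4.2, 5.7]),
    }

    # Any value (or combination of values) becomes a simple mask, no parsing
    failed = rows_soa["result"] > rows_soa["limit"]   # -> [False, True]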

  • Spooling large data using UTL_FILE

    Hi Everybody!
    While spooling data out to a file using the UTL_FILE package, I am unable to spool the data. The column data has a size of 2531 characters.
    The column 'source_where_clause_text' has very large data.
    It's not giving any error, but the external table is not returning any data.
    Following is the code.
    CREATE OR REPLACE PROCEDURE transformation_utl_file AS
      CURSOR c1 IS
        select transformation_nme, source_where_clause_text
        from utility.data_transformation
        where transformation_nme = 'product_closing';
      v_fh UTL_FILE.file_type;
    BEGIN
      v_fh := UTL_FILE.fopen('UTLFILELOAD', 'transformation_data.dat', 'w', 32000); --132767
      FOR ci IN c1 LOOP
        UTL_FILE.put_line(v_fh, ci.transformation_nme || '~' || ci.source_where_clause_text);
        -- UTL_FILE.put_line(v_fh, ci.system_id ||'~'|| ci.system_nme ||'~'|| ci.system_desc ||'~'|| ci.date_stamp);
      END LOOP;
      UTL_FILE.fclose(v_fh);
    exception
      when utl_file.invalid_path then dbms_output.put_line('Invalid Path');
    END;
    select length(
    '(select to_char(b.system_id) || to_date(a.period_start_date,''dd-mon-yyyy'') view_key, b.system_id, to_date(a.period_start_date,''dd-mon-yyyy'') period_start_date, to_date(a.period_end_date,''dd-mon-yyyy'') period_end_date, to_date(a.closing_date,''dd-mon-yyyy'') closing_date from ((select decode(certification_type_code, ''A'', ''IDESK_PRODUCTS_PIPELINE'',''C'', ''IDESK_PRODUCTS_COMMITMENT_LINKAGE'') system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date from odsupload.prod_monthly_certification where certification_type_code in (''A'',''C'') minus select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme in (''IDESK_PRODUCTS_PIPELINE'', ''IDESK_PRODUCTS_COMMITMENT_LINKAGE'')) union all (select ''BMS Commitment Link'' system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date from odsupload.prod_monthly_certification where certification_type_code = ''C'' minus select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme = ''BMS Commitment Link'') union all (select ''BMS'' system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date from odsupload.prod_monthly_certification where certification_type_code = ''A'' minus select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme = ''BMS'')) a, utility.system_v b where a.system_nme = b.system_nme)') length1
    from dual;
    -- 2531
    begin
    SSUBRAMANIAN.transformation_utl_file;
    end;
    create table transformation_utl
    (
      TRANSFORMATION_NME        VARCHAR2(40),
      SOURCE_WHERE_CLAUSE_TEXT  VARCHAR2(4000)
    )
    ORGANIZATION external
    (
      type oracle_loader
      default directory UTLFILELOAD
      ACCESS PARAMETERS
      (
        records delimited by newline CHARACTERSET US7ASCII
        BADFILE UTLFILELOAD:'transformation.bad'
        LOGFILE UTLFILELOAD:'transformation.log'
        fields TERMINATED by "~"
      )
      LOCATION ('transformation_data.dat')
    ) REJECT LIMIT UNLIMITED;

    select * from transformation_utl;

    After running the procedure, did you verify that the file 'transformation_data.dat' has data? Open it and make sure it's correct. Maybe it has no data, and that's why the external table doesn't show anything.
    Also, check the LOG and BAD files after selecting from the external table. Maybe they have errors in them (or all the data is going to BAD because you defined something wrong).

  • Moving large data real-time

    Anyone have an idea for moving a large data set from one core's thread to another's, in a real-time system, where it will be put into a DAQ's output buffer? I have an array, at most 125625 x 23 of I16, which is built column by column (the 23 dimension; actually it is done in a replace-array-element mode), and which then is to be "exported" to the other core's thread. There it is to be loaded into the DAQ cards' memory, to be output. I need to be able to do this in a pretty timely manner. When I try to use a functional global variable that is "loaded" one column at a time, it is _slow_! Need a faster method!
    Thanks,
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

    I haven't tried queues for core-to-core communication before, hadn't considered them, but since some recent revelations have shown that RT-level determinism isn't a critical issue, I guess I can examine them as well. Not that queues necessarily have jitter issues, but I was being conservative. Will put together a trial test case. Thanks, as usual, for your help Ben. Hope the weather in your neck of PA wasn't too bad. Up in Syracuse there is 400% more snow than this time last year, and last year the season had 140+" (norm is closer to 115"), and that was in a very truncated winter, not much until the third week of January.
    Hmm, three stars. Wonder what I said that annoyed someone, or underwhelmed them?
    Even weirder is that at the LabVIEW forum level it is showing 4 stars, two voters.  Hmmm, too early for me, the coffee isn't hitting (actually not able to drink coffee lately)
    Message Edited by LV_Pro on 12-17-2007 09:15 AM
    Message Edited by LV_Pro on 12-17-2007 09:19 AM
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion
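    For comparison, here is the queue idea in a minimal Python sketch (names and threading details are placeholders): the whole block is handed over by reference in one operation, instead of being copied column by column through a functional global.

    import queue
    import threading
    import numpy as np

    pipe = queue.Queue(maxsize=1)

    def producer():
        block = np.empty((125625, 23), dtype=np.int16)
        # ... fill block column by column via replace operations ...
        pipe.put(block)      # hands over a reference; the ~5.8 MB is not copied

    def consumer():
        block = pipe.get()   # same buffer, ready for the DAQ output stage
        # ... load into the DAQ card's output buffer ...

    threading.Thread(target=producer).start()
    threading.Thread(target=consumer).start()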

  • PC gets loaded when trying to display large data in graph

    The PC gets loaded when I try to display large data in the graph; the shift register holding the data eats up all my virtual memory, so my PC hangs. Is there any way to refresh the virtual memory using LabVIEW? The chart also cannot be replaced.

    Bharani wrote:
    The data size is approx. 200 MB or more. The data is acquired in I32 format and stored in a file. During playback, the file is read according to the sampling rate, converted to ASCII, and sent to DAQmx Write and the graph simultaneously. In the graph portion, the array (using a shift register) holds all the data in the graph. Holding this data loads the PC. Is there any way to refresh the virtual memory using LabVIEW?
    Is there really a good reason to send 200MB worth of I32 data to a graph? NO! Your graph most likely does not have more than about 1000 pixels across!
    Most likely, you have multiple copies of the data in memory. Do you convert the entire 200 MB of data to ASCII, or each data point as needed? Have you done some profiling? What is the memory usage in "VI Properties..Memory Usage"? Do you use local variables?
    Your best bet would be to analyse your code to optimize memory usage, avoid data copies, etc. Please attach your code so we can give some advice.
    LabVIEW Champion. Do more with less code and in less time.
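    The decimation point, sketched in Python (sizes are placeholders): keep one min/max pair per horizontal pixel, so the graph receives a few thousand points instead of 200 MB while the peaks stay visible.

    import numpy as np

    def minmax_decimate(samples, width=1000):
        """Reduce samples to one (min, max) pair per pixel column."""
        n = samples.size // width * width        # drop the ragged tail
        cols = samples[:n].reshape(width, -1)    # one row per pixel column
        out = np.empty(2 * width, dtype=samples.dtype)
        out[0::2] = cols.min(axis=1)
        out[1::2] = cols.max(axis=1)
        return out

    big = np.zeros(50_000_000, dtype=np.int32)   # stand-in for 200 MB of I32 data
    display = minmax_decimate(big)               # 2000 points for ~1000 pixels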

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) and for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi ‏36 KB

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
    Do not use the Transpose and autoindex.  It is adding a copy of data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
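    The preallocation point, in a short Python sketch (sizes are placeholders): allocate the final array once and replace into it, instead of growing it inside the loop.

    import numpy as np

    n_blocks, block_len = 120, 30_000

    # The "build array in a loop" pattern: reallocates and copies every pass
    grown = np.empty(0)
    for _ in range(n_blocks):
        grown = np.concatenate([grown, np.random.rand(block_len)])

    # Preallocate once, then replace subsets in place
    final = np.empty(n_blocks * block_len)
    for i in range(n_blocks):
        final[i * block_len:(i + 1) * block_len] = np.random.rand(block_len)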

  • Large data plotting

    I know this is a bit of a loaded question, but hopefully there is some knowledge that the java gurus can share.
    We have a current non-Java, Solaris-only implementation of a plotting program that does "simple point plots". It reads from a file, and it has calculated axes, colored points, and the ability to zoom. The data sets are very large; they can get upwards of 250,000 to 500,000 points, with the average being around 30,000 points.
    The Solaris plotting library that we are trying to mimic is XRT. It can load and handle files that huge, and once loaded there are no noticeable repainting problems.
    The goal now is to do it in Java and have it work on Windows also. We chose JClass, since it is the corresponding Java implementation of XRT.
    JClass is just too slow. It can load files that big and display the plot, but repainting takes an unusable amount of time, maybe even a minute.
    The main question is: is it going to be possible to get the performance we need out of a Java2D implementation? My guess would be no.
    If you think it can be done (repaint in under a second), then what improvement suggestions would you offer?
    If you don't, then what C-based plotting or other solutions could be offered?
    Thanks for the time!!
    P.S. I will add more dukes, if the answers roll in.

    Sure, see below. The only proviso is that I used the latest CVS code, which has changed a bit since the last official release (0.9.13). A new release is expected in the next few days, so if you are not in a hurry I'd wait for that. Alternatively, check out the latest code from CVS at SourceForge:
    http://sourceforge.net/projects/jfreechart
    Anyway, here is the code for the demo app:
    /* ======================================
     * JFreeChart : a free Java chart library
     * ======================================
     * Project Info:  http://www.jfree.org/jfreechart/index.html
     * Project Lead:  David Gilbert ([email protected]);
     * (C) Copyright 2000-2003, by Object Refinery Limited and Contributors.
     *
     * This library is free software; you can redistribute it and/or modify it under the terms
     * of the GNU Lesser General Public License as published by the Free Software Foundation;
     * either version 2.1 of the License, or (at your option) any later version.
     * This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
     * without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
     * See the GNU Lesser General Public License for more details.
     * You should have received a copy of the GNU Lesser General Public License along with this
     * library; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330,
     * Boston, MA 02111-1307, USA.
     *
     * FastScatterPlotDemo.java
     * (C) Copyright 2002, 2003, by Object Refinery Limited and Contributors.
     * Original Author:  David Gilbert (for Object Refinery Limited);
     * Contributor(s):   -;
     * $Id: FastScatterPlotDemo.java,v 1.4 2003/11/12 12:18:07 mungady Exp $
     *
     * Changes (from 29-Oct-2002)
     * 29-Oct-2002 : Added standard header and Javadocs (DG);
     * 12-Nov-2003 : Enabled zooming (DG);
     */
    package org.jfree.chart.demo;

    import org.jfree.chart.ChartPanel;
    import org.jfree.chart.JFreeChart;
    import org.jfree.chart.axis.NumberAxis;
    import org.jfree.chart.plot.FastScatterPlot;
    import org.jfree.ui.ApplicationFrame;
    import org.jfree.ui.RefineryUtilities;

    /**
     * A demo of the fast scatter plot.
     *
     * @author David Gilbert
     */
    public class FastScatterPlotDemo extends ApplicationFrame {

        /** A constant for the number of items in the sample dataset. */
        private static final int COUNT = 500000;

        /** The data. */
        private float[][] data = new float[2][COUNT];

        /**
         * Creates a new fast scatter plot demo.
         *
         * @param title  the frame title.
         */
        public FastScatterPlotDemo(String title) {
            super(title);
            populateData();
            NumberAxis domainAxis = new NumberAxis("X");
            domainAxis.setAutoRangeIncludesZero(false);
            NumberAxis rangeAxis = new NumberAxis("Y");
            rangeAxis.setAutoRangeIncludesZero(false);
            FastScatterPlot plot = new FastScatterPlot(data, domainAxis, rangeAxis);
            JFreeChart chart = new JFreeChart("Fast Scatter Plot", plot);
            chart.setLegend(null);
            chart.setAntiAlias(false);
            ChartPanel panel = new ChartPanel(chart, true);
            panel.setPreferredSize(new java.awt.Dimension(500, 270));
            panel.setHorizontalZoom(true);
            panel.setVerticalZoom(true);
            panel.setMinimumDrawHeight(10);
            panel.setMaximumDrawHeight(2000);
            panel.setMinimumDrawWidth(20);
            panel.setMaximumDrawWidth(2000);
            setContentPane(panel);
        }

        /**
         * Populates the data array with random values.
         */
        private void populateData() {
            for (int i = 0; i < data[0].length; i++) {
                data[0][i] = (float) i + 100000;
                data[1][i] = 100000 + (float) Math.random() * COUNT;
            }
        }

        /**
         * Starting point for the demonstration application.
         *
         * @param args  ignored.
         */
        public static void main(String[] args) {
            FastScatterPlotDemo demo = new FastScatterPlotDemo("Fast Scatter Plot Demo");
            demo.pack();
            RefineryUtilities.centerFrameOnScreen(demo);
            demo.setVisible(true);
        }

    }
    Regards,
    Dave Gilbert
    JFreeChart Project Leader

  • How can i open large data?

    The code in the screenshot opens really all data formats, but only small files. When the file becomes too large, LabVIEW says: "Not enough memory to complete this operation." But I have enough memory. This already happens at fairly small file sizes (maybe a bit more than 10 MB).
    I have 1 GB of memory, which should be enough, but when I try to open a small file of only 10 MB it tells me that I have not enough memory.
    OK, if I wanted to open a file of more than 1 GB, for example a zip file, I could understand it, but the files are really not large.
    Can somebody say what is wrong?
    ThX
    Attachments:
    Read all files.vi.png ‏12 KB

    You should also read the tutorial Managing Large Data Sets in LabVIEW.  Some things you will learn:
    Your current code is making several copies of your data. The tutorial will teach you how to find and eliminate them. The tutorial has not been updated for LabVIEW 8.5 yet, and there are several enhancements in LabVIEW 8.5; the updated version is posted below. The code examples did not change.
    For best speed in reading from the disk, you want to use 65,000 byte chunks.
    Store your data in a single-element queue.  This will give you best performance for a large array.
    Store your data as a set of arrays instead of one array (this has been mentioned above).  You can break it up into several single-element queues or save it as an array of clusters, each cluster containing a sub-array of the data.  The cluster acts sort of like a handle in C.
    Let us know if you need more help.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
    Attachments:
    Memory Management in LabVIEW.doc ‏132 KB
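    A rough Python illustration of the last two points (file name assumed): read in ~65,000-byte chunks and keep the result as a list of sub-arrays, so no single contiguous block ever has to hold the whole file.

    CHUNK = 65_000                       # bytes per read, per the advice above

    chunks = []                          # the "array of clusters" idea: many sub-arrays
    with open("bigfile.dat", "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            chunks.append(block)         # many small blocks instead of one huge one

    total = sum(len(b) for b in chunks)  # work on the pieces without joining them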

  • Excel with large data read

    Hi all.
    I have a question about the data read of the excel.
    I have 15238 rows by 22 columns of data in an Excel file, and LabVIEW needs a very long time to read it. I don't know if my program is not good enough to manage the large data or if this is normal.
    Besides, I have tried converting the Excel data to CSV data and using the text-file read method, but it seems that this also needs a very long time.
    Actually, is there any better method to read a large amount of data? Would Access be a good solution for that?
    Thanks in advance,
    Io
    Attachments:
    read data_excel.JPG ‏62 KB

    Hi,
    Cf. the attached file: the picture shows what I read from your file. For the date column, it reads only the day, and for the booleans, it doesn't read the text.
    So if you want to read all the data, write the date in 3 columns, and write the booleans as numbers (0 or 1)!
    The reason why the VI I sent you doesn't work is that it reads a tab-delimited file.
    If you have Excel 2007, select:
    Save As Excel 97-2003
    then in the file type, change Excel 97-2003 to Text (tab delimited) (*.txt)
    and save your file with this type.
    If you close and reopen the file, you will get a warning message, but Excel will continue to read the file.
    I attach the file that you will get if you follow this method.
    I use this method to read arrays of 60,000 rows by 30 columns.
    best regards,
    V-F
    Attachments:
    test001.xls ‏2137 KB
    table.JPG ‏259 KB

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit version running on a Win7 64-bit OS with Intel Xeon dual quad core processor, 16 gbyte RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 and 3-gbyte range in RAM since we now have access to all of the available RAM.  But I am having major problems - sluggish (and stoppage) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3D data, which consists of a series of images: I store each of my 2D images in a cluster, and then have the entire image series as an array of these clusters. I then store this entire array of clusters in a queue, which I regularly access using Preview Queue, and then operate on the image set, subsets of the images, or single images.
    (A diagram of the cluster array being enqueued was shown here.)
    I remember talking to LabVIEW R&D years ago, and this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3D array without the clusters) (R&D - this is what I remember, please correct me if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded (and I think disk access as well, to obtain memory beyond 16 GB), I am wondering if I need to use a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 MB range with 64-bit LabVIEW.
    With 32-bit LabVIEW, 100 - 200 MB sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM. We could have used other means such as LV2 globals. But the idea of clustering the 2D array (image) and then having a series of those clustered arrays in an array (to see the final structure I showed in my diagram), versus using a 3D array, I believe even allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 GB. I probably need to have someone examine this code while I am explaining things to them live. This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem. In some of my applications, I use the In Place Element structure for indexing data out of arrays to minimize data copies. I expect I might have to consider this strategy here as well. Just a thought.
    What I can do is send someone (in the US) via large file transfer a 1.3 - 2.7 GB set of image data, and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and not make data copies. The operations that I apply to the images are irrelevant. It is the storage, movement, and extraction that are causing the problems. I can also show screen shot(s) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how Data Value References may help here, or how they have helped in one of their applications? Would the use of these eliminate copies? I currently have to wait for the 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don
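    On the data-copy question, here is the general idea in Python/numpy terms (sizes are placeholders, and LabVIEW's Data Value References differ in detail): extract images as views into the stored block rather than as copies, so inspecting a subset does not balloon memory.

    import numpy as np

    stack = np.zeros((100, 1024, 1024), dtype=np.int16)  # ~200 MB image series

    image = stack[42]            # a view into the stack: no 2 MB copy is made
    roi = stack[42, 100:200]     # still a view
    dup = stack[42].copy()       # only an explicit copy duplicates the data

    image += 10                  # modifies the stored stack in place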

  • Onboard programming the 7344...creating data arrays?

    I would like to use the onboard programming capability of the 7344.
    I have a requirement to set up a data array of X and Y coordinates that would be accessed via an incremental loop.
    I guess the array data can be entered into a buffer object, but is there any way of getting the data from the buffer using a pointer? From what I see, the data has to be retrieved from the start of the buffer, or sequentially using the read_buffer function, but under certain conditions I may want to access random sets of coordinates from within the array, and I do not want to use up all the general-purpose variables to do this.

    Hi,
    The onboard programming of the 7344 supports the FIFO architecture. You can run onboard programs from RAM or optionally save them to flash ROM.
    The 7344 does not support pointer functions (due to memory size).
    The 7344 controllers have 64 KB of RAM and 128 KB of ROM (divided into two 64 KB sectors) for program and object storage.
    You can run programs from either RAM or ROM, but you cannot split programs between the two, and you cannot split programs between the two 64 KB ROM sectors.
    With an average command size of 10 bytes, a single program can be as large as 6,400 commands.
    For example, the 7344 controllers can simultaneously execute 10 programs, five from RAM and five from ROM, with each program up to 1,280 commands long.
    Also refer to the Onboard Programming Functions section of the FlexMotion Software Reference Manual for detailed information on all of these onboard programming features.
    Please do post any findings that may seem significant to our discussion on the onboard-programming of the 7344.
    Best Regards
    Atul Wahi
    Applications Engineer
    www.ni.com
