Fast Binary File Saving

I need help.
In my application I am saving about 300 kilobytes of I16 data to a binary file every 75 ms, over a period of 20 seconds.
This gives me a file of roughly 80 MB.
I am using that method because it is the fastest one that I know. I cannot use the method of building up an array in a shift register because, after a while, it becomes too slow to deallocate and reallocate memory.
My method has some rattles because sometimes I miss a few frames.
Is there a better method? (RAM disk or something else?)
Harold Hebert
National Research Council Canada

I've gone a lot faster than that. It can be done. I did 100 kSamples/sec (400 kB/sec) back in 1998 with Ultra SCSI HDs and a dual Pentium Pro 200 MHz in LV. I did 6.4 MB/sec in C++ (100 64K frames per sec) in 2000 with a 450 MHz P3 and Ultra2 SCSI HDs. Here's what you do:
Separate the file I/O and the data acquisition into separate while loops and use a queue to send acquired data to the file I/O loop. This way the acquisition loop can go as fast as it can, the file loop can go as fast as it can, and they won't make each other wait. The data simply piles up in the queue until it can be written later. This is how to do asynchronous file writes in LV.
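For anyone doing the same thing outside LabVIEW, here is a minimal C++ sketch of that producer/consumer idea (the frame size, frame count, and file name are assumptions taken from the original post; the two threads stand in for the acquisition and file I/O loops):

// Minimal producer/consumer sketch: one thread "acquires" 300 kB frames of I16
// data, a second thread writes them to disk, and a mutex-protected deque plays
// the role of the LabVIEW queue so neither side waits on the other.
#include <condition_variable>
#include <cstdint>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

static const size_t kFrameSamples = 150 * 1024;  // 300 kB of int16_t per frame
static const int    kFrameCount   = 267;         // roughly 20 s at 75 ms per frame

int main() {
    std::deque<std::vector<int16_t>> queue;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Consumer: drains the queue and streams each frame to disk.
    std::thread writer([&] {
        std::FILE* f = std::fopen("acq.bin", "wb");
        if (!f) return;
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !queue.empty() || done; });
            if (queue.empty() && done) break;
            std::vector<int16_t> frame = std::move(queue.front());
            queue.pop_front();
            lock.unlock();                       // do the slow write outside the lock
            std::fwrite(frame.data(), sizeof(int16_t), frame.size(), f);
        }
        std::fclose(f);
    });

    // Producer: stands in for the DAQ read; it never blocks on the disk.
    for (int i = 0; i < kFrameCount; ++i) {
        std::vector<int16_t> frame(kFrameSamples, static_cast<int16_t>(i));
        {
            std::lock_guard<std::mutex> lock(m);
            queue.push_back(std::move(frame));
        }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    writer.join();
    return 0;
}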
Write data in exactly 64K chunks (65536 bytes). If your frames are smaller than this or not multiples of this, accumulate until you have >= 64K, write the first 64K, and put the rest back into the buffer to be combined with the next frame. Write the remaining non-64K fragment once at the end. In my experience 64K is the best size for DMA transfers, and anything else is less efficient.
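A sketch of that chunking logic in the same C++ style (the 65536-byte block size comes from the post; the pending buffer holds whatever fragment is left over between frames):

// Accumulate incoming frames in a byte buffer, write whole 65536-byte blocks as
// soon as they are available, and flush the final non-64K fragment once at the end.
#include <cstdio>
#include <vector>

static const size_t kChunk = 65536;

void write_frame(std::vector<unsigned char>& pending, const unsigned char* frame,
                 size_t frameBytes, std::FILE* f) {
    pending.insert(pending.end(), frame, frame + frameBytes);
    size_t offset = 0;
    while (pending.size() - offset >= kChunk) {   // only full 64K blocks go to disk
        std::fwrite(pending.data() + offset, 1, kChunk, f);
        offset += kChunk;
    }
    pending.erase(pending.begin(), pending.begin() + offset);
}

void finish(std::vector<unsigned char>& pending, std::FILE* f) {
    if (!pending.empty())                          // remaining fragment, written once
        std::fwrite(pending.data(), 1, pending.size(), f);
    pending.clear();
}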
Alternatively, if you know that 80 MB is your limit, just create a 40M-element array, put your data into it as you acquire it, and then write the data to disk at the end, post-test.
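If the whole run really does fit in memory, the preallocated-buffer variant is about this simple (sizes assumed from the original post: 40M I16 samples is roughly 80 MB):

// Allocate the full buffer once, fill it during the test, write it to disk afterwards.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const size_t kTotalSamples = 40u * 1024u * 1024u;  // ~80 MB of int16_t
    std::vector<int16_t> buffer(kTotalSamples);         // single up-front allocation

    // ... during the test, copy each acquired frame into the next slice of buffer ...

    std::FILE* f = std::fopen("acq.bin", "wb");
    if (!f) return 1;
    std::fwrite(buffer.data(), sizeof(int16_t), buffer.size(), f);
    std::fclose(f);
    return 0;
}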
Stuff your PC with as much RAM as the BIOS can handle. RAM is cheap!!
Get rid of unneeded apps, services, and protocols; they all eat CPU time. Especially virus scanners and firewalls, which try to check files as you write them.
Get a faster PC w/dual processors.
Get a RAID array with a dedicated RAID controller and Ultra2/160 SCSI 10K RPM LVD drives certified for video applications (no thermal recals), and get a lot of buffer RAM for the RAID controller. This will dramatically improve your disk bandwidth and seek times and reduce your I/O overhead on the CPU.
Don't believe the H/D manufacturers' performance figures. Typically you can't do better than 10% of those rates on a continuous basis. (People who sell RAID arrays generally give you the truth, though, as opposed to consumer-outlet shrink-wrapped H/D's.)
Rewrite it in C or C++, it will be tons faster.
Douglas De Clue
LabVIEW developer/C++ developer (currently job hunting...)
[email protected]

Similar Messages

  • How to display binary file saved in BLOB column in Discoverer PLUS /VIEWER

    Hi, friends,
    I tried to display binary files saved in the database in a BLOB column with the *.doc or *.xls formats, but it did not seem to work at all. I can display them in Discoverer DESKTOP, but not in the web versions PLUS or VIEWER. My Discoverer PLUS/VIEWER version is 9.0.4.45.02 and the database is 9.2.0.4. Is there any special setting/configuration somewhere (Discoverer Administration/Application Server/PLUS/VIEWER)?
    Any help/information will be greatly appreciated.
    Thanks a lot.

    Hi,
    Sorry but this feature is not available in the web version of 9.0.4.45. You will have to write your own mod_plsql function to download the file.
    Rod West

  • Binary file save - bug in lv8 for MAC ?

    Hi,
    Using LV8 on Mac OS X, I found a bug concerning binary file saving (attached file).
    Write an array of doubles to a binary file, then read it back.
    If you use little-endian, it's OK. If you use big-endian, the result is wrong (but no error).
    Could a Mac user replicate this?
    Boris Matrot
    Attachments:
    bin_file_save_lv8_mac.vi ‏14 KB

    Just as an additional data point, everything works fine under Windows.
    (As a workaround, have you tried flattening the data for writing? I don't have a Mac, so I cannot test.)
    LabVIEW Champion. Do more with less code and in less time.

  • Problem saving 2D array to binary file

    I am getting some strange results when trying to write a transposed 2D array of Unsigned 8-bit data to a binary file. Attached is an example program ("071026_ArraySave_Bug.vi") which demonstrates the suspicious behavior.
    The program generates a small 2D array of U8 data, then saves it to two temporary files using two different methods.
    1) Row-by-row: The 2D array is auto-indexed, and the indexed row is written to file at once (with the prepended size option disabled).
    2) Point-by-point: Two For loops are used to auto-index each element and write it to file individually.
    The saved files are then reopened and the data is read into two 1D arrays for display.
    The bug occurs when the generated 2D array is transposed before being saved to file. In the point-by-point method, the data is always saved to the file as desired, i.e. by row or by column depending on whether the data was transposed. However, in the row-by-row method the data is incorrectly saved when the input data is transposed. Without transposing the array, the row-by-row results are correct.
    Saving the array row-by-row is significantly faster for larger arrays, so this method is desired.
    Note that the option to transpose the input data in the attached VI is performed using a case structure with a boolean constant ("Bug Control"). This is important because:
    1) The bug does not occur if the boolean constant is replaced with a control.
    2) The bug does not occur if the case structure is replaced with a "Select" node from the Comparison Palette.
    3) The bug still occurs if the case structure is removed such that the array is always transposed (i.e. keep the True case).
    For now, I have been able to circumvent the bug in practice by transposing the array and then passing it through a Select with a constant True (the False case uses the untransposed array). This allows the faster row-by-row method to be used without error, but requires new memory allocation for the transposed array.
    I will appreciate any help that can be given regarding this problem, or simply confirmation that it is indeed a bug in Labview 8.5. It is worth noting that the same program did not produce any errors when run in Labview 7.
    Thank you for your help,
    David Viggiano
    Attachments:
    071026_ArraySave_Bug.vi ‏29 KB

    This is a known bug (CAR: 4DP855N3), related to memory management. See this thread.
    If I remember well, Altenbach showed somewhere that forcing the transpose operation to produce a copy of the memory is a proper workaround.
    Chilly Charly    (aka CC)
             E-List Master - Kudos glutton - Press the yellow button on the left...        
    Attachments:
    071026_ArraySave_Bug[1].png ‏2 KB

  • Fast processing of mixed representation binary files

    Hi Everyone,
    I see multiple ways of tackling this, but I'm looking for the fastest approach as my data set is very large....
    The issue:
    I have a binary data file holding 2D data.
    It encodes 200+ different "columns" that are repeated in time (sampled).
    The data contains mixed data representations: a mixture of U8, I8, U16, I16, etc.
    They are all regularly repeated in a known file structure (660 bytes per "line").
    I'd like to generate the 200+ different 1D arrays from the file, each using the correct data representation (or a subset of the columns).
    I can load the file using a binary file read where I specify U8 as the data type. I can then redimension to the correct 2D array.
    I'm now stuck on the fastest method to process the columns of data (1-2 bytes wide) into the correct numeric representation 1D arrays (2x U8 to I16, etc.).
    Scanning byte by byte would be very slow.
    Any suggestions?

    Taking wrote:
    Your sequence structures are unnecessary.
    I'd bet on Join Number being faster than any string functions.
    What sort of mechanism are you going to use for column definitions?
    Do you care if 8 bit datatypes get upcast to 16 bit?
    Agreed, the sequence structures are only present to aid illustration.
    I'm not sure flatten to string is a "true" (slow) string function; I'm viewing it more as a container of bytes. I'm going to run some speed tests. The array massaging that has to go on to use the Join function may be a large overhead.
    Column definitions will be sourced from a secondary text file that describes the file structure. The example conversion of 2x U8 to I16 would be replaced by a For loop (over all columns of 1-N bytes) and a case structure (on the representation of the column) that processes the data. Ultimately each 1D array of correctly converted data will be saved off to its own binary data file in the appropriate numeric representation within the case statement.
    Next I will look at what For loop parallelization I can achieve vs. source array memory copies (again, it is a very big source data file).
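    As a rough illustration of the per-column extraction being discussed, here is a small C++ sketch (the 660-byte record length comes from the post; the column offset, its type, and the little-endian byte order are assumptions that would really come from the secondary description file):

    // Pull one column out of a packed record file: records are recordBytes long and
    // the example column is an I16 at byte offset colOffset in each record,
    // assembled here from two U8 bytes assuming little-endian order.
    #include <cstdint>
    #include <vector>

    std::vector<int16_t> extract_i16_column(const std::vector<uint8_t>& raw,
                                            size_t recordBytes, size_t colOffset) {
        std::vector<int16_t> column;
        column.reserve(raw.size() / recordBytes);
        for (size_t rec = 0; rec + recordBytes <= raw.size(); rec += recordBytes) {
            const uint8_t lo = raw[rec + colOffset];
            const uint8_t hi = raw[rec + colOffset + 1];
            column.push_back(static_cast<int16_t>(
                static_cast<uint16_t>(lo) | (static_cast<uint16_t>(hi) << 8)));
        }
        return column;
    }

    Called once per column (e.g. with recordBytes = 660), this touches each byte only once per output array, which is essentially the same work the Join Number approach does inside the loop.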

  • Word saving documents as binary files- can't send documents as attachments

    I've had my Macbook for about 9 months. During the first 6 months, I could send attachments via email perfectly fine. After about 6 months, something happened and the files could no longer be opened after being attached. I tried sending some attachments to myself and found out that the documents were being sent as binary files.
    I also backed up my documents onto an external hard drive. When I opened the documents on a Windows computer, the Word documents that I created during the first 6 months could be opened. However, the most recently created documents couldn't be opened by my computer.
    I decided to do an erase and install on Saturday. Afterwards, Word was working fine and I was able to send documents as attachments. This morning (Tuesday) I tried to send attachments again and it didn't work! Again, the documents were binary files.
    I'm not sure what to do because it was working fine for three days, but now it's not working anymore. I think something must have happened during those three days that corrupted my computer?
    Has anyone had a similar problem? Can anyone help?
    Sorry I know it was a long post but I wanted to give more information in case it would give some clues. I feel I should add that I've been having quite a few problems with my Macbook, e.g. screen freezing, DMGs not loading properly, getting some pop ups
    Also wanted to add that I've had my logicboard replaced recently because of problems such as freezing, so I don't think it should be a hardware problem.

    Since Word is not an Apple product, you'll get better response if you use a forum dedicated to Microsoft's Mac products such as <http://groups.google.com/groups/dir?sel=33607053> rather than an Apple forum that focuses on compatibility between Macs and Windows.
    Be sure to search the forum first in case someone has already had a similar question answered. You'll get your answer faster this way. Post your question in the forum if you don't find anything that helps you.

  • Saving ResultList as binary file

    Hi,
    I'd like to store my ResultList as binary on the file drive. The aim is to save drive space and execution time for creating the report (XML or ASCII) for each UUT.
    At the moment I have not figured out how to do such things in TS.
    That's the reason why I started this thread: to discuss with you what is possible or impossible.
    Do not care about the programming language. LV,CVI,C++,C# or anything else is welcome.
    My idea sounds simple:
    1) Creating: Take the address space of the ResultList and save it to disk. Problem: size of the buffer!!
    2) Consuming: Take the binary file, put it back into TS, and let the TestReport (XML, ASCII or whatever you want) callback run.
    Greetings from the lake of Constance, Germany
    Juergen
    =s=i=g=n=a=t=u=r=e= Click on the Star and see what happens :-) =s=i=g=n=a=t=u=r=e=

    Hi Johann,
    Thanks for your answer. But unfortunately it did not point out what I really want to know, or let's say what I want to discuss.
    The major task is: Is there a way to save the complete ResultList (if you take a look into the TestReport callback it will be the variable "Parameters.MainSequenceResults") as a binary file without any modifications or generator/parser stuff?
    The first aim of this procedure is no loss of data while saving a huge amount of drive space. The second is saving execution time when generating the report file in the TestReport callback.
    The last few "lunch breaks" I have dealt with this question to figure out a useful solution.
    I have found several things in TS 4.0 that deal with this topic, the serialization of data.
    On the one hand there is the Engine method "SerializeObjects". I got this running (I hoped so) for binary data, but for re-building, just for visualizing the data to an operator, I found no working solution. On the other hand there is the PropertyObject method "Serialize". This stuff was working fine, but the data was always stored in INI format on file, so there was no great benefit in file size.
    While looking at the TS 4.0 poster on the wall I focused on PropertyObjectFile, and with it found a reasonable solution for this task. It is simple: create a PropertyObjectFile, do some settings like binary format and path, get the parent PropertyObject from it, and create a new variable in the data folder. Now take the ResultList, clone it, and add it to your variable. The last step is saving.
    A comparison to the XML report (shipped with NI):
    Binary  <-->  XML
    File size: 23 kB  <-->  2202 kB, a reduction in drive space of over 95%!!!
    Execution time: 160 ms  <-->  1960 ms, which means I spend only 8% of the time needed for XML.
    I have attached an example. It writes the report to the root, C:\report.dat.
    For you and all other members: feel free to test it and please tell me what you think about it.
    If there is another solution, let's discuss it.
    Greetings 
    juergen
    =s=i=g=n=a=t=u=r=e= Click on the Star and see what happens :-) =s=i=g=n=a=t=u=r=e=
    Attachments:
    ResultListAsBinary.seq ‏12 KB

  • DAQ vi to perform digital write and read measurements using 32 bits binary data saved in a file

    Hi,
    DAQ VI to perform digital write and read measurements using 32-bit binary data saved in a file.
    Two main sections:
    1) Perform write and read operations to and from different spreadsheet files, such that each file has a single row of 32 bits of binary data (analogous to a 1D array) where the left-most bit is the MSB. I don't want to manually enter the 32 bits of binary data; I want the data written or read just by opening a file saved with the intended data.
    2) By using test patterns implemented with the digital pattern generator or Build Digital Data functions (or otherwise), I need to ensure that the binary data written to a spreadsheet file (or any supported file type) and then sent through the NI USB-6509 is the same as the data read.
    I'm aware I can't use the simulated device to read data written to any port, but if the write part of the VI works I'm sure the read part will work on the physical device, which I'll buy later.
    My plan of action:
    I've created a basic write/read file task and a write/read DAQ task for the NI USB-6509, and both are combined in a while loop to form a progress VI, but I'm confused about how to proceed with the implementation.
    My greatest problem is linking both together with the correct functions or operators such that there are no syntax/execution errors, and thus achieving my intended result.
    This project is one of my many assignments for my master's thesis, so please, I'll appreciate every bit of help, as I'm not really proficient with LabVIEW programming, but I prefer it because it is fun and interesting once I get to know it.
    Currently I'm practicing with LabVIEW 8.6/NI-DAQmx 8.8 demo versions and an NI USB-6509 simulated device.
    Please see the attached file for my novice progress; thanks in advance for the support.
    Rgds
    Paul
    Attachments:
    DIO_write_read DAQ from file.vi ‏17 KB
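    For the file-parsing half of this, a small C++ sketch of turning one row of 32 bits into a single U32 port value (it assumes the spreadsheet cells contain the characters '0' and '1' separated by commas or tabs, with the left-most bit as the MSB):

    // Convert one spreadsheet row of 32 bit characters into a single U32 value.
    #include <cstdint>
    #include <cstdio>
    #include <string>

    uint32_t row_to_u32(const std::string& row) {
        uint32_t value = 0;
        for (char c : row) {
            if (c != '0' && c != '1') continue;   // skip commas, tabs, spaces
            value = (value << 1) | static_cast<uint32_t>(c - '0');
        }
        return value;
    }

    int main() {
        // "1010...0001" -> 0xA0000001
        std::printf("%u\n", row_to_u32("1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,"
                                       "0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1"));
        return 0;
    }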

    What does your file look like?  The DAQmx write is expecting a single U32 value, not an array of I64. 
    "There is a God shaped vacuum in the heart of every man which cannot be filled by any created thing, but only by God, the Creator, made known through Jesus." - Blaise Pascal

  • When I do a download, it is saved to a Binary File. Where are these files stored & how do you uninstall a program you don't want to keep ?

    I use Firefox. When I download a file from another website, there is a pop-up that shows the name of the download, which shows it is saved to a binary file (I don't know what a binary file is). My question is: where are the binary files stored on my computer, and can I uninstall one if I don't want to keep it?

    read basic about svchost:
    [http://support.microsoft.com/kb/314056/en-us A description of Svchost.exe in Windows XP Professional Edition]
    find svchost services:
    [http://webcache.googleusercontent.com/search?q=cache:pa9PdGlHr0sJ:www.bleepingcomputer.com/tutorials/list-services-running-under-svchost.exe-process/+what+is+svchost.exe&cd=12&hl=el&ct=clnk&gl=gr a way how to determine what services are running under a SVCHOST.EXE process]
    One Temporary Solution is to disable the Windows Automatic Update service:
    http://ask-leo.com/how_do_i_fix_this_high_cpu_usage_svchost_virus_or_whatever_it_is.html
    (no '''it is not''' a virus)
    (works for me)
    thank you
    Please mark as "Solved" the answer that really solves the problem, to help others with a similar problem.

  • Every time i download anything, it is saved as a binary file. how do i change this setting?

    every time i download anything, it is saved as a binary file. how do i change this setting?

    Here is a simple fix I found that worked for me.
    1) Clear your download history
    2) Go to Options > Options > General Tab: Change your download location to something other than what you currently have it set at. Apply the changes.
    3) Go back into the Options and change it back to what it was or whatever you want it to be.
    You should be all set from there.

  • Read image saved in binary file in Matlab

    Hello,
    I save images in a binary file; one file can contain thousands of images, and I would like to read them in MATLAB or some other software.
    So I would like to know if it's possible, and if so, what exactly is saved when I save an image, and what the image structure looks like.
    Thanks Eva

    Hi Eva,
    I have little knowledge of IMAQ, so I don't know your data type... sorry.
    But in general you read the data in MATLAB. You save the image (only 1) 100 times in the same file. Each image is prepended by an I32 with the total byte count of the image, so that is one pointer for you.
    What you could do is convert your IMAQ image into a LV picture control (this must be possible) and then convert it into a 2D U32 array and read this with MATLAB.
    Success Ton
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!
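    If the file really is a sequence of length-prefixed images as described above, walking it looks roughly like this C++ sketch (the I32 prefix is read as big-endian here, which is LabVIEW's default flattened byte order; swap the bytes if the writer used little-endian):

    // Walk a file of records that each start with an I32 byte count followed by
    // that many bytes of image data.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        std::FILE* f = std::fopen("images.bin", "rb");
        if (!f) return 1;
        uint8_t hdr[4];
        while (std::fread(hdr, 1, 4, f) == 4) {
            const uint32_t count = (uint32_t(hdr[0]) << 24) | (uint32_t(hdr[1]) << 16) |
                                   (uint32_t(hdr[2]) << 8)  |  uint32_t(hdr[3]);
            std::vector<uint8_t> image(count);
            if (std::fread(image.data(), 1, count, f) != count) break;
            // ... hand `image` to the analysis code, or record its offset for later ...
        }
        std::fclose(f);
        return 0;
    }

    The same loop carries over to MATLAB's fread, using the matching precision and byte-order arguments.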

  • The first binary file write operation for a new file takes progressively longer.

    I have an application in which I am acquiring analog data from multiple PXI-6031E DAQ boards and then writing that data to FireWire hard disks over an extended time period (14 days). I am using a PXI-8145RT controller, a PXI-8252 FireWire interface board, and compatible FireWire hard drive enclosures.
    When I start acquiring data to an empty hard disk, creating files on the fly as well as the actual file I/O operations are both very quick. As the number of files on the hard drive increases, it begins to take considerably longer to complete the first write to a new binary file. After the first write, subsequent writes of the same data size to that same file are very fast. It is only the first write operation to a new file that takes progressively longer. To clarify, it currently takes 1 to 2 milliseconds to complete the first binary write of a new file when the hard drive is almost empty. After writing 32 150-MByte files, the first binary write to file 33 takes about 5 seconds! This behavior is repeatable and continues to get worse as the number of files increases.
    I am using the FAT32 file system, required for the Real-Time controller, and 80 GB laptop hard drives. The system works flawlessly until asked to create a new file and write the first set of binary data to that file. I am forced to buffer lots of data from the DAQ boards while the system hangs at this point. The requirements for this data acquisition system do not allow for a single data file, so I cannot simply write to one large file.
    Any help or suggestions as to why I am seeing this behavior would be greatly appreciated.

    I am experiencing the same problem. Our program periodically monitors data and eventually saves it for post-processing. While it's searching for suitable data, it creates one file for every channel (32 in total) and starts streaming data to these files. If it finds the data is not suitable, it deletes the files and creates new ones.
    In our lab, we tested the program on Windows and then on RT and we did not find any problems.
    Unfortunately, when it was time to install the PXI in the field (an electromechanical shovel at a copper mine) and test it, we found that saving was taking too long and the program screwed up, specifically when creating files (i.e. the "New File" function). It could take 5 or more seconds to create a single file.
    As you can see, field startup failed and we will have to modify our programs to work around this problem and return next week to try again, with the additional time and cost involved. Not to mention the bad image we are giving to our customer.
    I really like LabVIEW, but I am particularly upset because of this problem. LV RT is supposed to run as if it were LV Win32, with the obvious and expected differences, but a developer cannot expect things like this to happen. I remember a few months ago I had another problem: on RT the Time/Date function gives a wrong value as your program runs when using timed loops. Can you expect something like that when evaluating your development platform? Fortunately, we found the problem before giving the system to our customer and there was a relatively easy workaround. Unfortunately, now we had to hit the wall to find this one.
    On this particular problem I also found that it gets worse when there are more files in the directory. Create a new dir every N hours? I really think that's not a solution. I would not expect this answer from NI.
    I would really appreciate someone from NI giving us a technical explanation about why this problem happens, and not just "trial and error" "solutions".
    By the way, we are using a PXI RT controller with the solid-state drive option.
    Thank you.
    Daniel R.

  • How to read a certain range from binary file

    Dear all LabView experts!
    I've attempted to acquire data as fast as I can from a TDS 2024 via the GPIB interface. The data is saved into one binary file to be used for later analysis.
    The program included here performs continuous acquisition and saves all the points into one file.
    I used a WHILE loop to do this job, and each loop iteration takes 2500 points from the scope.
    After that, I would like to read the file back such that every 2500 points can be read out and analyzed.
    Can you please show me how to do this?
    I understand that the two inputs, pos mode and pos offset, should provide random access to the binary file at any point; however, I cannot get this to work.
    Thank you very much.
    Attachments:
    streaming-testing1.vi ‏66 KB

    Hi CSUEB,
          Thanks for your patience - and for the sample data.
    Here's a Reader VI that will work (as long as the byte-stream type length is 20004 elements.)
    Cheers!
    "Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)
    Attachments:
    streaming-testing_Reader_6i.vi ‏71 KB
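    The pos mode/pos offset idea corresponds to a plain seek-and-read; here is a C++ sketch of grabbing just the Nth block (this assumes raw I16 samples with no per-block header; if each write prepends a size word, as the 20004-element figure above suggests, add that header size to the block size before seeking):

    // Seek straight to the Nth fixed-size block of 2500 samples and read only that block.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    std::vector<int16_t> read_block(const char* path, long blockIndex) {
        const long kPointsPerBlock = 2500;
        const long kBlockBytes = kPointsPerBlock * static_cast<long>(sizeof(int16_t));
        std::vector<int16_t> block(kPointsPerBlock);
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return std::vector<int16_t>();
        std::fseek(f, blockIndex * kBlockBytes, SEEK_SET);   // the pos offset step
        std::fread(block.data(), sizeof(int16_t), kPointsPerBlock, f);
        std::fclose(f);
        return block;
    }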

  • How to organize variables for file saving and scalability?

    Hello,
    I have created several CVI applications that store production data for numerous machines.  To organize the data for file saving I have implemented structures.  This has worked well with one limitation, the inability to scale the structure at a later date without invalidating existing files.  I would like to consider alternative approaches that would allow scalability.
    Here's an example of my current method...
    // Definition of structure per machine.
    struct machine_1 {
       int     int_param_1, int_param_2, int_param_3;
       double  dbl_param_1, dbl_param_2, dbl_param_3;
    };
    struct machine_2 {
       int     int_param_1, int_param_2, int_param_3;
       double  dbl_param_1, dbl_param_2, dbl_param_3;
    };
    // Definition of inclusive structure. (Member name and structure tag name are the same.)
    struct {
       struct machine_1   machine_1;
       struct machine_2   machine_2;
    } machine_parameters;
    To assign a value to a structure variable:
    // Assign value.
    machine_parameters.machine_1.dbl_param_2 = 77.47;
    Then when it comes time to save the populated structures:
    // Save structure.
    error = fwrite (&machine_parameters, sizeof(machine_parameters), 1, dest);
    The problem comes later when multiple files already exist and one of the machine structures needs an additional variable added.  For example, if I need to add int_param_4 to the machine_1 structure.  Adding this variable will invalidate the previously saved files because they were saved with a different structure and will not be able to be opened with a new structure containing one additional variable due to the structure definition mismatch.
    I have added spare variables per data type to the structures for each machine, but it's a losing game.  If I add 10 spare variables, I end up needing to store 11 more pieces of data.
    Is there a better approach?
    Thanks,
    Aaron T.

    One simple way is to output the data as ASCII comma separated values, with a newline character at the end of each row of data.
    I.E., the only structure to your file data is a "row" of CSV's, with the file containing some number of rows.
    Then, you load the data into Excel, and it will parse the CSV's for you and when it sees the newline, put the next set of CSV's on the next row of the worksheet.
    If you ever need to expand the number of items in a row, you just add them as you generate data, pushing the newline to the right, the extra data  extending the row.
    So you get an Excel worksheet filled with rows of (possibly varying length) data.   So long as you add data at the end of the row when you redefine what you're saving, anything reading the file should see the same stuff that was always there.
    You can write a macro to reformat or parse the CSV's once they're in the spreadsheet.  With Excel 2007 supporting very large worksheets, you can put a lot of data into one.  I think they expose a C interface for writing fast data manipulation of cell data now too - sort of a fast macro from the Excel viewpoint.  I think the number of columns is 16384 and 1 million rows in a "Big Grid". The Excel 2007 engine is multi-threaded and you can tell it how many cores to use on a multicore machine.
    So the only problem I see is the loss of local structure (your C structs get serialized and get concatenated to one another) but you could re-introduce the structure with a macro.
    Or, if you were to write out serialized binary values and then view the file data using a hex editor like Neo, you can tell Neo what your C structs were and it will pick up binary file data and put it back into the C structs for viewing.
    Or use MatLab to read the CSV's and reformat it.
    Or use the CVI SQL interface and write it out as database records.  I think the SQL toolkit costs extra, maybe it comes with the FDS. 
    Menchar
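    A sketch of that CSV row idea, reusing the machine_parameters fields from the question (the file name and field order are arbitrary here; the point is that a newer build can append extra fields to the end of the row without breaking older readers):

    // Append one CSV row per save; new parameters are simply added at the end of the row.
    #include <cstdio>

    struct machine_params { int int_param_1, int_param_2, int_param_3;
                            double dbl_param_1, dbl_param_2, dbl_param_3; };

    void append_row(const char* path, const machine_params& m) {
        std::FILE* f = std::fopen(path, "a");
        if (!f) return;
        std::fprintf(f, "%d,%d,%d,%.6f,%.6f,%.6f\n",
                     m.int_param_1, m.int_param_2, m.int_param_3,
                     m.dbl_param_1, m.dbl_param_2, m.dbl_param_3);
        // A later version can add more fields here; readers that only know the
        // original columns keep working because the extra data sits past the old end.
        std::fclose(f);
    }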

  • How can I read a binary file stream with many data type, as with AcqKnowledge physio binary data file?

    I would like to read in and write physiological data files which were saved by BioPac's AcqKnowledge 3.8.1 software, in conjunction with their MP150 acquisition system. To start with, I'd like to write a converter from different physio-data file formats into the AcqKnowledge binary file format for versions 3.5 to 3.7 (including 3.7.3). It will allow us to read different file formats into an analysis package which can only read files written by AcqKnowledge versions 3.5 to 3.7 (including 3.7.3).
    I attempted to write a reader following the Application Note AS156 entitled "AcqKnowledge File Format for PC with Windows" (see http://biopac.com/AppNotes/app156FileFormat/FileFormat.htm). Note the link for the Mac file format is very instructive too; it is presented in a different style and might make sense to some people, with a C-library-like look (http://biopac.com/AppNotes/app155macffmt/macff.htm).
    I guess the problem I had was that I could not manage to read all the different byte data streams with File.vi. This is easy in C but I did not get very far in LabVIEW 7.0. Also, I was a little unsure which LabVIEW data types correspond to int, char, short, long, double, byte, RGB and Rect. And, since it is for PC, I am also assuming the data to be written as little-endian integers, and thus I also used the byte swap VI.
    Two sample *.acq binary files are attached to this post. Demo.acq is for version 3.7-3.7.2, while SCR_EKGtest1b.acq was recorded and saved with AcqKnowledge 3.8.1, whose version number is 41.
    I would be grateful if someone could explain how to handle such a binary file stream with LabVIEW and send an example to illustrate it.
    Many thanks in advance for your help.
    Donat-Pierre
    Attachments:
    Demo.acq ‏248 KB
    SCR_EKG_test1b.acq ‏97 KB

    The reading of doubles is also straightforward: just use a double float wired to the Type Cast node, after inverting the string (endian conversion).
    See the attached example.
    The measurement of skin thickness is based on OCT (optical coherence tomography = interferometry): an optical fiber system sends and receives light emitted to/back from the skin at a few centimeters' distance. A profile of the skin structure is then computed from the optical signal.
    CC
    Chilly Charly    (aka CC)
             E-List Master - Kudos glutton - Press the yellow button on the left...        
    Attachments:
    Read_AK_time_info.vi.zip ‏9 KB
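    The "invert the string" step is just a byte swap; a C++ sketch of reading one of these little-endian doubles looks roughly like this (whether a swap is needed depends on the reader's byte order, which the caller must supply):

    // Reinterpret 8 file bytes as a double, reversing them first when the host
    // expects the opposite byte order (the equivalent of LabVIEW's string reverse
    // before Type Cast).
    #include <algorithm>
    #include <cstdint>
    #include <cstring>

    double double_from_le(const uint8_t bytes[8], bool hostIsBigEndian) {
        uint8_t tmp[8];
        std::memcpy(tmp, bytes, 8);
        if (hostIsBigEndian)
            std::reverse(tmp, tmp + 8);           // the "invert the string" step
        double value;
        std::memcpy(&value, tmp, sizeof value);
        return value;
    }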
