Binary file data plugin : append block values to channel
I need to create a data plugin for a unique binary file structure, which you can see in the attached graphic.
With these objects
Dim sBlock : Set sBlock = sFile.GetBinaryBlock()
sBlock.Position = ? 'value in bytes
sBlock.BlockWidth = ? 'value in bytes
sBlock.BlockLength = ? 'value in bits
I have the possibility to read chunks from my binary file. In the end, I want each signal in its own channel. I managed to extract signals 1 and 2, since they only have one value per block with a known byte distance in between. How can I extract the other channels, which have 480 successive values in each block? I could probably write a loop and read the specific signal part of each block, but how do I append these parts to the relevant channels? I tried creating a new channel and then merging them, but unfortunately functions like ChnConcat do not work in a DataPlugin. Any ideas?
/Phex
PS. Of course I could create a hideous plugin that runs GetNextBinaryValue() through the whole file, but that doesn't seem like a smart idea for a 2 GB file.
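Sketched outside DIAdem, the access pattern for such a file looks like the following minimal Python sketch. The offsets, value counts, and float64 type are assumptions (the real layout is only in the attached graphic); the point is that the 480-value segments of each block are appended to growing per-channel lists, which is the effect the question is after.

```python
# Hypothetical layout per block: one float64 for signal 1, one float64 for
# signal 2, then 480 float64 values each for signals 3 and 4.
import struct

VALUES_PER_BLOCK = 480                            # assumed samples per block
BLOCK_BYTES = 8 + 8 + 2 * VALUES_PER_BLOCK * 8    # bytes in one block

def read_channels(path):
    sig1, sig2, sig3, sig4 = [], [], [], []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_BYTES)
            if len(block) < BLOCK_BYTES:
                break
            sig1.append(struct.unpack_from("<d", block, 0)[0])
            sig2.append(struct.unpack_from("<d", block, 8)[0])
            # the 480-value chunks are appended, block after block
            sig3.extend(struct.unpack_from("<480d", block, 16))
            sig4.extend(struct.unpack_from("<480d", block, 16 + 480 * 8))
    return sig1, sig2, sig3, sig4
```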
Attachments:
KRE_DataPlugin_schematic.JPG 84 KB
Phex wrote:
@usac:
Your workaround seems to work, at least for part of the file. If I loop it through the whole 1.5 GB file I get the DLL error "The application is out of memory". There is enough dedicated RAM available, but I guess this is x86 related.
Hello Phex,
Have you tried running this script in the DIAdem 64-bit preview version (which gives your application access to more than 2 GB of RAM)? It can be found at http://www.ni.com/beta and can be installed in parallel with the 32-bit version.
It might get you around the "out of memory" issue ...
Otmar
Otmar D. Foehner
Business Development Manager
DIAdem and Test Data Management
National Instruments
Austin, TX - USA
"For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer it's twice as big as it needs to be."
Similar Messages
-
Binary file data plugin : navigating through different data types
Hi
I am currently trying to load a binary file into DIAdem with the following structure:
block 1
1x 8-bit ascii
12x double
block 2
1280x int16
block 3
1280x int16
block 4
12x double
block 5
1280x int16
block 6
1280x int16
block 7
12x double
block 8
1280x int16
block 9
1280x int16
I managed to read the first string value, but I was unsuccessful in getting any further.
Could anyone please give me a lead on how to proceed?
/Phex
Sorry for reviving this thread, but I need some further advice on two issues:
1. All values set to -32768 in the data actually mean NaN. Right now I convert those values to NaN in a post-processing script using the CTNV function. Is there a way to do this directly in the DataPlugin? My idea was to apply the NoValueSet command, but the bInterfaceLocked variable returned TRUE, so it is not accessible. No problem if there is no other way; I can live with the CTNV function.
2. When I tested the DataPlugin with larger files (up to 1 GB), I quickly ran into DIAdem's maximum channel number limitation (65k available, 400k needed). The only workaround I can think of is to concatenate all data into a few channels and then extract the relevant blocks for the analysis during post-processing, but that does not seem very efficient and will surely bring up other problems. Is there a way to use the DataPlugin interactively, i.e. tell the plugin to read X blocks at file position Y? Or is it possible to save loaded data sequentially to different files within the plugin? Or to increase the number of available channels? Any comments or other ideas are very welcome.
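As a side note on point 1, the sentinel-to-NaN mapping is a one-liner once the values are out of the plugin. A minimal Python sketch; the -32768 marker is taken from the post, the function name and int16 assumption are mine:

```python
# Replace the assumed -32768 "no value" sentinel with NaN, keep the rest.
import math

NOVALUE = -32768

def to_physical(raw_values):
    return [math.nan if v == NOVALUE else float(v) for v in raw_values]
```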
/Phex -
How to determine binary file data set size
Hi all
I am writing specific sets of array data to a binary file, appending each time so the file grows by one data set for each write operation. I use the set file position function to make sure that I am at the end of the file each time.
When I read the file, I want to read only the last 25 (or some number) data sets. To do this, I figured on using the set file position function to place the file position to where it was 25 data sets from the end. Easy math, right ? Apparently not.
Well, as I have been collecting file size data during the initial test runs, I am finding that the file size (using the File Size function, which returns the number of bytes) is not growing by the same amount every time. The size and format of the data being written are identical each time: an array of four double-precision numbers.
The increments I get are as follows, after first write - 44 bytes, after 2nd - 52 bytes, 3rd - 52 bytes, 4th 44 bytes, 5th - 52 bytes, 6th - 52 bytes, 7th - 44 bytes and it appears to maintain this pattern going forward.
Why would each write operation not be identical in size? This means that my basic math for determining the correct file position to read only the last 25 data sets will not be simple. And what if the pattern changes after I have accumulated hundreds or thousands of data sets?
Any help on why this is occurring, or on a method of working around the problem, would be much appreciated.
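For reference, if every record really were a fixed size (4 float64s = 32 bytes, with no prepended size information), reading the last N data sets is simple seek math, as sketched below in Python. The variable 44/52-byte increments in the post suggest extra per-write metadata, which is exactly what breaks this arithmetic; the record layout here is an assumption.

```python
# Seek back N fixed-size records from the end of the file and read them.
import os, struct

RECORD = struct.Struct("<4d")   # 4 doubles, 32 bytes -- assumed bare layout

def read_last_records(path, n):
    size = os.path.getsize(path)
    count = min(n, size // RECORD.size)
    out = []
    with open(path, "rb") as f:
        f.seek(size - count * RECORD.size)   # jump back N records from the end
        for _ in range(count):
            out.append(RECORD.unpack(f.read(RECORD.size)))
    return out
```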
Thanks
Doug
Doug
"My only wish is that I am capable of learning each and every day until my last breath."
Solved!
Go to Solution.
I have stripped the DSC module functions out of the VI and attached it here. I also set default values on all the inputs so it will run without any other inputs, and included my current data files (zipped, as I have four of them). The file names are hard-coded in the VI, so they can be changed to whatever works locally; the path will probably have to be modified anyway.
If you point to a path that has no file, it will create a new one on the first run and the file size will show zero since there is no data in it. It will start to show the changes on each subsequent run.
As I am creating and appending four different files, each with its own set of data but always in the same format (an array of four double-precision numbers), and the file size always increments the same way for all four files (as can be seen in the File Size Array), I don't think it is a function of the actual numbers but some idiosyncrasy in how the binary file is created.
If this proves to be a major hurdle I guess I could try a TDM file but I already got everything else working with this one and need to move on to other tasks.
Thanks for any continued assistance
Doug
Doug
"My only wish is that I am capable of learning each and every day until my last breath."
Attachments:
!_Data_Analysis_Simple.vi 40 KB
SPC.zip 2 KB -
How to output variable names and units used in binary file
My colleague will be giving me binary files (*.dat) generated from within LabView. There are over 60 variables (columns) in the binary output file. I need to know the variable names and units, which I believe he already has set up in LabView. Is there a way for him to output a file containing the variable name and unit, so that I'll know what the binary file contains? He can create an equivalent ASCII file with a header listing the variable name, but it doesn't list the units of each variable.
As you can tell I am not a LabView user so I apologize if this question makes no sense.
Solved!
Go to Solution.
Hi KE,
an ASCII file (probably CSV formatted) is just text, and contains only the data that is intentionally written to it. There is no special function to include units or anything else!
Your colleague has to save that information the same way he saves the names and the values...
(When writing text files he could use Write Text File, Format Into File, or Write To Spreadsheet File; even Write Binary File could be used...)
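The idea above can be sketched language-neutrally: a small text sidecar next to the binary file, one "name<TAB>unit" pair per column. The file name and format here are made up; any convention the two sides agree on works.

```python
# Write and read a hypothetical "name<TAB>unit" sidecar header file.
def write_header(path, columns):
    with open(path, "w") as f:
        for name, unit in columns:
            f.write(f"{name}\t{unit}\n")

def read_header(path):
    with open(path) as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f]
```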
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
Manipulating a Binary File.
I am looking at a binary file that contains many blocks of data. There are several headers in the file, which don't necessarily start at the top. I need to find where the values 3, 61, and 8000 occur in sequence; this header starts the block of data I want to look at. I did a type cast because of the little-endian/big-endian issue. The data is a one-dimensional array of unsigned 16-bit values (two 8-bit words each). I would like to put it into a spreadsheet. Thanks in advance.
I may have misunderstood your statement, but it sounds like you're saying that the file can contain multiple data chunks, each of which is preceded by the header 3,61,8000. If so, you can use something like the attached VI to extract out each data chunk and create a concatenated array of data with which you can do whatever you need. It's crude, and I didn't spend a lot of time trying for a really elegant solution, but you get the general idea. The input is the array that is the array of U16 that is the file you read.
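The search the answer describes can be sketched in a few lines of Python. How chunk length is determined is an assumption here (each chunk runs until the next header or the end of the data); the attached VI may well do it differently.

```python
# Walk a list of U16 words, find each 3, 61, 8000 header, and collect the
# data chunks that follow, concatenation-ready.
HEADER = (3, 61, 8000)

def split_on_header(words):
    starts = [i for i in range(len(words) - 2)
              if tuple(words[i:i + 3]) == HEADER]
    chunks = []
    for k, s in enumerate(starts):
        end = starts[k + 1] if k + 1 < len(starts) else len(words)
        chunks.append(words[s + 3:end])   # data between this header and the next
    return chunks
```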
Attachments:
ExtractData.vi 56 KB -
"Read from Binary File" and efficiency
For the first time I have tried using Read from Binary File on sizable data files, and I'm seeing some real performance problems. To prevent possible data loss, I write data as I receive them from DAQ, 10 times per second. What I write is a 2-D array of double, 1000 data points x 2-4 channels. When reading in the file, I wish I could read it as a 3-D array, all at once. That doesn't seem supported, so I repeatedly do reads of 2-D array, and use a shift register with Build Array to assemble the 3-D array that I really need. But it's incredibly slow! It seems that I can only read a few hundred of these 2-D items per second.
It also occurred to me that the Build Array being used in a shift register to keep adding on to the array, could be a quadratic-time operation depending on how it is implemented. Continually and repeatedly allocating bigger and bigger chunks of memory, and copying the growing array at every step.
So I'm looking for suggestions on how to efficiently store, efficiently read, and efficiently reassemble my data into the 3-D array that I need. Perhaps I could simplify life if I had "write raw data" and "read raw data" operations that write and read only the numbers and no metadata; then I could write the file and read it back in any size chunks I please, and read it with other programs besides. But I don't see them in the menus.
Suggestions?
Ken
Solved!
Go to Solution.
I quote the detailed help for Read from Binary File:
data type sets the type of data the function uses to read from
the binary file. The function interprets the data starting at the current file
position to be count instances of data type.
If the type is an array, string, or cluster containing an array or string, the
function assumes that each instance of that data type contains size information.
If an instance does not include size information, the function misinterprets the
data. If LabVIEW determines that the data does not match the type, it sets data
to the default for the specified type and returns an error.
So I see how I could write data without any array metadata by turning off "prepend array or string size information", but I don't see any way to read it back in such bare form. If I did, I'd have to tell it how long an array to read, and I don't see where to do that. If I could overcome this, I could indeed read in much larger chunks.
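The raw write/read idea from the question can be sketched with Python's stdlib: dump bare float64s with no size metadata, then read the whole file back in one go and regroup it into blocks x samples x channels. The shapes are taken from the post (1000 samples, 2 channels per write); everything else is an assumption.

```python
# Append metadata-free float64 blocks, then read the file back and reshape.
from array import array

SAMPLES, CHANNELS = 1000, 2     # per-write shape from the post

def append_block(path, values):
    """Append one flat block of SAMPLES*CHANNELS float64s, no metadata."""
    with open(path, "ab") as f:
        array("d", values).tofile(f)

def read_all(path):
    """Read the whole file at once and slice it into a 3-D nested list."""
    data = array("d")
    with open(path, "rb") as f:
        data.frombytes(f.read())
    per_block = SAMPLES * CHANNELS
    blocks = len(data) // per_block
    return [[list(data[b * per_block + s * CHANNELS:
                       b * per_block + (s + 1) * CHANNELS])
             for s in range(SAMPLES)]
            for b in range(blocks)]
```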
I'll try the auto-indexing tunnel anyway. I didn't tell you the whole truth, the 3-D array is actually further sliced up based on metadata that I carry, and ends up as a 4-D array of "runs". But I can do that after the fact instead of trying to do it with shift registers as I'm reading.
Thanks,
Ken -
How to write a Data Plugin to access a binary file
hi
I'm a newbie to DIAdem. I want to develop a DataPlugin to access a binary file with any number of channels. For example, if there are around 70 channels, the raw data would be in x number of files, each containing maybe around 20 channels. The raw file consists of a header (one per file), channel sub-headers (one per channel), a calibration data segment (unprocessed data) and test data segments (processed data)...
Each of these contains many different fields under them and their size varies ....
Could you suggest a procedure to carry out this task, taking into consideration any number of channels and any number of fields under them?
Expecting your response....
Jhon
Jhon,
I am working on a collection of useful examples and hints for DataPlugin development. This document and the DataPlugin examples are still in an early draft phase. Still, I thought it could be helpful for you to look at it.
I have added an example file format which is similar to what you described. It's referred to as Example_1. Let me know whether this is helpful ...
Andreas
Attachments:
Example_1.zip 153 KB -
Add Channel data in blocks through Data Plugin.
Hi,
I have to create a plugin for a binary data file. I have no problem creating the plugin, as I know all the required formats, but the issue is that I am supposed to load channel values only for a specific region of interest. For example, if the data file has values for the time 0 to 13 s, I am supposed to read in a channel with data only between 5 and 9 s. Can anyone let me know how to go about selective data loading through the plugin?
Thanks,
Priya
Hi Priya,
This is not a built-in feature in DIAdem, although R&D is looking into the feasibility of adding this as a feature at some point. In the meantime I developed my own back-door way of getting the job done. You can pass the reducing information in a text file of the same name (but different extension) as the binary file, then the DataPlugin can read the data reduction information and declare the binary block to start at the correct byte position, skip the correct number of values, and end at the correct last value for the desired interval. Below is a simple example of a DataPlugin outfitted with the "Red" routines.
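The byte arithmetic behind this approach can be sketched outside DIAdem: given a sample rate and a fixed value size, a time interval maps to a start byte position and a value count, which is what the DataPlugin would declare for the binary block. The function name, the 8-byte float64 values, and the zero data offset are assumptions, not the contents of the attached example.

```python
# Map a [t_start, t_end] interval onto (byte position, value count) for a
# hypothetical equidistant single-channel binary file.
def interval_to_block(t_start, t_end, sample_rate_hz, bytes_per_value=8,
                      data_offset=0):
    first = int(t_start * sample_rate_hz)            # first sample index
    count = int((t_end - t_start) * sample_rate_hz)  # values in the interval
    position = data_offset + first * bytes_per_value
    return position, count
```

For the 5 s to 9 s example from the question at an assumed 100 Hz, this yields a start offset of 4000 bytes and 400 values.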
Let me know if you have further questions,
Brad Turpin
DIAdem Product Support Engineer
National Instruments
Attachments:
GigaLV Red.zip 40 KB -
Writing binary files in block mode?
I am sampling two channels in continuous mode using the LabVIEW Base version. I would like to take the two arrays of values as I sample them and write them out in block mode to a binary file. What's the best way to do that?
Open or create a file for writing before the DAQ starts, then continuously write data to the file, and close it after the DAQ is stopped. Make sure each column of the data array is one channel, so an array transpose might be needed.
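The same open-once, write-per-chunk, transpose-so-columns-are-channels loop looks like this as a Python sketch (the chunk shape and float64 format are assumptions):

```python
# Open the file once before the acquisition loop, append each chunk with one
# sample per row (transpose of the [channels][samples] DAQ read), close after.
import struct

def write_session(path, chunks):
    """chunks: iterable of per-read 2-D data as [channels][samples]."""
    with open(path, "wb") as f:
        for chunk in chunks:
            for row in zip(*chunk):          # transpose: one sample per row
                f.write(struct.pack("<%dd" % len(row), *row))
```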
-Joe
Attachments:
Snap8.gif 8 KB -
Data plugin to append to channels in different channel groups
Hello
I am trying to import particle counter data from a text file that is organized in a difficult way:
The data comes in blocks, with each block taken at one point in time. Each block consists of a table of tab-separated data, e.g.
(start of Block 1)
04.06.2013 - 16:34:13 - 10s/10 - m
N analysed: 1521 P Sum(dCn): 5444,470 P/cm³
N total: 1521 P Sum(dCm): 1,7536 mg/m³
X [µm] 0,148647 0,159737 0,171655 0,184462 0,198224 0,213013 0,228905 0,245984 0,264336 0,284057
dN [P] 0 0 0 6.155.882 29.521.097 101.420.330 83.226.951 85.545.715 159.120.955 158.057.455
dN/N [-] 0 0 0 0,004048 0,019414 0,066696 0,054732 0,056256 0,104641 0,103941
dN/N/dX [1/µm] 0 0 0 0,305126 1.361.669 4.353.255 3.324.322 3.179.709 5.503.848 5.087.496
dCn [P/cm³] 0 0 0 22.040.393 105.696.730 363.123.274 297.984.071 306.286.127 569.713.408 565.905.674
dCn/Cn [-] 0 0 0 0,004048 0,019414 0,066696 0,054732 0,056256 0,104641 0,103941
dCn/Cn/dX [1/µm] 0 0 0 0,305126 1.361.669 4.353.255 3.324.322 3.179.709 5.503.848 5.087.496
(start of Block 2)
04.06.2013 - 16:34:23 - 10s/10 - m
N analysed: 1071 P Sum(dCn): 3833,350 P/cm³
N total: 1071 P Sum(dCm): 0,3581 mg/m³
X [µm] 0,148647 0,159737 0,171655 0,184462 0,198224 0,213013 0,228905 0,245984 0,264336 0,284057
dN [P] 0 0 0 6.981.271 18.451.972 25.599.194 70.809.673 88.966.677 122.491.615 83.363.748
dN/N [-] 0 0 0 0,006521 0,017234 0,02391 0,066137 0,083096 0,114408 0,077862
dN/N/dX [1/µm] 0 0 0 0,491474 1.208.812 1.560.603 4.017.064 4.696.707 6.017.589 3.811.039
dCn [P/cm³] 0 0 0 24.995.600 66.065.063 91.654.830 253.525.502 318.534.467 438.566.469 298.473.857
dCn/Cn [-] 0 0 0 0,006521 0,017234 0,02391 0,066137 0,083096 0,114408 0,077862
dCn/Cn/dX [1/µm] 0 0 0 0,491474 1.208.812 1.560.603 4.017.064 4.696.707 6.017.589 3.811.039
(start of Block 3)
04.06.2013 - 16:34:33 - 9s/9 - m
N analysed: 1277 P Sum(dCn): 5080,103 P/cm³
N total: 1277 P Sum(dCm): 2,2456 mg/m³
X [µm] 0,148647 0,159737 0,171655 0,184462 0,198224 0,213013 0,228905 0,245984 0,264336 0,284057
dN [P] 0 0 6.983.139 13.137.502 30.294.664 52.503.076 114.177.807 82.937.875 112.476.377 117.880.295
dN/N [-] 0 0 0,005468 0,010288 0,023724 0,041115 0,089412 0,064948 0,08808 0,092311
dN/N/dX [1/µm] 0 0 0,442925 0,77543 1.663.971 2.683.579 5.430.769 3.670.984 4.632.772 4.518.256
dCn [P/cm³] 0 0 27.780.321 52.263.604 120.518.214 208.867.707 454.222.092 329.943.409 447.453.461 468.951.325
dCn/Cn [-] 0 0 0,005468 0,010288 0,023724 0,041115 0,089412 0,064948 0,08808 0,092311
dCn/Cn/dX [1/µm] 0 0 0,442925 0,77543 1.663.971 2.683.579 5.430.769 3.670.984 4.632.772 4.518.256
(This is of course only an excerpt. There may be more blocks and there will be more size channels and more channel groups...)
Now, I want to store the dN values in one channel group, the dN/N values in the next group, etc. There should be 10 channels, one for each size class, with the data from the various blocks as sequential values in these channels. (I hope I was able to explain this in a comprehensible way...)
I can generate the groups and the respective channels:
Set oChn = oChnGrp.Channels(2) 'The channel with the mean Dm values is reused for the group channel names
For i = 1 To 25
  sMyLine = File.GetNextLine
  aChnData = Split(sMyLine, vbTab, -1, vbTextCompare) 'Read one line and parse it into an array
  If Not Root.ChannelGroups.Exists(aChnData(0)) Then 'Test whether a group named like aChnData(0) exists
    Set oChnGrp = Root.ChannelGroups.Add(aChnData(0)) ' and create one if necessary
    For j = 1 To oChn.Size
      Call oChnGrp.Channels.Add("" & oChn(j), eR64) 'Add empty channels
    Next
  End If
Next
How would I read the data?
With a lot of nested i,j-for..next loops, I could read a line via
for j = 1 to root.channelgroups.count
aValue = split(file.getnextline)
for i = 1 to 10
root.channelgroups(j).values(i) = aValue(i)
next
next
' then jump to timestamp of next data block with some skiplines, read timestamp and reiterate
or I could try to read all values for one channel group and use skiplines, e.g.
for i = 1 to root.channelgroups.count
while file.position <> file.size
aValue(j) = split(file.getnextline)
file.skiplines(as many as are between the blocks)
wend
' now I have a 2-dim array aValue to transfer into the data channels. But how?
file.position = 1
next
Could file.GetStringBlock be of any help?
Has anybody tackled a similar file structure and could give me a clue or some code snippet?
Thank you...
Michael
Hi Michael,
I would just go line by line in the data file and send the array channel values (one value at a time) to the Channel.Values() property of the matching channel name. It's not going to be fast any way you do it in VBScript.
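The line-by-line pass Brad describes can be sketched in plain Python: one walk over the lines, appending each row's values to per-group, per-size-class lists keyed by the row label (dN, dN/N, ...). The decimal comma and thousands-dot handling is taken from the sample data above; the 10-class count and the label set are assumptions from the excerpt.

```python
# Append each table row's 10 values to the matching group's channel lists.
ROW_LABELS = ("dN", "dN/N", "dN/N/dX", "dCn", "dCn/Cn", "dCn/Cn/dX")

def parse_blocks(lines):
    groups = {label: [[] for _ in range(10)] for label in ROW_LABELS}
    for line in lines:
        fields = line.split("\t")
        label = fields[0].split(" ")[0]      # strip the "[unit]" part
        if label in groups:
            for i, v in enumerate(fields[1:11]):
                # "6.155.882" -> 6155882.0 ; "0,004048" -> 0.004048
                groups[label][i].append(
                    float(v.replace(".", "").replace(",", ".")))
    return groups
```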
Brad Turpin
DIAdem Product Support Engineer
National Instruments -
When I try to download a software update for another program as a binary file (e.g. CCleaner or a Microsoft file) in Firefox, the downloads all just show up in the download window as 'Canceled'. When I go to the destination folder, the download file icon is there with a size of 0 KB. When I click 'Retry', the download appears to complete, but the downloaded file is not in the destination folder. Is there something in the Firefox options that can resolve this problem?
If all .exe files are blocked, antivirus software is most likely configured to block them. See if you can download these with your antivirus and/or security software disabled.
-
Write structure to binary file and append
I have a very simple problem that I cannot seem to figure out, which I guess doesn't make it so simple, for me at least. I've made a little example VI which basically breaks down what I want to do. I have a structure, in this instance 34 bytes of various data types. I would like to write this structure iteratively to a binary file, as the data in the actual structure in my VI will be changing over time. I'm having trouble getting the data appended to the binary file rather than overwritten, since the file size is always 34 bytes regardless of how many times I run the VI or the for loop.
I'm not an expert, but it seems if I write a 34 byte structure to a file 10 times, the final product should be a binary file of 340 bytes (assuming I haven't padded or prepended with size).
One really strange thing is that I keep getting error #1544 when I wire the refnum out wire to the file dialog input on the write function, but it works fine when I wire the file path directly to the write function.
Can someone please swoop in and save me from this remedial task?
Thanks for all the help. NI Forum rules!
Solved!
Go to Solution.
Attachments:
struct_to_bin.vi 13 KB
Did you consider reading the text of the error message? Don't set the "disable buffering" input to true; just leave that input unwired. Why do you want to disable buffering?
As a general practice, the file refnum should be stored in a shift register around the for loop instead of using loop tunnels, that way if you have zero iterations you will still get the file refnum passed through correctly. Also, there is no need for Set File Position inside the loop, since the file location is always the end of the last write unless it's specifically moved to another location. You might want it once outside the loop after you open the file, if you're appending to an existing file. -
I have written a binary file with a specific header format in LabVIEW 8.6 and tried to read the same data file using LabVIEW 7.1. Here I ran into some difficulty. Is there any way to read a LabVIEW 8.6 data file using LabVIEW 7.1?
I can think of two possible stumbling blocks:
What are your 8.6 options for "byte order" and "prepend array or string size"?
Overall, many file I/O functions changed with LabVIEW 8.0, so there might not be an exact 1:1 code conversion and you might need to make some modifications. For example, in 7.1 you should use "Write File"; the "binary file VIs" are special purpose (I16 or SGL). What is your data type?
LabVIEW Champion . Do more with less code and in less time . -
How to Convert Binary Data in Binary File
hi,
my telecom client sends a binary file which is ASN.1 encoded with BER.
How do I handle binary data in Java?
How do I convert binary to hex to ASCII format?
How do I convert binary to octal to ASCII format?
Please help me with this.
regards,
s.mohamed asif
You don't need to convert the data.
All you can do is print it in those formats, like this:
public static String byteArrayToHex(byte[] b) {
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < b.length; i++)
        sb.append(Integer.toHexString(b[i] & 255) + " ");
    return sb.toString();
}
take a look at this
http://java.sun.com/docs/books/tutorial/essential/io/ -
Hi.
I want to load a CAP file (binary code) onto a Java Card (GP 2.1.1).
"Load file data block" means the binary code of the CAP file, right?
And I'm wondering how I change the structure for BER-TLV.
The binary code is [50 4B 03 04 14 00 08 00 08 00 CF 56 86 3D 00 00 00 00 00 00 00 00 ....]
80 E8 00 00 FA C4 *82 0A 29* 50 4B 03 04 14 00 08 00 08 00... (binary code)...
I found out that tag 82 means the length follows in 2 more bytes (0x0A29 here).
Do I need to add anything else to the load file data block structure?
Thanks
845467 wrote: load file data block means binary code of cap file, right?
AFAIK no, not the whole CAP file. You have to find the components in the CAP file, such as "header.cap", "directory.cap" and others, and download only those. These components are described in the JCVM specification. You must download them in the order defined in the specification.
845467 wrote: 80 E8 00 00 FA C4 82 0A 29 50 4B 03 04 14 00 08 00 08 00... (binary code)...
The APDU must look something like this:
80 E8 00 00 FA C4 82 XX XX 01 YY YY DATA 02 ZZ ZZ DATA ....
XX XX - length of the C4 tag
01 YY YY - name and length of header.cap component
02 ZZ ZZ - name and length of directory.cap component
Note: debug.cap and descriptor.cap are optional. In the case of JCOP cards they are downloaded outside of the C4 tag.
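The tag-82 length rule discussed in the thread is the standard BER definite-length encoding, which a short sketch makes concrete: lengths up to 127 fit in one byte, longer ones get an 0x81 or 0x82 prefix plus the length bytes, so 0x82 0x0A 0x29 encodes 2601 bytes, as in the example APDU. This is a generic BER sketch, not card-specific code.

```python
# Encode a BER definite-form length (short form, or long form 0x81/0x82).
def ber_length(n):
    if n < 0x80:
        return bytes([n])
    if n <= 0xFF:
        return bytes([0x81, n])
    return bytes([0x82, n >> 8, n & 0xFF])
```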