Data plugin to append to channels in different channel groups
Hello
I'm trying to import particle counter data from a text file that is organized in an awkward way:
The data come in blocks, each block taken at one point in time. Each block consists of a table of tab-separated data, e.g.
(start of Block 1)
04.06.2013 - 16:34:13 - 10s/10 - m
N analysed: 1521 P Sum(dCn): 5444,470 P/cm³
N total: 1521 P Sum(dCm): 1,7536 mg/m³
X [µm] 0,148647 0,159737 0,171655 0,184462 0,198224 0,213013 0,228905 0,245984 0,264336 0,284057
dN [P] 0 0 0 6.155.882 29.521.097 101.420.330 83.226.951 85.545.715 159.120.955 158.057.455
dN/N [-] 0 0 0 0,004048 0,019414 0,066696 0,054732 0,056256 0,104641 0,103941
dN/N/dX [1/µm] 0 0 0 0,305126 1.361.669 4.353.255 3.324.322 3.179.709 5.503.848 5.087.496
dCn [P/cm³] 0 0 0 22.040.393 105.696.730 363.123.274 297.984.071 306.286.127 569.713.408 565.905.674
dCn/Cn [-] 0 0 0 0,004048 0,019414 0,066696 0,054732 0,056256 0,104641 0,103941
dCn/Cn/dX [1/µm] 0 0 0 0,305126 1.361.669 4.353.255 3.324.322 3.179.709 5.503.848 5.087.496
(start of Block 2)
04.06.2013 - 16:34:23 - 10s/10 - m
N analysed: 1071 P Sum(dCn): 3833,350 P/cm³
N total: 1071 P Sum(dCm): 0,3581 mg/m³
X [µm] 0,148647 0,159737 0,171655 0,184462 0,198224 0,213013 0,228905 0,245984 0,264336 0,284057
dN [P] 0 0 0 6.981.271 18.451.972 25.599.194 70.809.673 88.966.677 122.491.615 83.363.748
dN/N [-] 0 0 0 0,006521 0,017234 0,02391 0,066137 0,083096 0,114408 0,077862
dN/N/dX [1/µm] 0 0 0 0,491474 1.208.812 1.560.603 4.017.064 4.696.707 6.017.589 3.811.039
dCn [P/cm³] 0 0 0 24.995.600 66.065.063 91.654.830 253.525.502 318.534.467 438.566.469 298.473.857
dCn/Cn [-] 0 0 0 0,006521 0,017234 0,02391 0,066137 0,083096 0,114408 0,077862
dCn/Cn/dX [1/µm] 0 0 0 0,491474 1.208.812 1.560.603 4.017.064 4.696.707 6.017.589 3.811.039
(start of Block 3)
04.06.2013 - 16:34:33 - 9s/9 - m
N analysed: 1277 P Sum(dCn): 5080,103 P/cm³
N total: 1277 P Sum(dCm): 2,2456 mg/m³
X [µm] 0,148647 0,159737 0,171655 0,184462 0,198224 0,213013 0,228905 0,245984 0,264336 0,284057
dN [P] 0 0 6.983.139 13.137.502 30.294.664 52.503.076 114.177.807 82.937.875 112.476.377 117.880.295
dN/N [-] 0 0 0,005468 0,010288 0,023724 0,041115 0,089412 0,064948 0,08808 0,092311
dN/N/dX [1/µm] 0 0 0,442925 0,77543 1.663.971 2.683.579 5.430.769 3.670.984 4.632.772 4.518.256
dCn [P/cm³] 0 0 27.780.321 52.263.604 120.518.214 208.867.707 454.222.092 329.943.409 447.453.461 468.951.325
dCn/Cn [-] 0 0 0,005468 0,010288 0,023724 0,041115 0,089412 0,064948 0,08808 0,092311
dCn/Cn/dX [1/µm] 0 0 0,442925 0,77543 1.663.971 2.683.579 5.430.769 3.670.984 4.632.772 4.518.256
(This is of course only an excerpt. There may be more blocks and there will be more size channels and more channel groups...)
Now I want to store the dN values in one channel group, the dN/N values in the next group, and so on. There should be 10 channels, one per size class, with the data from the successive blocks appended as sequential values in these channels. (I hope I was able to explain this in a comprehensible way...)
I can generate the groups and the respective channels:
set oChn = oChnGrp.Channels(2) ' The channel with the mean Dm values is reused for the channel names of the groups
for i = 1 to 25
    sMyLine = file.GetNextLine
    aChnData = Split(sMyLine, vbTab, -1, vbTextCompare) ' read one line and parse it into an array
    if not Root.ChannelGroups.Exists(aChnData(0)) then  ' test whether a group named like aChnData(0) exists
        Set oChnGrp = Root.ChannelGroups.Add(aChnData(0)) ' and create one if necessary
        for j = 1 to oChn.Size
            Call oChnGrp.Channels.Add("" & oChn(j), eR64) ' add empty channels
        next
    end if
next
How would I read the data?
With a lot of nested i,j-for..next loops, I could read a line via
for j = 1 to Root.ChannelGroups.Count
    aValue = Split(file.GetNextLine)
    for i = 1 to 10
        Root.ChannelGroups(j).Values(i) = aValue(i)
    next
next
' then jump to timestamp of next data block with some skiplines, read timestamp and reiterate
or I could try to read all values for one channel group and use skiplines, e.g.
for i = 1 to Root.ChannelGroups.Count
    while file.Position <> file.Size
        aValue(j) = Split(file.GetNextLine)
        file.SkipLines(...) ' as many lines as there are between the blocks
    wend
    ' now I have a 2-dim array aValue to transfer into the data channels. But how?
    file.Position = 1
next
Could file.GetStringBlock be of any help?
Has anybody tackled a similar file structure and could give me a clue or a code snippet?
Thank you...
Michael
Hi Michael,
I would just go line by line through the data file and send the parsed array values (one value at a time) to the Channel.Values() property of the channel with the matching name. It's not going to be fast any way you do it in VBScript.
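Outside DIAdem, the line-by-line parsing suggested above can be sketched like this in Python (illustrative only; the helper names `to_number` and `parse_blocks` are made up here, and a real DataPlugin would be VBScript writing each value to the matching channel via Channel.Values()):

```python
# Minimal sketch: parse the block-structured particle counter excerpt.
# Each block has a timestamp line, two summary lines, then tab-separated
# rows whose first field is the row/group name (X, dN, dN/N, dCn, ...).
# Values use German formatting: ',' as decimal separator.

def to_number(s):
    # "5444,470" -> 5444.47 ; dotted values like "6.155.882" -> 6155882.0
    return float(s.replace(".", "").replace(",", "."))

def parse_blocks(text):
    groups = {}  # row name -> list of rows (one row per block)
    for line in text.strip().splitlines():
        fields = line.split("\t")
        name = fields[0]
        if name.startswith(("X", "dN", "dCn")):
            groups.setdefault(name, []).append(
                [to_number(v) for v in fields[1:]])
        # timestamp and summary lines are skipped in this sketch
    return groups

sample = (
    "04.06.2013 - 16:34:13 - 10s/10 - m\n"
    "X [µm]\t0,148647\t0,159737\n"
    "dN [P]\t0\t6.155.882\n"
    "04.06.2013 - 16:34:23 - 10s/10 - m\n"
    "X [µm]\t0,148647\t0,159737\n"
    "dN [P]\t0\t6.981.271\n"
)
result = parse_blocks(sample)
# Two blocks give two rows per group; channel k of group "dN [P]" is
# column k taken across all blocks.
```

In the DataPlugin you would keep the same walk over the lines, but append each converted value to channel k of the group named in column 0.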
Brad Turpin
DIAdem Product Support Engineer
National Instruments
Similar Messages
-
How do I check to see if a channel group exists in the data portal?
I have a list of channel groups in the Data Portal, but some channel groups are not always created. How do I check with a script whether a channel group exists or was created?
Hello jreynolds!
With the command 'GroupIndexGet'. It returns 0 if the group does not exist.
Matthias
Matthias Alleweldt
Project Engineer / Projektingenieur
Twigeater? -
Binary file data plugin : append block values to channel
I need to create a data plugin for a unique binary file structure, which you can see in the attached graphic.
With these objects
Dim sBlock : Set sBlock = sFile.GetBinaryBlock()
sBlock.Position = ? 'value in bytes
sBlock.BlockWidth = ? 'value in bytes
sBlock.BlockLength = ? 'value in bits
I can read chunks from my binary file. In the end, I want each signal in its own channel. I managed to extract signals 1 and 2, since they have only one value per block at a known byte distance. How can I extract the other channels, which have 480 successive values in each block? I could probably write a loop and read the specific signal part in each block, but how do I append those parts to the relevant channels? I tried creating a new channel and then merging them, but unfortunately functions like ChnConcat do not work in a DataPlugin. Any ideas?
/Phex
PS: Of course I could write a hideous plugin that runs GetNextBinaryValue() through the whole file, but that doesn't seem like a smart idea at a 2 GB file size.
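The append pattern itself is straightforward; here it is sketched generically in Python over synthetic data (a hypothetical layout of one scalar plus one 4-sample signal per block; this is not the DIAdem GetBinaryBlock API):

```python
import struct

# Hypothetical block layout for illustration: each block holds one
# float64 scalar (like signal 1) followed by 4 float64 samples
# (standing in for the 480-value signals).
BLOCK = struct.Struct("<5d")

def split_signals(raw):
    scalar, samples = [], []
    for off in range(0, len(raw), BLOCK.size):
        vals = BLOCK.unpack_from(raw, off)
        scalar.append(vals[0])    # one value per block
        samples.extend(vals[1:])  # append the block's whole chunk
    return scalar, samples

# two synthetic blocks
raw = (BLOCK.pack(1.0, 10.0, 11.0, 12.0, 13.0) +
       BLOCK.pack(2.0, 20.0, 21.0, 22.0, 23.0))
scalar, samples = split_signals(raw)
```

The point is to keep one growing target per signal and extend it block by block while scanning the file once, rather than building separate channels per block and concatenating them afterwards.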
Attachments:
KRE_DataPlugin_schematic.JPG 84 KB
Phex wrote:
@usac:
Your workaround seems to work at least for part of the file. If I loop it through the whole 1.5 GB file I am getting the DLL error "The application is out of memory". There is enough dedicated RAM available, but I guess this is x86 related.
Hello Phex,
Have you tried running this script in the DIAdem 64-bit preview version (which gives you access to more than 2 GB of RAM for your application)? It can be found at http://www.ni.com/beta and can be installed in parallel with the 32-bit version.
It might get you around the "out of memory" issue ...
Otmar
Otmar D. Foehner
Business Development Manager
DIAdem and Test Data Management
National Instruments
Austin, TX - USA
"For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer the glass is twice as big as necessary." -
Write Data [Channel] Replacing Data instead of Appending
Hey Guys,
Apologies if this is really straightforward, but I can't seem to figure out why the Write Data block isn't appending data to the TDMS file; instead it simply overwrites the last value of temperature in the file, so that only the last temperature value can be viewed.
Here's a screenshot of the TDMS document, containing only the last acquired temperature value,
and the block diagram; the part in red writes to the file and is definitely set to append data.
I thought it might be because it's inside the while loop, but Write To Measurement File works and it's also inside the while loop (I don't want to use it since its properties can't be customised).
I would really appreciate any help on the matter; I'm rather new to LabVIEW.
Thanks,
Pete
Hey Pete,
I have written a simple VI which shows you how to use the lower-level functions rather than the Express VIs you are using. If you have any questions about the functions being used, please feel free to ask. I have set the group name to the date and the channel name to "strain", then just fed in random data. I have saved it for 2009; if you require an earlier version please let me know.
Regards
Andrew George @ NI UK
Attachments:
TDMS low level.vi 9 KB -
Delete or replace (NOT append) TDMS Channel data
Is there a way to delete channel data that has already been written to a TDMS file?
Just writing new data to the channel appears to append the new data to the old.
I don't see a way of deleting or replacing the old data with the new.
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction
Solved!
Go to Solution.
Jarrod S. wrote:
You know what's sad, Ben, is I was the one who wrote those TDM -> TDMS and TDMS -> TDM conversion VIs that the poster used in the example to delete a TDMS channel.
All is not lost.
Repost with a link to the same solution. That way I can mark your new reply so new users will find the example with a double bonus of you re-taking that solution and picking up some more Kudos along the way.
Go for it!
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction -
How to write a Data Plugin to access a binary file
hi
I'm a newbie to DIAdem. I want to develop a DataPlugin to access a binary file with any number of channels. For example, if there are around 70 channels, the raw data would be in x number of files, each containing maybe around 20 channels. A raw file consists of a header (one per file), channel sub-headers (one per channel), a calibration data segment (unprocessed data) and test data segments (processed data)...
Each of these contains many different fields, and their sizes vary...
Could you suggest a procedure for this task that takes into account any number of channels and any number of fields under them...
Expecting your response...
Jhon
Jhon,
I am working on a collection of useful examples and hints for DataPlugin development. This document and the DataPlugin examples are still in an early draft phase, but I thought it could be helpful for you to look at.
I have added an example file format which is similar to what you described. It's referred to as Example_1. Let me know whether this is helpful ...
Andreas
Attachments:
Example_1.zip 153 KB -
Hi,
I am trying to create a few DataPlugins and was running through the examples given in the help file, but I get the error message "Object required: 'Root'". I am not sure how to set this.
But from the DataPlugin help file, when I follow the steps and run the example files given, I have no problem (DataPluginExample1.vbs).
I am unable to test and debug the script files because of this error. Could you give me a solution? Also, can you help me set this up programmatically, i.e. when I load a particular data file, how can I run the associated script file to read the data and place it in the Data Portal?
Expecting your quick response.
Thanks,
spiya
Hi Brad & Schumacher,
Thanks a lot for the valuable information.
I was trying out the example given in the help file as below for which I have the error.
Dim MyProperty
For Each MyProperty In Root.Properties
    If MyProperty.Default Then
        Call MsgBox("Standard Root Property: " & MyProperty.Name)
    Else
        Call MsgBox("Root Property " & MyProperty.Name)
    End If
Next
Can you please let me know what I should do to make this work?
Coming to my DataPlugin, I am able to read the channel names and display them too, but I face the following problems:
1. I don't see the data as it appears in my file; some junk values get generated. This happens when I use real values for all the channels.
2. When I want to put in the exact data as per the CSV file, I continuously get an error message regarding the data type for the Add method. I am unable to solve it.
3. Can you also let me know how I can add units to the corresponding channels?
I have attached my code and a sample of my data file. Could you please help me fix these problems? The first comment in the .vbs describes the format of the CSV file.
As I am running out of time, it would be great if you could respond at the earliest.
Thanks,
spiya
Attachments:
csv dp.zip 2 KB -
Issue in Data from DSO to DSO Target with different Key
Hello All,
I am having an issue getting data from a DSO to a target DSO with a different key.
The source DSO has Employee + Wage + Post Number as key, and the target has Employee + Wage Type as key.
DSO semantic grouping works like a Group By clause in SQL; is my understanding right?
Also if someone can explain this with a small example, it would be great.
Many Thanks
Krishna
Dear, as explained earlier, your issue has nothing to do with semantic grouping.
Semantic grouping is only useful when you have written a routine in the transformation for calculations, and in error handling.
Please go through this blog, which explains the use of semantic grouping very clearly:
http://scn.sap.com/community/data-warehousing/netweaver-bw/blog/2013/06/16/semantic-groups-in-dtp
Now coming to your above question
DSO 1
Employee WageTyp Amount
100 aa 200
200 aa 200
100 bb 400
100 cc 300
If we have the semantic group as Employee, Employee as the key of the target DSO, and update type summation,
then the target DSO will have
Emp Amount
100 900
200 200
In this case the wage type will be taken from the last record arriving in the data package. If the record "100 cc 300" arrives last, then the wage type will be cc.
2) Case 2
DSO 1
Employee WageTyp Amount
100 aa 200
200 aa 200
100 bb 400
100 aa 300
If we do semantic grouping with Employee and Wage Type, and have Employee and Wage Type as the key of the target DSO with update type summation,
then the target DSO will have
Emp Wage Amount
100 aa 500
200 aa 200
100 bb 400
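The summation update in case 2 behaves like a SQL GROUP BY with SUM over the target key. A minimal Python sketch of that load (illustrative only; the dictionary stands in for the target DSO):

```python
# Records from "DSO 1", case 2: (employee, wage_type, amount)
records = [("100", "aa", 200), ("200", "aa", 200),
           ("100", "bb", 400), ("100", "aa", 300)]

# DSO load with key (Employee, Wage Type) and update type "summation":
# amounts accumulate per key, like SELECT ... GROUP BY emp, wage.
target = {}
for emp, wage, amount in records:
    target[(emp, wage)] = target.get((emp, wage), 0) + amount
```

With the wage type part of the key, the two "100 aa" records collapse into one row with amount 500, matching the second result table above.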
Hope this helps . -
How to move data connections with SOAP web service in different environments in InfoPath Forms 2010
Hello,
I have an InfoPath form with around 10 SOAP web service data connections. They call a custom web service for a custom business process. The web service URL has a query string parameter which identifies whether it's the Test or the Production web service. The URL looks like this:
http://server/webservice/wsdl?targetURI=testSPRead (for the Test environment)
http://server/webservice/wsdl?targetURI=ProdSPRead (for the Production environment)
When I develop the form in the Dev environment, I use the Test web service URL and save the data connections as UDCX files in the data connection library. After completing development, when I deploy this form in Production, I update the URL in the UDCX file in the Production data connection library. But when I run the form in Production, it throws the error 'Error occurred in calling the web service'. After doing more research, when I extracted the XSN file and opened Manifest.xsf in Notepad, I found references to the 'testSPRead' parameter.
So in the UDCX file the web service URL is '/targetURI=ProdSPRead', but in the Manifest.xsf file there is a reference to the Test web service parameter 'testSPRead', and that's why it's throwing the error.
For testing purposes, I updated the Manifest.xsf file, replaced all occurrences of 'testSPRead' with 'ProdSPRead', updated all the relevant files of the data connections (XML, XSF etc.), saved the result as Form.xsn, and deployed it in Prod, and it worked.
The question is: is this the right way of doing it? There should be a simpler method for cases where the web service has a conditional parameter to distinguish the Test and Production web services.
Does somebody know the right way of doing this? I also thought of adding 'double' data connections (one set for Test and another for Production) and choosing between them by checking the current SharePointServerRootURL, but that's a lot of work: I have 10 web service data connections in my form, so I would end up with 20, and setting their parameters in different rules is too much work.
Please advise. It's very important for me!
Thanks in advance.
Ashish
Thanks for your response, Hemendra!
I hope Microsoft improves this in subsequent patches of InfoPath 2010 or InfoPath 2013, because I don't think this is a very unusual requirement. This defeats the purpose of having UDCX files for data connections. Why is the WSDL's parameter value written into Manifest.xsf and the other XSF and XML files? InfoPath should always take the URL and parameters from the UDCX files.
--Ashish -
How can I open the aspx pop-up panel, and where is my Tamper Data plugin?
I use Firefox 3.6.12 and Windows 7 Ultimate 32-bit. I can't open my admin aspx pop-up panel, and I have the same problem with FCKeditor. When I try to close the page, Firefox asks to save this page (just the one page), I close it, but Firefox stays running in the background. I think the pop-up opens in Firefox but I can't see it. The other problem: the Tamper Data plugin disappeared from my Tools menu after updating. Thanks for everything.
Hello,
'''Try Firefox Safe Mode''' to see if the problem goes away. Safe Mode is a troubleshooting mode, which disables most add-ons.
''(If you're not using it, switch to the Default theme.)''
* On Windows you can open Firefox 4.0+ in Safe Mode by holding the '''Shift''' key when you open the Firefox desktop or Start menu shortcut.
* On Mac you can open Firefox 4.0+ in Safe Mode by holding the '''option''' key while starting Firefox.
* On Linux you can open Firefox 4.0+ in Safe Mode by quitting Firefox and then going to your Terminal and running: firefox -safe-mode (you may need to specify the Firefox installation path e.g. /usr/lib/firefox)
* Or open the Help menu and click on the '''Restart with Add-ons Disabled...''' menu item while Firefox is running.
[[Image:FirefoxSafeMode|width=520]]
''Once you get the pop-up, just select "'Start in Safe Mode"''
[[Image:Safe Mode Fx 15 - Win]]
'''''If the issue is not present in Firefox Safe Mode''''', your problem is probably caused by an extension, and you need to figure out which one. Please follow the [[Troubleshooting extensions and themes]] article for that.
''To exit the Firefox Safe Mode, just close Firefox and wait a few seconds before opening Firefox for normal use again.''
''When you figure out what's causing your issues, please let us know. It might help other users who have the same problem.''
Thank you. -
IDOC Data record is appending with NULL characters instead of spaces.
Hi Gurus,
1) We have created a port with Japanese characters for MATMAS05 (IDOC type) and are trying to download an IDOC into an XML file using the adapter. The actual data is shorter than the length of the IDOC string, so the remaining spaces must be appended to each data record (which fills the segment pad), but on the NON-UNICODE server the data record is padded with NULL characters instead of spaces.
2) For the Japanese port, the receiver port name in the XML file appears with junk characters in the NON-UNICODE client, whereas the UNICODE client displays the correct port name with Japanese characters.
Your help will be appreciated.
Thanks in Advance.
ORA-06512 indicates a numeric or value error; line 2 seems to point to the first statement.
Check the datatypes of your columns/items.
Try to issue an update manually in SQL*Plus to see if it works generally. -
How do I measure single AI data over more than 1 channel
I want to acquire AI data using a Lab PC+ card. Single-point AI data from one channel is no problem (using AI_Vread), but I want to get data from 2 or 3 channels at the same time. My current program in Borland C++ is very basic (my first programming attempt), getting AI data from 1 channel every 20 seconds. How can I get the data from the other channels as well?
The AI_* functions are for single-channel, one-shot operations. For your card, you should use the Lab_ISCAN_* functions.
-
Installation with USI data plugin
Hi,
I have developed a tool using one of the USI data plugins. This plugin has its own MSI installer. How do I include it in my own installer so that it is automatically available to all users who install my tool? I am using LV 8.6.
Thanks,
Ian
Dear Ian
You need to create a batch file that runs msiexec.exe during installation.
A) If you are using LabWindows/CVI, try this method:
The Launch After Installation option is only valid for files that do not need the Windows shell to provide a launch application: .exe and .bat files. If you want to include an MSI file in your installer and launch it as well, you will need to install a launcher batch file that takes the path to the MSI files as a command-line input and calls msiexec.exe on the files. You may find the %dest command-line variable helpful for specifying the installed location of the MSI files (see the Edit Installer Dialog Box - Files Tab help for additional information).
Once you have created a batch file that executes your MSI file, you should:
1. Navigate to the Files tab of the Edit Installer dialog box. Under "Local Files and Directories", navigate to the location of your batch file.
2. Click the Add File button, which adds the batch file to the location you selected in the "Installation Files and Directories" section of that dialog.
3. Right-click your batch file entry in the "Installation Files and Directories" section and select File Options.
4. Select the Launch After Installation option.
B) If using LabVIEW, try this method:
Please see this KnowLedgeBase Article:
http://digital.ni.com/public.nsf/allkb/B4D0F02AA62AAF0D862573BD0070679C?OpenDocument
Many thanks for using the NI forums and I look forward to your reply.
Kind regards
David
Applications Engineer
National Instruments UK -
Searching the metadata of data sets with a DataPlugin
Hello,
I have a question about the capabilities of the DataFinder. If I have understood correctly, in principle only the metadata is searched. At the same time, not only *.tdm files are searched, but for example also text files for which a suitable DataPlugin is available.
But those files don't actually have metadata in the proper sense yet, do they? So what is searched? Or can you define in a DataPlugin which data should count as metadata?
I hope my question is phrased comprehensibly. I would be very grateful for an answer, since I will soon have to present DIAdem in my department (it will presumably be introduced shortly, and now I, as an intern on behalf of my training supervisor, am supposed to convince the staff that this tool offers a number of advantages).
Best regards, Dewi
Hello Dewi,
Let me take a step back to make the relationship between DataPlugins and DIAdem clearer.
Ultimately everything is based on the TDM data model. This data model provides structured management of data and consists of three levels: root level, channel group level and channel level. The root level can hold a very large number (65,000) of channel groups, and each channel group can hold any number of channels. Standard properties (author, group name, channel name, channel unit, minimum and maximum channel values, etc.) are stored at each level. In addition, any number of custom properties (test number, sensor ID, calibration date) can be added at each level. This makes it possible to adapt a TDM file to almost any task.
The DataFinder can search all of these properties, including the custom ones. The bulk data (the measured and/or analysis values) cannot be searched.
A DataPlugin, which in the simplest case is a VBScript, maps a foreign data format onto the TDM data model. The DataPlugin therefore also determines which properties are stored at which level, and that in turn determines what the DataFinder can search for.
Further information can be found on the NI web pages: DataPlugins, TDM data model
I hope this information helps.
Regards
Walter
Error -2500 while writing new channel groups / channels to tdm-file
I'm creating a TDM file using the Express VIs provided by LabVIEW. Everything goes fine when opening the file and setting the file parameters.
When I use the Write Data Express VI to create a new channel group and set its parameters, I receive an "error -2500" accompanied by the call chain of the VI.
I set the parameters via a terminal on the Express VI and via the value settings in the Express VI configuration. Any ideas what might cause this error?
I solved it. I made the mistake of beginning some of the property names with a number, e.g. "1s" for 1 sigma. I renamed them to "sigma_1s" and it works.
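That failure mode (property names starting with a digit) can be caught up front with a simple validity check. A sketch in Python; the rule shown here (identifier-style names) is an assumption for illustration, not the documented TDM property-name grammar:

```python
import re

# Accept names that start with a letter or underscore, followed by
# letters, digits or underscores; "1s" fails, "sigma_1s" passes.
VALID_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def safe_property_name(name):
    # Prefix invalid names rather than rejecting them outright.
    return name if VALID_NAME.match(name) else "_" + name
```

Running names through a sanitizer like this before creating properties avoids the opaque -2500 error at write time.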