How does LabVIEW calculate filter coefficients?

I am attempting to develop a model for a post-processing routine that uses a cascaded IIR lowpass filter, in order to demonstrate to a customer the performance of our data acquisition system without divulging our proprietary source code. Does anyone know how LabVIEW calculates filter coefficients?

Greetings efelgemacher!
I'm going to take a guess, but don't quote me on this. Both MATLAB and LabVIEW use IIR designs to synthesize their analog filter prototypes... in a way, it's a sort of "dumbing down" of an IIR response to get an analog equivalent. If you drill down through any of the analog filter VIs you will see that they ALL use digital filters, just with analog parameters plugged in.
At any rate, the polynomials have been catalogued for decades now, and I suspect they reside in a library somewhere in LabVIEW, at least in "low resolution" form, rather than being generated from scratch every time. It would take very little iteration to generate a couple more decimal places using the IIR engine.
Again, just an educated guess.
Eric
Eric P. Nichols
P.O. Box 56235
North Pole, AK 99705
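
For what it's worth, the textbook recipe for digital IIR coefficients starts from an analog prototype (e.g. the catalogued Butterworth or Chebyshev polynomials Eric mentions) and maps it to the z-domain via the bilinear transform. Below is a minimal Python/SciPy sketch of that standard path, offered as an illustration of the method rather than a statement of LabVIEW's actual internals; the sample rate and cut-off are made-up values.

    from scipy import signal

    fs = 1000.0   # sample rate in Hz (arbitrary example value)
    fc = 100.0    # lowpass cut-off in Hz (arbitrary example value)
    order = 6     # filter order

    # Butterworth design: analog prototype -> bilinear transform -> digital IIR.
    # output="sos" returns the cascaded second-order sections (biquads).
    sos = signal.butter(order, fc, btype="low", fs=fs, output="sos")
    print(sos)    # each row holds b0, b1, b2, a0, a1, a2 for one biquad stage

Realizing the filter as cascaded second-order sections is also the usual way to build a "cascaded IIR lowpass" in practice, since biquads are numerically better behaved than a single high-order transfer function.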

Similar Messages

  • How does LabVIEW calculate Euler Angles from a Direction Cosine Matrix?

    I'm looking to clean up my block diagram by converting LabVIEW math functions to MathScript; however, I can't find out exactly which rotation convention is being used to calculate Euler angles from the Direction Cosine Matrix. Any ideas?
    Here's what I'm using now (see below)... the phi angle is the only one of the three angles that doesn't match the LabVIEW math functions. I've also attached the .vi that demonstrates the calculation both ways.
    %calculate Euler angles from DCM
    %'DCM' is a 3x3 direction cosine matrix
    phi = atan((DCM(2,3))/(DCM(3,3)));
    theta = acos(DCM(3,3));
    psi = atan2(DCM(1,3),DCM(2,3));
    Solved!
    Go to Solution.
    Attachments:
    Apply Direction Cosines 2 ways.vi 36 KB

    BenMedeiros,
    Here is our documentation on the Direction Cosine Matrix: http://zone.ni.com/reference/en-XX/help/371361D-01/gmath/euler_angles_to_direction_cosines/
    It includes the formula as well.
    If you decide to use the 3D Cartesian Rotation VI, you can find that info here: http://zone.ni.com/reference/en-XX/help/371361G-01/gmath/3d_cartesian_coord_rotation_direction/
    I hope this helps.
    Sam S
    Applications Engineer
    National Instruments
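    To make the quadrant issue concrete, here is a hedged round-trip check in Python. It assumes a z-x-z rotation sequence purely for illustration (the NI documents linked above define the actual convention); the point is that atan2 recovers phi in all four quadrants, where plain atan cannot.

        import numpy as np

        def dcm_zxz(phi, theta, psi):
            # DCM for an intrinsic z-x-z rotation: R = Z(phi) @ X(theta) @ Z(psi)
            cp, sp = np.cos(phi), np.sin(phi)
            ct, st = np.cos(theta), np.sin(theta)
            cs, ss = np.cos(psi), np.sin(psi)
            return np.array([
                [cp*cs - sp*ct*ss, -cp*ss - sp*ct*cs,  sp*st],
                [sp*cs + cp*ct*ss, -sp*ss + cp*ct*cs, -cp*st],
                [st*ss,             st*cs,             ct   ],
            ])

        def euler_from_dcm(R):
            # Recover (phi, theta, psi) for the z-x-z convention above
            theta = np.arccos(R[2, 2])
            phi = np.arctan2(R[0, 2], -R[1, 2])  # atan2 keeps the quadrant; atan loses it
            psi = np.arctan2(R[2, 0], R[2, 1])
            return phi, theta, psi

        print(euler_from_dcm(dcm_zxz(2.0, 0.7, -1.3)))  # ~ (2.0, 0.7, -1.3)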

  • How does LabVIEW support redundancy?

    I have a control system and LabVIEW will act as the means of control/monitoring. However, I want to have a primary and a secondary computer with duplicate system setups. How does LabVIEW support this type of redundancy, such that if the primary goes down, the secondary takes control seamlessly?

    Hi,
    LabVIEW has many capabilities that can be combined to create a very robust redundant system. Some of these features include the following:
    Archiving databases periodically
    Monitoring a process on a remote machine
    Executing an application on the backup when the primary has failed
    The most straightforward method of implementing the monitoring is to use discrete memory tags to create a heartbeat. When the heartbeat from the primary machine stops, the backup should take over and begin execution (see the sketch after this post).
    If you have specific questions about how to implement this, feel free to call support by going to www.ni.com/ask.
    Regards,
    Mike
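    As an illustration of the heartbeat pattern Mike describes (not NI code; the broadcast port and timeout below are invented for the sketch), a UDP version looks roughly like this in Python:

        import socket
        import time

        HEARTBEAT_ADDR = ("255.255.255.255", 50555)  # hypothetical broadcast port
        TIMEOUT_S = 3.0                              # silence tolerated before failover

        def run_primary():
            # primary: broadcast a heartbeat once per second
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            while True:
                s.sendto(b"alive", HEARTBEAT_ADDR)
                time.sleep(1.0)

        def run_backup():
            # backup: watch for the heartbeat; take over when it stops
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", HEARTBEAT_ADDR[1]))
            s.settimeout(TIMEOUT_S)
            while True:
                try:
                    s.recvfrom(16)          # heartbeat seen: primary is alive
                except socket.timeout:
                    print("primary silent; backup taking control")
                    break                   # start the control loop here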

  • How does LabVIEW create a hyperlink in a web page?

    Hi
    I was wondering: how does LabVIEW create a hyperlink in a web page?
    Thank you.

    Hi,
    What do you wish to do?
    The most primitive way is to generate an .html file.
    Type your HTML code (using a string constant) and save it with "Write Chars To File.vi", using a filename with the extension .html (the sketch after this post shows the same idea).
    Hope I am addressing your question.
    Cheers!
    Ian F
    Since LabVIEW 5.1... 7.1.1... 2009, 2010
    依恩与LabVIEW
    LVVILIB.blogspot.com
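    Ian's suggestion amounts to writing the markup out as plain text. The same idea in Python (the file name is arbitrary):

        # Write an .html file containing a hyperlink; LabVIEW's
        # "Write Chars To File.vi" does the equivalent string dump.
        html = '<html><body><a href="https://www.ni.com">NI home</a></body></html>'
        with open("link.html", "w", encoding="utf-8") as f:
            f.write(html)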

  • How does SAP calculate tax for employees who pay SFSS(Australia)?

    Hi,
        Good day. How does SAP calculate tax for employees who pay SFSS (Student Financial Supplement Scheme)? The EE is under tax scale 7 - No Leave Loading. Please advise. Thank you.

    Hi,
    Check the QTAX subschema.

  • How does LabVIEW use ActiveX controls?

    How does LabVIEW use ActiveX controls?
    I recently wrote an ActiveX control in VB and noticed that it would not work if the control was set so that its properties could not be set in ambient mode. This may suggest that LabVIEW uses some ActiveX controls only in ambient mode.
    Is this the case, or are there more complexities?

    Dan,
    Which version of LabVIEW do you have?
    As per the KnowledgeBase article below, in versions of LabVIEW prior to 5.1 you would get errors accessing the ambient properties.
    ActiveX Controls in LabVIEW 5.0.1 Containers Cannot Access Ambient Property
    Do your controls use the Ambient.UserMode to determine when the control is being used in a development environment? When embedding ActiveX controls into a container on the front panel, by default, the ActiveX control generates and responds to events, i.e. it is running, even when LabVIEW is in edit mode.
    Right-click an ActiveX container and select Advanced»Design Mode from the shortcut menu to display the container in design mode while you edit the VI. In design mode, events are not generated and event procedures do not run. The default mode is run mode, where you interact with the object as a user would.
    Information can be found in the LabVIEW help files
    Zvezdana S.

  • How does LabVIEW use .nce files?

    I used MAX some time ago but did not make full use of it.
    - How can I use LabVIEW to work with .nce files?
    - Does LabVIEW communicate with MAX?
    Also, I cannot find knowledge base documents on using MAX; the help files alone are not enough. I need more info.
    regards,
    Clement

    astroboy,
    Here is a knowledge base article discussing the possible reasons for your error.
    Also, try to start with some of our shipping examples.
    After importing your MAX configuration I used Cont Acq&Graph Voltage-Int Clk.vi and everything ran just fine. You can get to this example by going to Help>Find Examples, then Hardware Input and Output > DAQmx > Analog Measurement > Voltage.
    Ryan N
    National Instruments
    Application Engineer
    ni.com/support

  • How does the optimiser calculate profit for profit maximisation?

    Hi Experts,
    I have a basic question-
    When we work on 'profit maximisation', how does the system calculate profit? As I understand it, the calculation is revenue minus total cost, and cost is what we define in the system. But how is the revenue calculated?
    What is the logic by which profit maximisation is selected over cost minimisation?
    Regards
    Manotosh

    Hi Manotosh,
    To answer the first part of your question, I am just copy-pasting the F1 help for the field from the Optimizer profile (a toy numeric example follows the quote):
    "Profit Maximization
    Use
    It is the optimizer's objective, when using either of the two possible options: cost minimization or profit maximization, to determine the most cost-effective solution based on the costs defined in Cost Maintenance.
    If you choose Profit Maximization, the system interprets the Penalty for ND field (penalty costs for non-delivery) in the product master (tab page SNP 1) as lost profits. In this instance, the total profit achieved by a plan is calculated according to the following formula: Total of all delivered location products, multiplied by the penalty costs for non-delivery that have been defined for these location products, minus all the costs defined in the optimizer cost profile (such as production costs, storage costs and transportation costs). The total profit is displayed, when you run the optimizer in interactive planning within Supply Network Planning (on the Solutions tab page)."
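    To make the quoted formula concrete, here is a toy calculation with made-up numbers (not SAP data):

        # profit = sum(delivered qty * penalty for non-delivery) - total costs
        delivered = {"prod_A": 100, "prod_B": 40}      # delivered location products
        penalty_nd = {"prod_A": 5.0, "prod_B": 12.0}   # penalty for ND = unit "profit"
        total_costs = 450.0                            # production + storage + transport

        profit = sum(q * penalty_nd[p] for p, q in delivered.items()) - total_costs
        print(profit)  # 100*5 + 40*12 - 450 = 530.0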
    For the second part, someone who has worked on solution design in a project where the Optimizer was used could probably give a better answer. I am not very sure, but I can make a guess:
    You would go for cost minimization in a scenario where the requirement is to minimize overall cost even if that leads to non-fulfillment of requirements - say, where lost sales are not very critical for you, e.g. products like generic bulbs. On the other hand, where requirements fulfillment should also be one of the key objectives, you would go for profit maximization. This could be the case where brand plays a major role, e.g. TVs and costly gizmos.
    Hope someone else will correct me, or add to this.
    Thanks - Pawan

  • How does the high-pass filter determine frequency?

    I am having a hard time understanding how the high-pass filter determines high vs. low frequency. What value is it measuring? A change in RGB values, or contrast? I see the words sharpness and blur, but what values determine the amount of blurriness or sharpness? Is there an actual frequency being measured, or is that just a metaphor comparing it to the audio high-pass filter?
    Any insight into the inner workings of this filter is appreciated.

    Some further explanations:
    Audio signals:
    frequency = cycles per second
    This is the frequency in the time domain.
    An arbitrary signal can be represented by a bundle of harmonic signals (sine, cosine)
    plus a DC-part (direct current, non-periodic component).
    A typical high-pass filter for audio signals:
    The cut-off frequency fc is found at the transition between increasing gain
    (low frequency) and fixed gain 1.0 (high frequency).
    frequency   gain (attenuation)
    0.01 fc   0.01
    0.10 fc   0.1
    1.00 fc   0.707
    10.0 fc   1.0
    100  fc   1.0
    A typical high-pass filter for digital images:
    frequency = cycles per unit length or (better) cycles per pixel
    This is the frequency in the spatial domain.
    The highest frequency is 0.5 cycles per pixel (alternating black and white pixels)
    An arbitrary row in a digital image can be represented by a bundle of harmonic signals
    plus a DC-part.
    Each channel R,G,B is filtered individually, one after the other.
    Frequency and gain are as above.
    A high-pass filter applies an attenuation <<1 to low frequency signal components
    and removes the DC-component entirely. This would result in a black background.
    Therefore the background is shifted to R=G=B=128 (8bit per channel) or
    R=G=B=0.5 (normalized).
    Sharp edges contain stronger high frequency components. These are retained.
    http://en.wikipedia.org/wiki/High-pass_filter
    In this doc we find a simple implementation:
    y[0] := x[0]
    for i from 1 to n
        y[i] := a * y[i-1] + a * (x[i] - x[i-1])
    return y
    x[i] are the values R (or G or B) in a row at column i in the original image;
    y[i] are the values in the same row at column i after filtering.
    The factor (a) indirectly encodes the cut-off frequency.
    This example does not yet apply the shift to R=G=B=128.
    The implementation can differ: a non-recursive filter instead of the recursive one above,
    higher order instead of first order, or a 3*3 or 5*5 pixel filter kernel instead of working
    on each row independently. (A runnable version of the pseudocode appears after this post.)
    I'm not trying to explain how Photoshop works!  There are so many alternatives.
    Best regards --Gernot Hoffmann
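    A runnable version of the pseudocode Gernot quotes, with the mid-grey shift applied afterwards (Python for illustration; the input row and the value of a are made up):

        def highpass(x, a):
            # first-order recursive high-pass; a (0 < a < 1) encodes the cut-off
            y = [x[0]]
            for i in range(1, len(x)):
                y.append(a * y[i - 1] + a * (x[i] - x[i - 1]))
            return y

        row = [10, 10, 10, 200, 200, 200, 10, 10]  # one channel of one image row
        filtered = highpass(row, a=0.8)
        # shift the roughly zero-mean result to mid-grey, as described above
        shifted = [min(255, max(0, round(v + 128))) for v in filtered]
        print(shifted)  # the edges at the 10->200 and 200->10 jumps stand out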

  • How does LabVIEW choose which core to use in a multi-core system

    I have to share my multi-core processor with another application.  When this other application runs, it consumes 100% of one of the cores on the system for several seconds.  I am worried about this other application blocking part of my control code, resulting in delayed control signals and possible loss of control. 
    1.  When one core is completely occupied, will LabVIEW put all operations into the other core? (I am guessing this is yes)
    2.  If an operation is occurring on a core that is subsequently 100% subscribed, will LabVIEW move the operation to another core?  Does LabVIEW (or XP) prevent the other application from stalling operations already in progress? 
    The operations I am most concerned with are for the Multifunction DAQ and serial port calls.   
    I am working on a good sized (>100 vi) control system application in LabVIEW 8.2.1.  It is running on Windows XP.   

    1. Yes - Windows will allocate LabVIEW an open thread on the unused core.
    2. Again, yes: Windows will check for available threads and move LabVIEW to any that are open, regardless of which core they are on.
    You can also force LabVIEW code, or parts of it, to run on a specific processor using the timed loop structure. You can find help for setting that up here (an OS-level analogy follows below).
    Chris Van Horn
    Applications Engineer
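    Outside LabVIEW, the same pinning idea is visible at the OS level. This sketch uses the third-party psutil package (an illustration, not an NI API):

        import psutil

        p = psutil.Process()     # the current process
        print(p.cpu_affinity())  # e.g. [0, 1] on a dual-core machine
        p.cpu_affinity([1])      # pin to core 1, leaving core 0 to the other app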

  • How does LabVIEW convert strings to char* when passing them to a Call Library Node?

    I have a program which calls C++-based libraries which themselves use external libraries (Qt). I pass strings to my library through the Call Library Node, and was wondering how the formatting works. I have to interpret the char* according to Latin-1, UTF-8, UTF-16, ... to convert it to a string which the Qt libraries understand. I need to use char* instead of LV strings because the library is independent of LabVIEW.
    It seems that interpreting the char* as Latin-1 (the default) does not work on Korean systems (for one of our customers), which is understandable when you know that the Latin character set does not include Korean characters. Does anyone know how the char* should be interpreted then? In other words, for non-Latin languages, what exactly is passed to the DLL?

    I don't think that we reinterpret your string in any way; we just reformat the data so that it can be passed to your DLL.
    So assuming you are getting text from, say, keyboard editing, the text should be in the ANSI codepage that the system is running under. That is, if you are running Windows using an English locale, it will be codepage 1252 (Windows Western, Latin 1); if you are running Windows with a Korean codepage, IIRC it will be codepage 949, Unified Hangul Code.
    If you are filling the string with data that you get from an instrument or some other fashion, it could really be anything.
    Here is a list of the codepages that Windows knows about
    http://msdn2.microsoft.com/en-us/library/ms776446(VS.85).aspx
    I do have some experience with Qt as well, and I think that the function you are looking for to create, say, a QString from this is:
    QString::fromLocal8Bit
    but I am not 100% certain about this as I am not a Qt expert.
    Jeff Peters
    LabVIEW R & D
    Message Edited by jpeters on 04-02-2008 12:38 PM
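    In other words, decode the bytes with the system ANSI codepage rather than assuming Latin-1. A Python sketch of that rule (the Korean sample string is just an example):

        import locale

        # On the machine itself, the right codec is the system ANSI codepage:
        print(locale.getpreferredencoding())  # e.g. "cp1252" (Western) or "cp949" (Korean)

        # Simulating the Korean case: bytes as a Korean system would pass them to the DLL
        raw = "안녕".encode("cp949")
        print(raw.decode("cp949"))    # correct: decode with the matching codepage
        print(raw.decode("latin-1"))  # decodes without error but yields mojibake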

  • How does LabVIEW render the data in 2D Intensity Graphs?

    Hello,
    I was hoping someone could explain how LabVIEW renders its 2D intensity data. For instance, in the attachment I have included an image that I get from LabVIEW's intensity graph. The array that went into the intensity graph is 90000x896, and the width/height of the image in pixels is 1056x636.
    Something I know from zooming in on these images, as well as viewing the data in other programs (e.g. Matlab), is that LabVIEW is not simply decimating the data; it is doing something more intelligent. Some of our 90000 lines have great signal to noise, but a lot of them do not. If LabVIEW were simply decimating, our image would be primarily black, but instead there are very obvious features we can track.
    The reason I am asking is that we are trying to write a "Live Acquisition" type program. I know that updating the intensity graph and forcing LabVIEW to choose how to render our data gives us a huge performance hit. We are already doing some processing of the data, and if we can be intelligent and help LabVIEW out so that it doesn't have to figure out how to render everything, while still getting the gorgeous images LabVIEW generates, that would be great!
    Any help would be appreciated! Thanks in advance!
    Attachments:
    Weld.jpg 139 KB

    Hi Cole,
    Thank you for your understanding.  I do have a few tips and tricks you may find helpful, though as I mentioned in my previous post - optimization for images or image-like data types (e.g. 2D array of numbers) may best be discussed in the Machine Vision Forum.  That forum is monitored by our vision team (this one is not) who may have more input.
    Here are some things to try:
    Try adjusting the VI's priority (File»VI Properties»Category: Execution»Change Priority to "time critical priority")
    Make sure Synchronous Display is disabled on your indicators (right-click indicator»Advanced»Synchronous Display)
    Try some benchmarking to see where the most time is being taken in your code so you can focus on optimizing that (download evaluation of our Desktop Execution Trace Toolkit)
    Try putting an array constant into your graph and looping updates on your Front Panel as if you were viewing real-time data.  What is the performance of this?  Any better?
    The first few tips there come from some larger sets of general performance improvement tips that we have online, which are located at the following links:
    Tips and Tricks to Speed NI LabVIEW Performance
    How Can I Improve Performance in a Front Panel with Many Controls and Indicators?
    Beyond that, I'd need to take a look at your code to see if there's anything we can make changes to.  Are you able to post your VI here?
    Regards,
    Chris E.
    Applications Engineer
    National Instruments
    http://www.ni.com/support
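    On the rendering question itself: one plausible reduction that matches what Cole describes (sparse bright lines surviving the resize) is a per-block maximum rather than plain decimation. This is only a guess at the behavior, sketched in Python with numpy:

        import numpy as np

        data = np.random.rand(90000, 896) * 0.1
        data[12345, :] = 1.0                    # one high-SNR line among the noise

        rows_out = 636                          # target height in pixels
        k = data.shape[0] // rows_out           # source rows per screen row
        trimmed = data[: k * rows_out]

        naive = trimmed[::k]                                   # plain decimation
        pooled = trimmed.reshape(rows_out, k, -1).max(axis=1)  # per-block maximum

        print(naive.max(), pooled.max())  # the bright line survives only in pooled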

  • How does LabVIEW 8 handle Ring constants?

    Hi all,
    This is probably more a question for the NI guys, but I am wondering how LabVIEW 8 handles text rings. In the past, constants dropped into a VI that are of an "enum" typedef would be updated when the typedef was updated; however, ring text constants would not. Has this changed in LabVIEW 8?
    Thanks,
    Jason

    Part 1 of 2
    Hi Chris,
    Please forgive me for differing on this point, but you should have written:
    When a control is strictly typed it will force all [front panel] instances to be identical to the strict type definition and update all of the text constants.
    The "strict" part only takes care of the appearance, and those changes only affect instances where they are seen by a user. The strings associated with the enum do not affect block diagram constants.
    The attached image and zip illustrate that the representations of an enum and a ring are very different. An enum has the defined strings "hidden" in the wire, and its numeric value can only be one of the valid indexes of the defined strings. In this example I have shown three different constructs that illustrate the fundamental difference between rings and enums.
    1) Case structures - An enum-driven case will pick up the valid choices as defined by the enum's definition. A ring does not carry this info in the wire, so it is NOT possible to label the cases to match the ring. This brings up another complication: if the case structure does not have a default case defined, there must be a case for each possible selection value. The strings of a ring can be defined at run-time. Is LV supposed to re-compile the case structure's code every time the ring is re-populated? In the case of an enum-driven case this is not an issue, because enums used in a VI that is running or reserved to run cannot be edited.
    2) "Format Into String" can pick up the valid enum strings by decoding the type descriptor of the enum wire. This is not the case with a ring, because the ring is just a number.
    3) The "Flatten To String" function will return the type descriptor of whatever is wired to it. The type descriptor contents of an enum show that every valid selection for that enum is defined in the enum wire. Please note that the type descriptor for a ring has no information about any strings.
    End of part 1 of 2
    Ben
    Message Edited by Ben on 10-15-2005 10:41 AM
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
    Attachments:
    ring vs enum.JPG 78 KB
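    A loose analogy to Ben's point, in Python terms (an illustration, not LabVIEW internals): an Enum value carries its set of valid names with it, like a LabVIEW enum wire, while a ring is just an integer whose display strings live only in the control.

        from enum import Enum

        class Mode(Enum):  # enum: the names travel with the type
            IDLE = 0
            RUN = 1

        ring_value = 1     # ring: the "wire" is just a number

        print(list(Mode))        # the type itself knows every valid choice
        print(Mode(ring_value))  # Mode.RUN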

  • How does LabVIEW store binary data - the header, where the actual data starts, etc.

    I have a problem reading a binary file written by LabVIEW. I wish to access the data (which is stored in binary format) in MATLAB, but I do not understand how the data is streamed into the binary file (the binary file format) when we save data through a LabVIEW program. I saved my data in binary format and was not able to access the same data in MATLAB.
    I found a couple of articles discussing converting LabVIEW to MATLAB, but what I really want to know is: how can I access a binary file in MATLAB that was actually written by LabVIEW?
    Once I know the format LabVIEW uses to store its binary files, it should be easy for me to read the file in MATLAB. I know that LabVIEW stores binary files in big-endian format, which is
    Base Address+0 Byte3
    Base Address+1 Byte2
    Base Address+2 Byte1
    Base Address+3 Byte0
    But I am really confused about the headers and where the actual data starts. Hence I request someone to explain how LabVIEW stores binary data and where the original data starts. Attached below is the VI that I am using to write to a binary file.
    Attachments:
    Acquire_Binary_LMV.vi 242 KB

    Hi Everybody!
    I have attached a VI (Write MAT file.vi - written in LabVIEW 7.1) that takes a waveform and directly converts it to a 2D array where the first column is the timestamps and the second column is the data points. You can then pass this 2D array of scalars directly to the Save Mat.vi. You can then read the .MAT file that is created directly from Matlab.
    For more information on this, you can reference the following document:
    Can I Import Data from MATLAB to LabVIEW or Vice Versa?
    http://digital.ni.com/public.nsf/websearch/2F8ED0F588E06BE1862565A90066E9BA?OpenDocument
    However, I would definitely recommend using the Matlab Script node (All Functions->Analyze->Mathematics->Formula->Matlab Script). In order to use the Matlab Script node, you must have Matlab installed on the same computer. Using the MatlabScript node, you can take data generated or acquired in LabVIEW and save it directly to a .mat (Matlab binary) file using the 'save' command (just like in Matlab). You can see this in example VI entitled MathScriptNode.vi - written in LabVIEW 7.1.
    I hope this helps!
    Travis H.
    LabVIEW R&D
    National Instruments
    Attachments:
    Write MAT file.zip 189 KB
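    If you do want to parse LabVIEW's raw binary output directly, here is a hedged Python sketch. It assumes the Write to Binary File default of prepending the array size as a big-endian int32, followed by big-endian float64 samples; check your VI's settings, and note the file name is a placeholder.

        import struct

        with open("data.bin", "rb") as f:                # placeholder file name
            (n,) = struct.unpack(">i", f.read(4))        # big-endian int32 element count
            values = struct.unpack(f">{n}d", f.read(8 * n))  # n big-endian float64s
        print(values[:5])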

  • How does LabVIEW detect end of file?

    Hello!
    My question is how LabVIEW detects the end of file with the Read From Spreadsheet File VI. What I would like to do is basically read a number of numeric values from a text file and afterwards build an array containing these values. So far I have made a VI that calls it inside a while loop whose conditional terminal is connected to the EOF? output, but it never stops. Could you please suggest a solution to my problem? (I have already seen a similar question in this forum but the answers didn't help me.) The VI, along with an example file to read, is attached.
    Thank you very much
    Attachments:
    readfromfile.vi 9 KB
    Book1.txt 1 KB

    Please take out the while loop and connect the indicator directly to the "all rows" output of the VI.
    That will give you a 2D array of whatever is in the spreadsheet.
    Hope this helps.
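    The same "read it all in one call" advice in Python terms (numpy; file name taken from the attachment):

        import numpy as np

        data = np.loadtxt("Book1.txt")  # whole file as a 2D array; no EOF loop needed
        print(data.shape)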
