Waveform signal

Hi everyone.
I'm a beginner with LabVIEW. I would like to ask about the problem described below:
I'm generating a waveform signal, and I want to have a second signal that is delayed by 1 minute (with the amplitude unchanged). Then I want to compare the amplitudes of the two signals at the same point in time. So far I have only found how to build a signal from discrete values (an array), but I don't know how to find the amplitude at a given time.

Hi
It is expected to see the difference as 0, since you are comparing element i of channel 1 with element i of channel 2; you are not comparing them at the same time instant.
So, instead of calculating the difference of the two Y arrays directly, you should do one of the following:
Option 1: since dt = 1 minute, delete the first element of the channel 1 Y array and then compare it with the channel 2 Y array (do not modify the channel 2 Y array). You would then be comparing Y elements that fall at the same time instant.
Option 2: use a For Loop with shift registers and compare the current value of channel 2 with the previous-iteration value of channel 1 (see the sketch after this post).
Edit: try the attached code.
Attachments:
test a.vi ‏18 KB
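
As a rough illustration of option 2 outside LabVIEW, here is a minimal C sketch (hypothetical array values, assuming both channels share the same dt and the delay is exactly one sample). It compares each channel 2 sample with the channel 1 sample from one time step earlier, which is what the shift register does on the block diagram:

    #include <stdio.h>

    #define N 10

    int main(void)
    {
        /* hypothetical data: channel 2 is channel 1 delayed by one sample */
        double ch1[N] = {0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0, 0.5};
        double ch2[N] = {0.0, 0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0};

        /* compare ch2[i] with the previous ch1 value (shift-register style) */
        for (int i = 1; i < N; i++) {
            double diff = ch2[i] - ch1[i - 1];
            printf("t = %d: diff = %g\n", i, diff);
        }
        return 0;
    }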

Similar Messages

  • How to convert a waveform signal into array of complex numbers

    How do I convert a waveform signal into an array of complex numbers?

    Hi Chaks,
    try this:
    (use Get Waveform Components, then convert the Y array to complex DBL)
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
    Attachments:
    ToCDB.png ‏1 KB
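
    In text form, the same idea amounts to taking the waveform's Y array and promoting each sample to a complex value with a zero imaginary part. A minimal C sketch (hypothetical data; the actual LabVIEW implementation is in the attached picture):

        #include <complex.h>
        #include <stdio.h>

        #define N 4

        int main(void)
        {
            /* hypothetical Y array extracted from the waveform */
            double y[N] = {0.1, 0.5, -0.3, 0.8};
            double complex z[N];

            /* promote each real sample to a complex value (imaginary part = 0) */
            for (int i = 0; i < N; i++) {
                z[i] = y[i] + 0.0 * I;
                printf("z[%d] = %g + %gi\n", i, creal(z[i]), cimag(z[i]));
            }
            return 0;
        }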

  • How to show a waveform signal in Data Dashboard and UI Builder?

    Hey guys,
    I have made a program that collects analog data from two vibration sensors, and I use the SignalExpress power spectrum to get it into the frequency domain.
    I would like to show the signals and the FFT in Data Dashboard and in UI Builder, and I am having some problems doing so.
    I'm using network-shared variables and web services.
    First question:
    It seems that I'm losing some data when I go through the network-shared double variable. I know this is because I'm using a double variable; I would like to send a waveform instead, but that is not possible because the charts in UI Builder and Data Dashboard do not support waveforms, or do they?
    Second question:
    This is almost the same problem as the first question.
    It's not possible to show the data directly from the SignalExpress power spectrum, because the XY graph in Data Dashboard only supports an array of clusters of two numerics, right?
    Is it possible to convert the data from the SignalExpress power spectrum to an array of clusters of two numerics?
    Any other ideas would also be appreciated.

    Hi Genex,
    It is important to note that network-shared variables are not lossless, which might explain the behavior you are seeing. I would also not expect the update rate of UI Builder to be very high, which could lead to data loss.
    For more information about choosing the right network protocol, whether for monitoring only or for streaming all available data, see the following article:
    Using the Right Networking Protocol
    Also, I have posted a reply to your other post regarding your second question.
    Best Regards
    Alex E. Petersen
    Certified LabVIEW Developer (CLD)
    Application Engineer
    Image House PantoInspect
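
    For the second question, the conversion amounts to pairing each magnitude bin of the spectrum with its frequency. A minimal C sketch of that idea (hypothetical field names, assuming the spectrum is described by a start frequency f0, a bin width df, and a magnitude array):

        #include <stdio.h>

        #define NBINS 5

        /* text-mode analogue of a "cluster of two numerics" (one XY point) */
        typedef struct {
            double x;   /* frequency */
            double y;   /* magnitude */
        } XYPoint;

        int main(void)
        {
            double f0 = 0.0, df = 10.0;                      /* hypothetical frequency scaling */
            double mag[NBINS] = {1.0, 0.8, 0.3, 0.1, 0.05};  /* hypothetical magnitudes */
            XYPoint pts[NBINS];

            /* build the array of XY pairs the Data Dashboard XY graph expects */
            for (int i = 0; i < NBINS; i++) {
                pts[i].x = f0 + i * df;
                pts[i].y = mag[i];
                printf("%g Hz -> %g\n", pts[i].x, pts[i].y);
            }
            return 0;
        }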

  • How do I measure the period of one TTL signal and the delay until a second TTL signal?

    Hi,
    I have a PCI-6024E board and the SCB-68 terminal block, and I am running LabVIEW 6.1. If needed, I could install 7.1.
    I have two Honeywell sensors (HOA7720), each consisting of an infrared transmitter and an infrared receiver. When the beam is broken they output 5 V; when the beam is open they output 0 V. These are sensing the presence of a hole in each of two discs passing through the sensors.
    So as disc 1 rotates, the signal is 5 V until the hole passes through sensor 1, then it drops to 0 V until the hole has passed, and then it jumps back to 5 V.
    Disc 2 rotates at exactly the same RPM as disc 1, because they are coupled with a spring-loaded coupling. There is a delay between the first hole passing through sensor 1 and the second hole passing through sensor 2, and the delay increases with torque.
    Using the oscilloscope function and connecting Signal 1 and Signal 2 to AIn0 and AIn1, I have verified that the TTL signals are there, so electrically everything is working. I just need to know how to measure the period and the delay. I'm familiar with A/D, but not with counters/gates etc., and I think that's what is required here.
    The first measurement needed is RPM, so somehow I need to measure the period of the TTL signal.
    The second measurement needed is the delay between the first falling edge and the second falling edge.
    I don't think it's possible to use the millisecond timer for anything, because the accuracy wouldn't be good enough. At 3600 RPM the disc spins at 60 revolutions per second, or one every ~17 ms, and a resolution of about 17 timer ticks per revolution is not good enough.
    Help please!

    It worked!
    I ended up figuring it out today, and it's really simple. I went analog in on AIn14 and AIn15, which were free. The program does a multichannel scan of the signals (collecting an array of 2 waveforms), Signal 1 and Signal 2, at some scan rate; I used 100,000 S/s for 10,000 samples. Then I split the array into two separate waveforms and did an edge detect on each, which returns the position in the array where the falling edge occurs. The difference is the delay (after correcting for the scan rate), unless Signal 2 comes first, in which case the delay is the period minus (or plus) the difference.
    To find the period, I took a subset of the Signal 1 waveform from the first falling edge to the end of the waveform and did the same edge detect, except with the reset setting set to true instead of false so it wouldn't detect the low signal that was now at position 0 in the waveform. It therefore ignores the initial low value, waits for the signal to go high, and then detects the falling-edge position.
    Anyway, it works great, seems very accurate, and gives very stable readings; 3600 RPM is no problem. It detects changes of one or two degrees in angular position, over a range from about 145 degrees to about 280 degrees.
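
    Here is a minimal C sketch of the same algorithm (hypothetical samples and threshold; the real solution uses the LabVIEW edge-detect VI on the acquired waveforms). It finds the first falling edge in each channel, converts the index difference to time using the scan rate, and gets the period from the next falling edge on channel 1:

        #include <stdio.h>

        /* index of the first falling edge after 'start' (-1 if none) */
        static int falling_edge(const double *x, int n, int start, double threshold)
        {
            for (int i = start + 1; i < n; i++)
                if (x[i - 1] >= threshold && x[i] < threshold)
                    return i;
            return -1;
        }

        int main(void)
        {
            double dt = 1.0 / 100000.0;                     /* 100,000 S/s scan rate */
            double sig1[12] = {5,5,5,0,0,5,5,5,5,0,0,5};    /* hypothetical samples */
            double sig2[12] = {5,5,5,5,5,0,0,5,5,5,5,5};

            int e1     = falling_edge(sig1, 12, 0, 2.5);
            int e2     = falling_edge(sig2, 12, 0, 2.5);
            int e1next = falling_edge(sig1, 12, e1, 2.5);   /* next falling edge on channel 1 */

            double period = (e1next - e1) * dt;
            double delay  = (e2 - e1) * dt;                 /* if Signal 2 came first, add the period */

            printf("delay = %g s, period = %g s, RPM = %g\n", delay, period, 60.0 / period);
            return 0;
        }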

  • Convert signal to binary string

    Hi all: I get a waveform signal (a square signal) from a scope, and I want to convert this signal to a binary string. Can I do that in LabVIEW 8.5?
    Thanks

    elyan wrote:
    Hi all: I get a waveform signal (a square signal) from a scope, and I want to convert this signal to a binary string.
    Hello,
    Is this what you want to do?
    http://img514.imageshack.us/my.php?image=scopetostringls7.jpg
    CLAD / LabVIEW 2011, Win XP
    Available for assignments from one week to several months; send me a PM (Paris region and Midi-Pyrénées, farther afield if remote work is possible)
    Kudos always accepted / the little yellow clicks are always appreciated
    Don't forget to mark a correct answer as the solution
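
    In text form, one common reading of "convert to a binary string" (a hypothetical sketch; the linked image shows the actual LabVIEW approach) is to threshold each sample of the square wave and emit a '0' or '1' character per sample:

        #include <stdio.h>

        #define N 8

        int main(void)
        {
            /* hypothetical scope samples of a 0 V / 5 V square signal */
            double y[N] = {0.1, 4.9, 5.0, 0.2, 0.0, 5.1, 4.8, 0.1};
            char bits[N + 1];

            /* threshold halfway between the two logic levels */
            for (int i = 0; i < N; i++)
                bits[i] = (y[i] > 2.5) ? '1' : '0';
            bits[N] = '\0';

            printf("%s\n", bits);   /* prints 01100110 */
            return 0;
        }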

  • Build array or merge signals?

    I know there is more than one way to merge waveform signals to display them in a waveform chart. The two I know are the Merge Signals Express VI and Build Array, but I was wondering which of the two is more efficient. I once read somewhere on this discussion forum that Build Array is one of the more time-consuming operations.
    It is not very critical for what I am doing (a loop of data acquisition takes just a few milliseconds with Build Array), but I was wondering whether one or the other is the better way to do it, or whether they are both the same.
    Thanks.

    Whenever the size of a data structure changes, a new memory allocation needs to be made; it does not really depend on which primitive you use.
    Do the sizes actually change with each iteration, or does the final size remain constant while you are just merging inputs of constant size?
    LabVIEW Champion . Do more with less code and in less time .
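
    The point about allocations can be illustrated with a rough C analogy (not what the LabVIEW compiler literally does): growing a buffer on every iteration forces repeated reallocations, while writing into a buffer preallocated at its final size does not.

        #include <stdlib.h>

        /* grow by one element per iteration: repeated reallocations
           (loosely analogous to Build Array inside a loop) */
        static double *grow_each_iteration(int n)
        {
            double *buf = NULL;
            for (int i = 0; i < n; i++) {
                buf = realloc(buf, (i + 1) * sizeof *buf);  /* may move the whole block */
                buf[i] = (double)i;
            }
            return buf;
        }

        /* preallocate once when the final size is known
           (loosely analogous to initializing the array up front and replacing elements) */
        static double *preallocate(int n)
        {
            double *buf = malloc(n * sizeof *buf);
            for (int i = 0; i < n; i++)
                buf[i] = (double)i;
            return buf;
        }

        int main(void)
        {
            free(grow_each_iteration(1000));
            free(preallocate(1000));
            return 0;
        }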

  • How to split digital waveform channels?

    Hi, could you please let me know how I can split the channels of a digital waveform (let's say the digital waveform has 6 channels) and graph them separately?
    It is very easy to split analog waveform signals (acquired as N channels, N samples), but I don't know how to do it for digital waveforms.

    tintin_99 wrote:
    Let's say I have acquired the digital signal at 1 MS/s. Now I want to decimate the number of samples down to 1 kS/s. Again, I know how to do this for an analog signal using the Decimate function. Do you know a way to do it on a digital signal?
    Decimation, strictly speaking, applies to analog signals: an analog signal is continuous and can take any value within its defined range, whereas a digital signal has just two states (0 or 1). So when you talk about decimating a digital signal, you need to decide on an algorithm for how you want to drop sample points (say, keeping every alternate sample).
    I am not allergic to Kudos, in fact I love Kudos.
     Make your LabVIEW experience more CONVENIENT.
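
    A text-mode sketch of the "keep every Nth sample" idea (hypothetical data; a real digital waveform carries several channels per sample, but the principle is the same for each channel):

        #include <stdio.h>

        #define N_IN   16
        #define FACTOR 4    /* e.g. decimating 1 MS/s down to 250 kS/s */

        int main(void)
        {
            /* hypothetical single-channel digital samples */
            unsigned char in[N_IN] = {0,1,1,0, 1,1,0,0, 0,0,1,1, 1,0,1,0};
            unsigned char out[N_IN / FACTOR];

            /* keep every FACTOR-th sample, discard the rest */
            for (int i = 0; i < N_IN / FACTOR; i++)
                out[i] = in[i * FACTOR];

            for (int i = 0; i < N_IN / FACTOR; i++)
                printf("%d", out[i]);
            printf("\n");               /* prints 0101 */
            return 0;
        }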

  • Custom waveform graph

    Hi, I'm making a virtual lab for my final exam at university. I'm now working on the oscilloscope, and I need to make a custom waveform graph to place on my image of an oscilloscope, whose grid was made in Photoshop. I want to know how to make an invisible waveform graph that shows only the waveform, and whether it is correct to enlarge the graph area to match the fake scope grid or whether I must make other adjustments. Thanks, and sorry for my English.

    Hello Silversky,
    Don't worry about your English; I'm pretty sure I understood your concern. In other words, you are wondering how to make a custom control that shows only the waveform signal and overlay it on a custom image representing an oscilloscope. This is quite a challenging cosmetic task at first, but you can get the best results using custom controls:
    Customizing NI LabVIEW Controls and Indicators
    Customizing the Gauge Control in LabVIEW
    Moreover, I have attached a type definition showing how to customize a waveform graph close to the way you are looking for.
    Have a great day, and let me know if you need additional help.
    Best regards
    Matteo C. - Test Engineer
    Attachments:
    Graph Folder.zip ‏540 KB

  • Save real-time data

    Hello!
    I'm just getting started with LabVIEW. I'd like to verify my signal with an oscilloscope. Then, when I activate a light, the power of this signal must be saved. I have found the oscilloscope VI and am trying to modify it so that it saves this power when necessary. How can I get the power from a waveform signal and save it continuously (i.e. in "real-time mode") to a file?
    Thank you for the help, and sorry for my English...
    Andrea

    Hello Andrea,
    The simplest way to save oscilloscope data is with the Wave File tools. The oscilloscope data will be in the form of a 1D array. I assume you know the impedance of the load you're measuring the power at; if you do, the power is simply E^2/R. For RMS power, be sure to divide E by 2.82. A more direct way, if you also have a current sample available, is to multiply E*I. I've attached a very simple oscilloscope/data-save VI.
    Hope this helps. Lots of luck!
    Chutla
    Eric P. Nichols
    P.O. Box 56235
    North Pole, AK 99705
    Attachments:
    Simple_O-scope.vi ‏19 KB
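
    As a numerical sketch of the formulas above (hypothetical values; this assumes E is the peak-to-peak voltage read from the scope, so dividing by 2.82, roughly 2*sqrt(2), gives the RMS value):

        #include <stdio.h>

        int main(void)
        {
            double R   = 50.0;      /* hypothetical load impedance in ohms */
            double Epp = 10.0;      /* hypothetical peak-to-peak voltage from the scope */

            double Erms = Epp / 2.82;        /* peak-to-peak -> RMS (2 * sqrt(2) ~ 2.82) */
            double P    = Erms * Erms / R;   /* P = E^2 / R */

            printf("Erms = %g V, P = %g W\n", Erms, P);
            return 0;
        }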

  • NonLinearFitWithMaxIters does not give correct results for phase

    Dear all,
    I am trying to fit a sine curve with the NonLinearFitWithMaxIters function of LabWindows, but the results given by this function are widely dispersed (the function does not return the same results from run to run).
    I have two waveform signals, v and i (of 104 points), and I find the phase between the two signals with NonLinearFitWithMaxIters.
    The two signals v and i are measured with an oscilloscope.
    I use the following fit functions:
    v = p1*sin(wt + p2);
    i = p3*sin(wt + p4);
    I use the NonLinearFitWithMaxIters function to fit the data and obtain the parameters p1, p2, p3, p4. Then the phase is calculated as phase = p4 - p3.
    The problem is that the phase calculated between the two signals (v and i) is different for each run under the same conditions.
    For a given condition, I measure the signals v and i several times and calculate the phase p4 - p3 using NonLinearFitWithMaxIters. The goal is to calculate a mean of the phases for that condition. For example, there are cases where the phase = -5 degrees and other cases where the phase = 12 degrees.
    For ten measurements of v and i, the calculated phase differs each time, and I get a large dispersion between the phases.
    I would like to know why I get such a big difference in the phases calculated under the same conditions when using NonLinearFitWithMaxIters.
    I read that this function does not always give correct results. Is there a way to know when the results are correct and when they are not?
    And is there any way to find the phase between the two waveforms accurately?
    Thank you for your precious answers.

    The pseudocode I am using is:

    v_err = NonLinearFitWithMaxIters(array_x, array_v, v_y_fit, 1252, 100,
                                     sinus, v_coef, 2, &v_mean_squareError);

    The fit function is:

    double sinus(double x, double a[], int ncoef)
    {
        return a[0] * sin((w * x) + a[1]);    /* w is the (global) angular frequency */
    }

    I use the same initial coefficients in v_coef for each run:

    v_coef[0] = 0.03;
    v_coef[1] = 0.2;

    These coefficients are chosen arbitrarily.
    In this case, the number of data points is 1252 (the data in array_v).
    The number of iterations is 100.
    For array_x, the spacing between adjacent values is dt = 0.4e-9 (array_x[i+1] - array_x[i] = 0.4e-9):

    for (i = 0; i < 1252; i++) {
        array_x[i] = i * (0.4e-9);
    }

    The mean square error returned by the function when it completes is small, on the order of 0.001.
    I read the help:
    "You must pass a pointer to the nonlinear function f(x,a) along with a set of initial guess coefficients a. NonLinearFitWithMaxIters does not always give the correct answer. The correct output sometimes depends on the initial choice of a. It is very important to verify the final result."
    Does that mean the function cannot be used, since it does not always give correct results? How can we check, in my case, whether the results are good or not?
    I think that in my case the function does not give correct results, but how can I check whether they are good or not? The mean square error is small.
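
    One contributor to run-to-run phase dispersion with this kind of fit (a general observation offered as an assumption, not something confirmed in the thread) is that (A, phi) and (-A, phi + pi) describe the same sine, and phases differing by 2*pi are also equivalent, so separate runs can converge to equivalent but numerically different coefficients. A C sketch of normalizing the fitted coefficients before averaging the phases:

        #include <math.h>
        #include <stdio.h>

        /* Force a positive amplitude and a phase in [-pi, pi), so that the
           equivalent fits (A, phi) and (-A, phi + pi) -- and phases differing
           by multiples of 2*pi -- are reported as the same solution. */
        static void normalize(double *amp, double *phase)
        {
            if (*amp < 0.0) {
                *amp    = -*amp;
                *phase += M_PI;
            }
            *phase = fmod(*phase + M_PI, 2.0 * M_PI);
            if (*phase < 0.0)
                *phase += 2.0 * M_PI;
            *phase -= M_PI;
        }

        int main(void)
        {
            /* hypothetical fitted coefficients for the v and i channels */
            double v_amp = -0.03, v_phase = 3.3;
            double i_amp =  0.05, i_phase = 0.4;

            normalize(&v_amp, &v_phase);
            normalize(&i_amp, &i_phase);

            printf("phase difference = %g degrees\n",
                   (i_phase - v_phase) * 180.0 / M_PI);
            return 0;
        }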

  • Time stamp woes

    Attached is some code I am building to generate and acquire waveform signals and then write them to a text file with appropriate time stamps. As it stands, I have only wired up the writing portion to the acquired signals (I plan to write the generated signals next); however, my time stamps are not being written alongside my voltages as I was expecting. I have had some trouble with this (probably because I am an absolute novice at this) and would appreciate some expert knowledge in this area. Can anyone help me out? The code and a text file from a dummy test are attached. Thanks!
    Attachments:
    Gen_Acq_Signals_BE_UARK.vi ‏119 KB
    Test 1.txt ‏565 KB

    Thank you. I used the Index Waveform Array function and that seemed to do the trick. Since I've got this thread going already, I have a few more questions.
    My end goal is to write voltages in one column and the corresponding time stamps in an adjacent column. What I get when I run a dummy test (writing 1000 samples of an 8000 Hz, 1 V square-wave signal, sampling at 16000 Hz) is attached as a text file. I see three columns: one for the time stamp (check), one for the supposed input voltage (check, though I'm not sure why all the values are the same here), and one more for I have no idea what; it's just a column full of zeroes.
    Any insight into what the third column might be and why my voltage readings are flat in the second column? I would hazard a guess that it is a sampling-rate issue, though I can't pinpoint what exactly.
    On another, unrelated note, in an earlier version of this program I was using shift registers, though I don't know whether I still need to now that I have a different code architecture.
    Attachments:
    Test 2.txt ‏32 KB
    Gen_Acq_Signals_BE_UARK_V2.vi ‏120 KB
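
    For reference, writing a time-stamp column next to a voltage column amounts to reconstructing each sample time from the waveform's t0 and dt. A minimal C sketch of that idea (hypothetical values, not the attached VI):

        #include <stdio.h>

        #define N 8

        int main(void)
        {
            double t0 = 0.0;              /* waveform start time in seconds */
            double dt = 1.0 / 16000.0;    /* sampling at 16,000 Hz */
            double y[N] = {1, -1, 1, -1, 1, -1, 1, -1};   /* hypothetical samples */

            FILE *f = fopen("test.txt", "w");
            if (!f)
                return 1;

            /* one row per sample: time stamp, then voltage, tab-separated */
            for (int i = 0; i < N; i++)
                fprintf(f, "%.9f\t%g\n", t0 + i * dt, y[i]);

            fclose(f);
            return 0;
        }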

  • Problems with adding voltage ai channel to an existing task - LabVIEW

    I am using the Cont Acq Strain Samples (with Calibration).vi that ships with LabVIEW as the basis for a VI I need, which monitors n channels of strain gauges (quarter-bridge type I) combined with another channel for a user-defined voltage (pressure, temperature, etc.). Everything works fine when I use only the strain-null option, but I am having problems when I include the shunt-cal option.
    I define the strain channels and build the strain task using DAQmx Create Virtual Channel.vi, followed by the strain-null option and then the shunt-cal option (so far just like the example). I then have the option to add a user-defined voltage input, again using DAQmx Create Virtual Channel.vi (selected from a list of predefined configurations). Everything is fine so far. I then send the task to the sample clock VI (continuous samples), then to DAQmx Start Task.vi, and then into a While Loop where the DAQmx Read (Analog 1D Wfm NChan NSamp) resides. I then display (and record) the acquired waveform signals at a fairly low rate.
    If I only do the strain null on the strain channels, everything is fine, but if I include the shunt cal on the strain channels, the data becomes mixed up or corrupted in a way I have not yet put my finger on. I get no error messages. Additionally, if I don't include the additional voltage input and only use strain channels, everything is fine. I have looked at the shunt-cal VIs and cannot see how adding another channel to the task could cause this problem. I should emphasize that the strain null and shunt cal are done before the voltage AI channel is added to the task. I am using an SCXI-1520 (for strain gauges) and an SCXI-1121 (other signals) with an SCXI-1600. I have also attached the .llb for this. I am probably missing something basic, but sometimes those basic misconceptions can be the hardest to find. Any ideas?
    Attachments:
    EJL_Cont Acq Strain Samples (with Calibration).llb ‏769 KB

    Yep, something basic. Apparently, in the shunt-cal subVI the task is executed to acquire a set of actual strain readings, which are then used to calculate a percent-of-theoretical strain value. In any case, the task is started and then stopped (not deleted). Apparently it is taboo to add a channel to a task that has already been executed, which I think makes sense; I'm just not sure why I didn't get an error message instead of the thing going along its merry way while spitting out garbage. To correct this, I simply broke the shunt-cal VI apart and executed the part that runs the task after adding the non-strain channel to the task. It works fine now.

  • Time analysis of single pulse

    Hello all!
    I am quite confused... I have acquired a waveform signal from an NI scope, and the result is presented in the graph above; everything seems perfect.
    Using the Transition Measurement.vi, I would like to extract the following information from the waveform signal: the start time and end time of an edge. And here come the troubles!
    The start time and the end time of an edge are the same (their difference = 0), and the start time is even smaller than the very first element of the time array I extracted from the waveform (gulp!).
    However, the transition duration, which is supposed to be calculated the same way I calculated it from the start and end times, is different and seems plausible. How is this possible? Does anyone have an idea where I made a huge mistake?
    Thanks for your help, and have a nice weekend!
    Geoffrey
    Attachments:
    Pulse_Analysis.vi ‏20 KB

    For something as simple as the waveform you showed, I would just write some LabVIEW code that "does what I ask it to do", meaning it carries out an algorithm that I determine, can test, and can understand. I have no idea what NI's "Transition Measurements VI" does, even after reading the help, but it is certainly doing something complex and mysterious.
    I do know, however, why your transition time appears to be zero: it is an artifact of the way it is being reported. Your sampling rate seems to be 250 MHz (wow!), or a point every 0.000000004 seconds. Suppose the transition was really fast, say 2 points wide = 0.000000008 seconds. That is smaller than the display precision of Start Time and End Time, so their difference could well be (rounded to) 0.
    If you did this yourself, you would presumably compute the transition in terms of array indices (i.e. "starts at point 7, finishes at point 9"), subtract (9 - 7 = 2), multiply by the sampling period (4E-9), and get the correct answer (8E-9) displayed correctly.
    Bob Schor
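
    A minimal C sketch of the index-based calculation described above (hypothetical indices, matching the example of a transition from point 7 to point 9 at 250 MS/s):

        #include <stdio.h>

        int main(void)
        {
            double dt = 4e-9;       /* sampling period at 250 MS/s */
            int start_idx = 7;      /* sample where the transition starts */
            int end_idx   = 9;      /* sample where the transition ends */

            double transition = (end_idx - start_idx) * dt;
            printf("transition time = %g s\n", transition);   /* 8e-09 s */
            return 0;
        }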

  • LabVIEW crashes when using the Get Scale Information.vi

    Hi,
    I'm using LabVIEW 6.02 and MAX 2.2.0.3010 on Windows NT 4.0 SP6 with an AT-MIO-16E-10 DAQ board.
    In MAX I've created some virtual channels and also some scales, and data acquisition works fine so far. But now, in my application, I need to know the scale coefficients in order to calculate the voltage of my waveform signal. For this I tried to use Get Scale Information.vi from the DAQ Channel Utilities palette, but every time I run this VI with a valid scale name, LabVIEW crashes.
    Does anyone have an idea what to do?
    Thanks in advance!

    Hi Jeremy,
    I couldn't test my VI on another computer; I only have one with LabVIEW installed.
    I also couldn't post the error message, because there isn't one: LabVIEW crashes immediately after executing this single VI. Another Windows application ('Dr. Watson') then starts to log some memory and task information.
    I will attach the VI.
    In MAX I have defined three different linear scalings.
    I hope you can help me.
    Thanks so far, Andreas
    Attachments:
    DAQ_Scale_info.vi ‏33 KB

  • Slow Log Rate Needed using cDAQ-9174

    All,
    I am logging strain data for a thermal sweep we are doing on one of our products. The sweep occurs over a long period of time (48 hours), so realistically I want to log one sample per second. I tried using continuous sampling and then subsetting it, but I can't get it to 1 sample per minute, which is where I want it to be. Does anyone have suggestions on using the subset (or another method) to slow down the rate at which data is captured for the log?
    Thanks,
    Adam

    Nice, it sounds like you've figured it out.
    The 9235 does indeed have a minimum sample rate of 794 S/s when using the internal timebase.
    For future reference, here's what I would do to simplify things a little:
    Set your "Samples to Read" to match your "Sample Rate".
    - Since we have a limitation of nothing less than 794 S/s, just set them both to 1k for simplicity.
    Create an Amplitudes and Levels step and select your strain channels as the "Export to DC Value".
    - This is a step I use for almost every test, strain especially. It takes your raw waveform signal and turns it into a scalar signal. I'm honestly not sure what's going on behind the scenes, but there is some averaging between the samples to read and the sample rate; basically it works out to be a smoothing filter, which yields a much cleaner final output signal.
    To determine your actual sample rate when recording scalar signals, divide the sample rate by the samples to read (see the sketch below).
    In our case, 1000 samples to read at 1000 Hz gives 1 S/s. Try it; I think it'll be in line with what you're looking for. I just ran a test file to be sure, and with the above settings I recorded for 10 seconds and got 10 data points in my data file.
    I never save my data to a log file either; I save to ASCII exclusively, but the results should be the same whether you use the "Record" option or the "Save to ASCII" step.
    Hope that helps!
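
    A rough text-mode sketch of that one-scalar-per-block behavior (hypothetical block averaging; the exact processing inside the Amplitudes and Levels step is not documented in this thread):

        #include <stdio.h>

        #define RATE  1000   /* sample rate in S/s */
        #define BLOCK 1000   /* samples to read per iteration */

        /* average one block of raw samples into a single logged value */
        static double block_mean(const double *x, int n)
        {
            double sum = 0.0;
            for (int i = 0; i < n; i++)
                sum += x[i];
            return sum / n;
        }

        int main(void)
        {
            double block[BLOCK];

            /* hypothetical acquired block: a constant strain reading */
            for (int i = 0; i < BLOCK; i++)
                block[i] = 123.4e-6;

            /* logging rate = RATE / BLOCK = 1 value per second */
            printf("logged value = %g (one value every %g s)\n",
                   block_mean(block, BLOCK), (double)BLOCK / RATE);
            return 0;
        }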
