Waveform distortion

Hi, 
I am using the NI 5640R to generate an array signal and read it out on an oscilloscope. However, the waveform on the oscilloscope appears distorted. Can somebody help me find the reason?
Thanks
Austin
Attachments:
original.jpg ‏119 KB
distortion-1.jpg ‏106 KB
distortion-2.jpg ‏115 KB

Hi,
It would be helpful to give a little more information on your signal connections, generation rate, etc.  Something to initially check out though would be the probe compensation on your scope (adjust the compensating capacitor in the probe).  The document below shows how an overcompensated or undercompensated probe can affect a signal measurement.
http://zone.ni.com/devzone/cda/tut/p/id/4721 
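To make the compensation effect concrete, here is a small numerical sketch (Python/NumPy, with assumed component values rather than any particular probe): it models a 10:1 probe as an RC divider and shows how the high-frequency gain departs from the low-frequency gain when the compensating capacitor is under- or over-adjusted.

```python
import numpy as np

R1, R2 = 9e6, 1e6           # probe series resistance, scope input resistance
C2 = 90e-12                 # scope input + cable capacitance (assumed value)
f = np.logspace(1, 6, 200)  # 10 Hz .. 1 MHz
w = 2 * np.pi * f

def divider_gain(C1):
    # Impedance of each RC pair, then the divider ratio seen by the scope
    Z1 = R1 / (1 + 1j * w * R1 * C1)
    Z2 = R2 / (1 + 1j * w * R2 * C2)
    return np.abs(Z2 / (Z1 + Z2))

# Flat response requires R1*C1 = R2*C2; anything else tilts the square-wave edges
for label, C1 in [("under", 5e-12), ("correct", C2 * R2 / R1), ("over", 20e-12)]:
    g = divider_gain(C1)
    print(f"{label:8s} gain @10 Hz = {g[0]:.3f}   gain @1 MHz = {g[-1]:.3f}")
```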
Also, I would recommend posting this question on the IF-RIO discussion forum as well, which can be found here:
http://forums.ni.com/ni/board?board.id=ifrio 
Regards,
Jordan F
National Instruments

Similar Messages

  • Sensitivity of iMac to power fluctuations, waveform distortions

    After months of working fine, my APC UPS BE750G apparently shuts down when there is a line-voltage fluctuation or waveform distortion (based on observation during very brief power fluctuations during storms, such as those that cause the lights to dim briefly), and shuts down everything connected to the UPS. In other words, the UPS fails to go into battery backup even though the battery is fully charged.  APC is replacing the UPS under warranty.  I have been discussing with APC how I might avoid, with the new unit, the possibility that local voltage irregularities might damage the UPS circuit.  One suggestion they give is:
    Use [this UPS] voltage sensitivity setting with equipment [that] is less sensitive to fluctuations in voltage or waveform distortions -- setting status to low.
    Has anybody had the problem that an APC UPS that seems to be working fine (all green lights) fails during minor weather-related line fluctuations?
    Where do I find the range of "sensitivity to fluctuations in voltage or waveform distortions" for the iMac 3.06 GHz Intel Duo?
    Has anybody adjusted the sensitivity on an APC UPS, and with what results?
    Thanks

    The Format Value function will not accept an array so put it inside a loop. Use a shift register to append all the values from the array to the string.
    Lynn
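    For readers more comfortable with text languages, the same pattern looks roughly like this (a hypothetical Python sketch, not the LabVIEW code itself): the string plays the role of the shift register, and the loop auto-indexes the array.

    ```python
    values = [1.5, 2.25, 3.0]      # the array of numbers
    result = ""                    # plays the role of the initialized shift register
    for v in values:               # the For Loop auto-indexes the array
        result += "%.3f, " % v     # append each formatted value to the string
    print(result)
    ```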

  • Waveform Distortion after Multiplication?

    Hi all,
    Newbie on the line here. I wrote a VI that obtains the reading of channels 1 and 2 from an oscilloscope via GPIB. The readings come out to be decent looking sinusoids when plotted on a waveform graph. However, after multiplying the two, the product is quite distorted and looks nothing like what you would expect. The waveform of the product contains huge jumps from negative to positive values even when outputs #1 and #2 are very continuous. I've also noticed that if you adjust the scaling on the oscilloscope to a certain value, the distortion goes away and the product looks like a normal sinusoid as well. At all other scaling values, the product waveform is distorted. Since the output wires of channels 1 and 2 (to the individual plots) are from the same branch as those going into the multiplication block, I'm wondering how this would make sense. Anyone have an idea?
    The output graphs are attached as the following. Much thanks in advance for the help.
    Attachments:
    waveforms_decent.JPG ‏96 KB
    waveforms_notgood.JPG ‏107 KB
    block.JPG ‏109 KB

    What data type are you type casting to?  If it is I8, you are getting overflow when multiplying.  I'm looking at time t = 60 on the graph.  One curve has a value of 10 and the other has a value of 24 at t = 60.  The product would be 240, yet on the power graph at t = 60 the value is negative.  If you are type casting to an I8, the result would indeed be negative, since the I8 range is -128 to 127 and 240 falls into the negative range in this case.  If so, type cast to U16 to prevent overflow.
    - tbob
    Inventor of the WORM Global
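    If you want to see the wraparound tbob describes without wiring anything up, a quick NumPy check (purely illustrative, not the attached VI) reproduces it:

    ```python
    import numpy as np

    a = np.int8(10)
    b = np.int8(24)
    print(a * b)                          # -16: the true product 240 exceeds the I8 maximum of 127 and wraps
    print(np.uint16(a) * np.uint16(b))    # 240: the same product fits comfortably in U16
    ```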

  • Waveform chart X-Axis reverses itself when upper limit is edited in Absolute Time - problem may be linked to Timestamp Control?

    I've seen this problem for some time now, and it still exists in the Silver toolset Waveform Chart in LV 2012.  If the chart is in strip mode, and you edit the upper limit value to a point in the future, the X-Axis will invert (higher values/ later time to the left), to the limit of the Chart History length.  This happens whether the chart is active (logging data on a compiled executable) or even when in design mode, though it's more consistent when there's data in the chart.  It's not history length related as far as I can tell, as I set the length to 7 days of sampling @ 1-second intervals (605K), and it happens when I try to set the end time to an hour in the future with only a few minutes of data collected - suddenly the right Absolute Time becomes the time my last sample point was collected, while the left Absolute Time becomes nearly 7 days in the future (so neither end point is what I was setting it to).
    It's nice that the chart gives the option of editing the limits, but as such, it's important that they work as you would expect them to.
    Also - until this is addressed, is there a way to lock it against editing?  Until a fix is made, my workaround is to put a transparent button over the range start & end points, and a transparent flat rectangle above all other mid-range points, and pop-up a data entry form with a Timestamp control to allow editing.  Unfortunately, this doesn't work as I would expect, as typing "11 AM" in the Timestamp control over a prior value of say "10:05:34.232 AM" ends up becoming "11:05:34.232 AM" instead.  ???  Another error - or is this by design?  If it's by design, is there an option to make it behave as I would prefer (11 AM = 11:00:00.000 AM), as Excel behaves with timestamps?  I can't help but suspect this may be linked to the chart axis issue.
    Also - I just built a simplified chart modeled somewhat on my current project, and could not get this to recur.  BUT... had the strangest thing happen:  My sample data was generated using the Trig functions for Sine and Cosine, and at one point my waveforms distorted on the display, so I'm attaching that here plus the simplified chart project.
    Last - my system is LV 2012 on Win7 Pro-x64.
    Thanks!
    Erik
    Attachments:
    Chart data distorted.png ‏45 KB
    Waveform Chart Flaw.zip ‏17 KB

    I am not so sure that this is a bug, and I have not been able to reproduce this behavior that you are describing. 
    But you can lock it from editing by right-clicking on the graph and going to Properties >> Appearance >> Enabled State --> Disabled.
    Also, a word of advice for the future: you will get more replies from the community with shorter posts that keep to one question per post. Summarize what the issue is, and put the detailed documentation and instructions to reproduce in the actual VI.
    Huntington W
    National Instruments
    Applications Engineer
    ***Don't forget to give Kudos and Accepted as Solution where it is deserved***

  • Generated Pulse waveform is distorted when I deliver the signal to the output port in the DAQmx

    Problem: The generated pulse waveform is distorted when I deliver the signal to the output port of the DAQmx device.
    Environment: Windows XP sp3 (32bit), Visual Studio 2010 sp1, NI-Measurement Studio 2010
    Device: NI - DAQmx PCI 6251
     Analog Input: 1.00 MS/s multi-channel (aggregate)
     Analog Output: 2 channels, 2.00 MS/s
    Reference Example: AO_ContGenVoltageWfm_IntClk / AI_ContAcqVoltageSamples_IntClk
    Generated Pulse:
    1) AO0 = square waveform / 0–5 V / 8 kHz / 0.5 µs/sample / 50% duty cycle
    2) AO1 = square waveform / 0–5 V / 8 kHz / 0.5 µs/sample / (inverted copy of AO0)
    Description: I’d like to deliver a waveform stream that satisfies the specified constraints to the two output channels of the DAQmx device. To verify the accuracy of the generated waveform, I wired the analog output channels (2 channels) to the analog input channels (2 channels) of the DAQmx device. The result of this experiment shows signal distortion. Since the waveform has to satisfy both a high frequency (8 kHz) and a very short interval between samples (Δt = 0.5 µs/sample), I cannot freely adjust some parameters of the functions in the referenced VC++ example. The following formulas show the approach used to deliver the generated pulse waveform to the output port while satisfying the constraints.
    Analog Output Channel
     Frequency = 8,000 cycles/s (constraint)
     Samples per Buffer = 2,000,000 = 2×10⁶ samples/buffer
     Cycles per Buffer = 80,000 cycles/buffer
     Samples per Channel = 1,000,000 = 1×10⁶ samples/channel
     Sample Rate = Frequency × (Samples per Buffer / Cycles per Buffer)
                 = 8,000 × (2×10⁶ / 80,000) = 2×10⁶ samples/s
     Δt = 1 s / (2×10⁶ samples/s) = 0.5×10⁻⁶ s/sample (constraint)
     Buffer Cycles = Sample Rate / Samples per Channel
                   = (2×10⁶ samples/s) / (1×10⁶ samples/channel)
                   = 2 buffer updates/s
    Analog Input Channel
     Samples per Channel = 1,000,000 = 1×10⁶ samples/channel
     Sample Rate = 1 MS/s (aggregate) / 2 channels = 5×10⁵ samples/s per channel
    Program Code
    AO_ContGenVoltageWfm_IntClk / AI_ContAcqVoltageSamples_IntClk (VC++ Example)
    Result: The proposed approach was implemented in the experiment environment (VS2010, MStudio 2010). As shown in Figure 1, the result is unsatisfactory: although I intended to make a 'square' pulse wave, the output looks like a 'trapezoid' pulse wave. However, another result, obtained with a different parameter condition, does look square rather than trapezoidal (Figure 2).
    Please let me know what conditions cause the signal distortion. (AO0 = green line / AO1 = red line)
    [Figure 1] Frequency 8000 Hz / Cycles per Buffer = 8000 result
    [Figure 2] Frequency 1000 Hz / Cycles per Buffer = 1000 result
    Questions:
    1) Is it possible to deliver the generated pulse wave that satisfies the constraints (f = 8 kHz, Δt = 0.5 µs/sample) to the output port without distortion using the PXI 6251?
    (Could the problem be solved by using LabVIEW or MAX?)
    2) Are there any mistakes in the proposed approach (hardware or software)?
    3) What is the meaning of Cycles per Buffer? Could it affect the result?
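    The original code is the VC++ example, but it can help to reproduce the generation with a minimal script as a sanity check. The sketch below uses the Python nidaqmx package (an assumption on my part, as is the device name "Dev1") to generate the same 8 kHz, 0–5 V complementary square waves at 2 MS/s, with a buffer holding a whole number of cycles so the regenerating buffer has no discontinuity:

    ```python
    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    fs = 2_000_000        # 2 MS/s update rate -> dt = 0.5 us/sample
    f = 8_000             # 8 kHz square wave
    samps = 250_000       # exactly 1000 cycles per buffer, so the buffer repeats seamlessly

    t = np.arange(samps) / fs
    ao0 = np.where((t * f) % 1.0 < 0.5, 5.0, 0.0)   # 0-5 V, 50 % duty cycle
    ao1 = 5.0 - ao0                                  # complementary (inverted) copy

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0:1", min_val=0.0, max_val=5.0)
        task.timing.cfg_samp_clk_timing(fs,
                                        sample_mode=AcquisitionType.CONTINUOUS,
                                        samps_per_chan=samps)
        task.write([ao0.tolist(), ao1.tolist()], auto_start=False)
        task.start()
        input("Generating... press Enter to stop.")
        task.stop()
    ```

    If the loop-back measurement still looks trapezoidal, keep the AI side in mind as well: the board's 1 MS/s aggregate input rate split across two channels gives at most 500 kS/s per channel, so a fast edge will always be smeared over at least one input sample interval on the acquired plot even if the generated signal itself is clean.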

    Hi Brett Burger,
    Thanks for your reply. For your information, I have set the sampling rate to 10000; as for the sound format, I have set the bits per sample to 16-bit, the rate to 11025, and the sound quality to mono. I tried your method by changing the sampling rate to 8 k, but my program still encounters the same problem.
    I also wish to create a button that generates a preformatted report containing VI documentation, data the VI returns, and report properties such as the author, company, and number of pages, only when I click the button.  I have created this in my program, but I am not sure why it is not working. Can you help troubleshoot my program, or do you have any samples to provide? Hope to hear from you soon.
    Many thanks.
    Regards,
    min
    Attachments:
    Heart Sounds1.vi ‏971 KB

  • Waveforms not appearing correctly after editing in Audition

    Hi,
    When I use 'Edit Clip in Adobe Audition', once I've edited and saved the audio I go back into Premiere but the audio waveforms are totally wrong until I zoom right into the sequence.
    Also, after I've come back into Premiere, the first time I try to play back the edited audio it gets massively distorted. It's really annoying since I work most of the time using headphones, but once I've restarted Premiere it plays back fine.
    Mac Pro Late 2013
    Processor  3.5 GHz 6-Core Intel Xeon E5
    Memory  32 GB 1866 MHz DDR3 ECC
    Graphics  AMD FirePro D500 3072 MB
    Software  OS X 10.9.5 (13F34)
    Premiere Pro CC 2014.2
    Rob.

    Thanks for the reply, but it's of no help; I already know all that stuff.  What I am actually doing is creating the site using iWeb and then uploading it to a server using Cyberduck, as the server I'm uploading to doesn't seem to like the way iWeb uploads it :?
    When I click on your link I just get the same thing as before.  Only this time I've added a bit more to the page...
    ...See the screenshot here |>>> http://f.cl.ly/items/2Q3L0N340M0p3o2m0Q11/Screen%20shot%202011-11-12%20at%2013.46.57.png

  • Problems with waveform integration

    What I need to do is waveform integration from an acceleration vibration signal to a velocity or displacement signal.
    To take the signal to velocity, I first filter the acceleration signal (HPFilter, 5 Hz) to remove any offset and then perform the integration.
    To test the method, I use a signal composed of one sine waveform with amplitude 1. The test consists of varying the frequency of the sine waveform and evaluating the value obtained from the FFT of the integrated signal.
    For a frequency of 10 Hz, I obtained a value of 0.015931, which is close to the expected 1/(2·π·10) = 0.015916.
    For a frequency of 150 Hz, I obtained a value of 9.8130e-4 instead of the expected 1/(2·π·150) = 1.061e-3.
    For a frequency of 450 Hz, I obtained a value of 7.91922e-5 instead of the expected 1/(2·π·450) = 3.537e-4.
    As you can see, the higher the frequency, the higher the deviation from the expected value. I am trying to solve this, but at the moment I don't know why this phenomenon occurs.
    Any help will be well received.
    CJMV
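    One way to track down where the loss comes from is to reproduce the test outside LabVIEW with known parameters. Here is a minimal NumPy sketch (the sampling rate is an assumption, since the post doesn't state it, and the high-pass filter is left out) that integrates a unit-amplitude sine and reads the FFT amplitude at the test frequency:

    ```python
    import numpy as np

    fs = 5000.0          # assumed sampling rate; the original post does not state it
    f = 150.0            # test frequency of the unit-amplitude sine
    n = int(fs)          # one second of data -> an integer number of cycles, no leakage

    t = np.arange(n) / fs
    accel = np.sin(2 * np.pi * f * t)

    # Rectangular-rule integration, the simplest stand-in for the LabVIEW integrator
    vel = np.cumsum(accel) / fs

    # Single-sided FFT amplitude at the test frequency
    spec = 2.0 * np.abs(np.fft.rfft(vel)) / n
    measured = spec[int(round(f * n / fs))]
    expected = 1.0 / (2.0 * np.pi * f)
    print(f"measured {measured:.6e}  expected {expected:.6e}  ratio {measured / expected:.4f}")
    ```

    If this idealized version matches 1/(2·π·f) closely at 150 Hz and 450 Hz while your VI does not, the deviation is coming from the filter or from the relation between your sampling rate and the signal frequency rather than from the integration itself.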

    OK, that's a really good point; I agree with it.
    Have you made a study of the relationship between the sampling frequency and the maximum frequency of the signal?
    I am going to do it anyway, but just to compare...
    Another point related to the waveform integration:
    I read in an application note that it is recommended to use an FIR filter instead of an IIR because it introduces less phase distortion in the signal. However, if you ask for the filter information and plot the phase of H(ω), you will see a big distortion in phase introduced by the filter.
    On the other hand, if you do the same thing with an IIR filter, you will find that the phase distortion is less.
    Conclusion: I am using an IIR filter with an Fc of 5 Hz.
    Greetings,
    CJMV
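    If you want to compare the two filters' phase behaviour directly, a small SciPy sketch (the sampling rate and filter orders here are assumptions, not taken from the application note) lets you look at the unwrapped phase of each. Keep in mind that a linear-phase FIR shows a large but perfectly straight phase slope, which is a pure time delay rather than a distortion of the waveform shape; that is worth remembering when reading the "phase H(ω)" plot.

    ```python
    import numpy as np
    from scipy import signal

    fs = 5000.0                  # assumed sampling rate
    fc = 5.0                     # 5 Hz high-pass cutoff, as in the post

    # Linear-phase FIR high-pass (odd tap count so the high-pass design is valid)
    fir = signal.firwin(501, fc, pass_zero=False, fs=fs)

    # 4th-order Butterworth IIR high-pass
    b_iir, a_iir = signal.butter(4, fc, btype='highpass', fs=fs)

    w, h_fir = signal.freqz(fir, 1, worN=2048, fs=fs)
    _, h_iir = signal.freqz(b_iir, a_iir, worN=2048, fs=fs)

    # Unwrapped phase in the passband; the FIR phase is a straight line (pure delay)
    phase_fir = np.unwrap(np.angle(h_fir))
    phase_iir = np.unwrap(np.angle(h_iir))
    for freq in (10.0, 150.0, 450.0):
        i = np.argmin(np.abs(w - freq))
        print(f"{freq:6.1f} Hz: FIR phase {phase_fir[i]:8.2f} rad, IIR phase {phase_iir[i]:8.2f} rad")
    ```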

  • Is it possible to display a waveform with fixed length and fixed starting point?

    Hi,
    I am using the DAQ Assistant to acquire voltage and current measurements from my device. The voltage is a pure sine wave and the current is a periodic waveform with a phase difference and distortion. I use waveform charts to display the voltage and current waveforms in separate charts and they work fine. But the waveforms look like they are moving to the right all the time; in other words, the phase is always shifting. Now I want to display both waveforms with a fixed length (say 2 cycles) and in the same chart. Apart from that, I also want to display the voltage waveform starting from 0 degrees (a fixed point) rather than moving all the time. In this case, I can observe the angle difference between voltage and current. Is there any method to achieve this?
    Many Thanks,
    Hao

    Hao,
    first of all, you are using a chart which has three options for updates if the chart is "full":
    Strip chart (default)
    Scope chart
    Sweep chart
    These are called "update mode". Test the modes yourself.
    Also you have to know that you will not likely have an integer number of periods of your signal in the display of the chart. Therefore, a continuous signal will "move" the graph from update to update.
    You can implement some algorithm to discard data to maintain a static "trigger" level for display, but as stated, it will leave gaps in the signal. These gaps are not a concern unless you use the displayed signal for analysis (e.g. FFT).
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
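    If you do go the "discard data until a trigger point" route Norbert describes, here is a rough sketch of the idea in NumPy (the sample rate and line frequency are assumptions, not values from Hao's VI): it finds a rising zero crossing of the voltage and returns a fixed two-cycle window of both signals from that point, so the displayed voltage always starts at 0 degrees.

    ```python
    import numpy as np

    fs = 10_000          # sample rate of the acquisition (assumed)
    f_line = 50.0        # fundamental frequency of the voltage (assumed)
    cycles = 2

    def two_cycle_window(voltage, current):
        """Return 2 cycles of both signals, starting at a rising zero crossing
        of the voltage, so the display always begins at 0 degrees."""
        rising = np.where((voltage[:-1] < 0) & (voltage[1:] >= 0))[0]
        if rising.size == 0:
            return voltage, current          # no trigger found, show raw data
        start = rising[0] + 1
        n = int(cycles * fs / f_line)
        return voltage[start:start + n], current[start:start + n]

    # Example with synthetic data: current lags the voltage by 30 degrees
    t = np.arange(fs) / fs
    v = np.sin(2 * np.pi * f_line * t)
    i = 0.7 * np.sin(2 * np.pi * f_line * t - np.pi / 6)
    v_win, i_win = two_cycle_window(v, i)
    print(len(v_win), "samples per displayed window")
    ```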

  • IMovie volume meter shows much red peaking but audio waveform shows none

    When I play my movie, the volume meter in iMovie shows peaking in the red (distortion) all over the place, but in the audio waveform there is not one place where it peaks into the red (or even the yellow).  The whole wave is very close to the 0 dB line.  It sounds great when I play it and still great after I export it.  But, once I upload it to YouTube, there is awful distortion all over the place, just like the volume meter in iMovie indicates.  I can't find any place to turn down the output so the volume meter doesn't peak, and I don't want to turn my audio waveform down below 0 dB and lose all of my volume.  I already tried clicking on Normalize Clip Volume in Audio Adjustments and that didn't seem to change anything; it is still peaking like crazy in the volume meter.  What can I do to get rid of the red peaking and distortion?
    Here's a little more info for anyone who can help with my problem.  I am using iMovie 11 on an i7 iMac with Lion.  I turned the audio of my video all the way down and imported another audio that had been compressed in Pro Tools to show no red peaks whatsoever.  Can someone please help?
    Thanks,
    Loribella

  • Why doesn't adding Waveforms with compound arithmetic work?

    Hello everyone,
    I have a program that adds up multiple waveforms and displays them all together as one distorted curve in a single waveform graph.    
    I've had to add them all together using simple add functions, since the compound arithmetic block doesn't work. 
    Why would this be the case? Isn't it essentially just doing multiple additions in a convenient way? I've attached a VI with a simple example to show what I mean.
    fr00tcrunch
    Solved!
    Go to Solution.
    Attachments:
    simple waveform addition.vi ‏33 KB

    1.  No need to wire up the N.  The autoindexing will tell how many iterations to perform.
    2. Use Delete From Array to remove one of the waveforms from the array.  Initialize the shift register with the deleted portion.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    waveform addition shift registers_BD.png ‏18 KB
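    As a text-language analogue of the shift-register loop (a hypothetical NumPy sketch, not the attached VI): initialize the accumulator with one waveform and add the rest to it one by one.

    ```python
    import numpy as np
    from functools import reduce

    # Three example waveforms of equal length standing in for the array of waveforms
    waveforms = [np.sin(2 * np.pi * f * np.linspace(0, 1, 1000)) for f in (3, 5, 7)]

    # Same idea as the shift-register loop: start with the first waveform,
    # then add the remaining ones element by element.
    total = reduce(np.add, waveforms[1:], waveforms[0])
    print(total.shape)
    ```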

  • Analog distortion and digital's lack thereof

    Djokes wrote:
    "analog is distorting - without a doubt - tape decks were like compressors in a way and that is what being simulated by e.g. mcdsp or crane song... However, using no eq or tape deck - the distortion is minimal.... but analog is distorting what is actually there - digital is not - cuz it aint "Seeing" it all....so the distortion will be false...And we dont need to start a discussion about that we like the distortion that analog brings - do we now?"
    Let's start one though shall we.
    First, I fail to see how digital isn't seeing it all, unless you're talking about inaudible frequency ranges, which I will admit may be more important than they seem. Digital lays it all on the table: no compensation, no analog-esque minimal distortion. So this would be a true picture of the audio, right?
    I mean, if you're talking about the nuances of the environment, then digital still captures that stuff as it's mic'd. The thing digital doesn't capture is the distortion signature of the analog device. Now if you've been trained over decades to appreciate that distortion, hearing digital, which lacks it, would of course lead one to say it sounds cold.
    The question is whether that distortion is the signature of quality music, or whether a talented producer/mixing engineer can create a signature digital distortion which is either as good as or better than analog. I'm a firm believer in the latter, though I will admit we haven't surpassed analog yet. But I also believe the answer isn't in trying to emulate analog, but rather in using digital for its own strengths to create something altogether new.

    Then we'd have to start a new topic called "what is accurate" :-))) Anyway, listening to a guy playing an acoustic guitar live would by many be considered a true source of audio. So if we have a mic that can actually capture the movements the gas molecules in the air make and translate them correctly into currents, then we can assume that at least the mic is giving us the true source. If the mic could translate the audio 100 percent accurately into current (which it can't), the mixing board would only "see" current (actually volts) and treat it as such. If we were to apply no EQs and no effects at all, the next step of inaccuracy would be your speakers, which have to translate the currents and make the membranes vibrate, which again compresses and rarefies the gas molecules in the air, thus making you able to hear it.
    The mixing board treats the current coming from the mic more or less continuously, the way it came from the mic; the digital converters only treat what comes from the mic 44,100 times per second. This is why analog is called analog and is seen as treating the signal the way we hear it.
    What do we hear, what is there, and what is our brain doing?
    The air that is set in motion by a source (whether a true source or speakers) hits our ears at the speed of sound. Our ears are like a mic: we pick up differences in the molecules, and our brain translates them into what we call sound. Digital audio actually exploits our imperfections; this is why we only need to sample 22,050 times per second (the hearing threshold). The actual reason for sampling at twice the frequency of hearing is the problem that causes the low-pass filters to be there in the first place (why the low-pass filters are needed would take too long to describe).
    The reason for 44,100 is the hearing threshold of 22,050 Hz. I don't really know anyone above thirty who can hear above 18,000 Hz and make use of what he is hearing, so why not sample at 16,000 Hz or 32,000 Hz? The reason is mirroring caused by imperfect filters, and a fact of physical law: there can be no 90° low-pass filter.
    An EQ works with phase displacement. What actually happens when you cut 4 kHz by 4 dB is that the original signal is phase-displaced at the frequency you enter, so those frequencies are amplified less. There are physical laws about waveforms: e.g. when two waves layer on each other at the same maxima you gain 6 dB, i.e. a doubling of the amplitude; when two waves meet where one has a maximum and the other a minimum, you get complete cancellation.
    What really happens when you talk about analog distortion is that the sine (waveform) is distorted so it looks more like a triangular waveform. This gives us a feeling of loudness and warmth.
    If you want your digital mix to sound loud, take the "beautiful" sine and turn it into a triangle by changing the gain. After that you will see the waveform being cut at the top and bottom, but it doesn't sound bad, only louder, unless you overdo it :-))))
    There's a lot more to it than what I just described, and it is quite interesting. If you go to amazon.com there are two kinds of books about this stuff: 1) books describing it in fragments, like I just did, and 2) books that really describe it according to physics. The latter are VERY dry, but if you take the time and have the interest you will find them really helpful, and they will also change the way you "look" at audio, mixing and recording.
    This is a huge topic that has consumed people since the day audio recording began.
    analog |ˈanlˌôg; -ˌäg| (also analogue) noun a person or thing seen as comparable to another : the idea that the fertilized egg contains a miniature analog of every adult structure. • Chemistry a compound with a molecular structure closely similar to that of another.
    G5 Dual 2.3 GHZ   Mac OS X (10.4.5)   PT HD3 / 4x ApoGee Rosetta 800 / 8GB RAM / 1TB HD
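    To put numbers on the loudness trick described above, i.e. clipping a sine and seeing which harmonics appear, here is a small illustrative NumPy sketch (the frequency, gain and sample rate are arbitrary assumptions, not values from this thread):

    ```python
    import numpy as np

    fs, f, n = 48_000, 440.0, 48_000
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f * t)

    clipped = np.clip(3.0 * x, -1.0, 1.0)   # "turn up the gain" and cut the top and bottom

    spec = np.abs(np.fft.rfft(clipped)) / (n / 2)
    for k in (1, 3, 5):                      # odd harmonics appear after clipping
        idx = int(round(k * f * n / fs))
        print(f"{k}x {f:.0f} Hz: amplitude {spec[idx]:.3f}")
    ```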

  • Fade causes distortion

    I'm trying out Soundbooth and find that when I add a fade to a track, it creates a high pitched distortion.  The harder the fade, the worse the distortion.  At first I thought it was the Dell soundcard, but it does the same when I use an external audio interface.
    For the internal card I used the WDM driver, and for the external interface I used the ASIO driver.
    Any ideas?

    Can you be more specific as to what you're doing and what this sound is?  For example, are you just using the fade in/out controls on the waveform to create a fade?  Do you just hear this artifact in playback mode, or is it actually being written to the file and audible if you play the new file in a different application?  Is the sound only audible for the period of the fade, or does it affect the entire file?

  • Adobe Premiere Pro audio distortion

    Hi,
    I have Adobe Premiere CS2 and some audio distortion problems. I captured footage from my Sony DV Digital-8 camcorder. It's standard DV footage with 16-bit PCM audio. When the audio is nearly peaking (not peaking), it gets distorted in Premiere, but it doesn't in other applications (VirtualDub, Media Player Classic, Windows Media Player, mplayer). I'm sure this is because of an internal conversion (called conforming) from 16-bit PCM to 32-bit floating-point PCM. The project setting of the audio track is 48000 Hz, 32-bit floating point, stereo for a DV project, even though the DV standard supports 16- or 12-bit audio tracks. In the user manual they write the following:
    "For maximum editing performance and audio quality, Adobe Premiere Pro processes each audio channel, including audio channels in video clips, as 32-bit floating-point data at the project’s sample rate. To do this it must conform certain types of audio to match the 32-bit format and the project sample rate."
    Well, as a professional video editor I can (and have the right to) decide which sample rate and bit depth are best for me, but they don't let me set the audio to anything other than the default; there is no bit-depth option available, not even when creating a NEW project. No problem if Adobe's genius developers thought 32-bit mixing provides much more quality, but if they forgot to implement the 16-bit to 32-bit internal conversion correctly, at least let me choose a fallback method.
    Thanks for your help
    stringZ
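    For reference, the conversion being argued about is just a scaling step. The sketch below (NumPy, purely illustrative; it is not Premiere's internal code) shows the usual way 16-bit PCM is conformed to 32-bit float and back without any level change or clipping:

    ```python
    import numpy as np

    pcm16 = np.array([0, 16384, 32767, -32768], dtype=np.int16)

    # Conforming 16-bit PCM to 32-bit float: divide by full scale so that
    # the int16 range maps into [-1.0, 1.0) with no change in level.
    as_float = pcm16.astype(np.float32) / 32768.0
    print(as_float)                       # approximately [ 0.   0.5  1.  -1. ]

    # Converting back for playback/export should not clip if nothing was boosted.
    back = np.clip(np.round(as_float * 32768.0), -32768, 32767).astype(np.int16)
    print(np.array_equal(back, pcm16))    # True
    ```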

    Just looking at the waveform confirmed my initial reaction: way, way oversteered. Everything is clipped and your best approach is to reshoot. That some players can handle this seriously mistreated audio may be caused by audio limiters. They should, otherwise you would blow up your speakers.
    Well, you can say that, and if this sample were music maximized with compressors it would distort too. The Sony Digital-8 camcorder has an internal limiter, so only soft peaking can occur, not serious distortion. Media players (e.g. mplayer) don't have limiters unless you specify an external limiter filter. I've already said this isn't a clip you want me to re-shoot; it is a sample I created to show you the problem. I sent it to one of my friends who has Ulead MediaStudio Pro 7.0, and it doesn't distort there either. Have you tried loading it into an audio editor like Audacity or Adobe Audition? You can see there is no hard peaking or distortion if you zoom in on the waveform.
    Waveform at 00:00:00:21. No distortion, only high levels can be seen on this waveform.
    Waveform after 00:00:00:21. Soft peaking in right (bottom) channel.
    I'd appreciate if someone took this seriously instead of repeating "reshoot it" and "high levels".

  • Why is the DAQ outputs distorted?

    I am trying to output two voltage sine waves (90 degrees out of phase) from my DAQ card. I use a For Loop to create two arrays of 1000 samples each. I then send each of these arrays to my DAQ card for output at 5000 S/s. One of the outputs (measured on an oscilloscope) is perfect, while the other clips and is distorted. Why am I not getting two perfectly good sine waves? Is this a hardware issue?
    Please find attached my LabView file.
    Thanks,
    Gilly
    Attachments:
    Sine_forloop_nearlydone.vi ‏91 KB

    I haven't looked at your VI yet, as I don't have 8.6 on this machine (please identify the version when attaching VIs), but I do have some questions. Are they both set to the same amplitude? Are you looking at them individually (i.e. a single-channel scope) or both channels at the same time? Are they both connected to the DAQ inputs in the same way? Usually, if there isn't an actual hardware problem with the outputs, distortion and clipping are caused by: 1) trying to output a waveform of an amplitude greater than allowed, or 2) trying to drive a seriously mismatched load. In the latter category, hooking up to the DAQ incorrectly might cause a problem.
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion
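    As a quick cross-check of the data before it ever reaches the card, the two arrays can be built and inspected like this (a hypothetical NumPy sketch; the 5 Hz tone frequency and 2 V amplitude are assumptions, not values from Gilly's VI). If neither array exceeds the card's output range here, the clipping is more likely an amplitude, wiring or load issue, as Putnam suggests.

    ```python
    import numpy as np

    fs = 5000            # update rate sent to the DAQ card (S/s)
    n = 1000             # samples per array, as in the original VI
    f = 5.0              # one full cycle per 0.2 s buffer (assumed tone frequency)
    amplitude = 2.0      # keep well inside the card's output range, e.g. +/-10 V

    t = np.arange(n) / fs
    ch0 = amplitude * np.sin(2 * np.pi * f * t)
    ch1 = amplitude * np.sin(2 * np.pi * f * t + np.pi / 2)   # 90 degrees out of phase

    # Sanity check before writing to hardware: nothing should exceed the AO range.
    print(np.max(np.abs(ch0)), np.max(np.abs(ch1)))
    ```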

  • Output signal distorted

    I have a pretty large VI which outputs high frequency signals to control some hardware and then takes an input from that hardware for image rendering.  My problem is that whenever I activate the imaging portion of my VI, the output signals become slightly distorted.  For example, a 1Hz sawtooth output doesn't ramp up smoothly but becomes slightly staggered.  This also messes up the frequency of the signal.  Does LabVIEW have problems outputting signals properly when another portion of the VI has a lot of cpu intensive processing?
    I am using the "Simulate Signal" block to create a 1 Hz sawtooth wave, a 128 Hz triangle wave, and a 65,536 Hz square wave, and I output these signals through the DAQ Assistant.

    Hi Laura,
    I went ahead and changed the 65,536 Hz square wave signal as the first input, but the three signals still do not output simultaneously.  Although the signals slightly change as I vary the Samples per Buffer and Cycles per Buffer, I can't seem to get a combination that is capable of outputting these two low frequency signals with my high frequency signal.  I would either get an error message about "DAC conversion attempted before data to be converted was available," or there would be no error message but the signal output didn't match the given specifications.  Attached is the vi with the modifications.  Presently, there is no error message with the given sampling information, however there is no square wave output at all, and the sawtooth waveform only shows a partial amount of the signal.  I had to make some slight modifications to the Waveform Buffer Generation vi in order to allow me to change phase, duty cycle, and offset of my square wave.  I don't think this would have any effect on the Cont Gen Voltage Wfm vi though.  Any ideas what's going on?
    Thanks a bunch,
    Anthony
    Attachments:
    Cont Gen Voltage Wfm-Int Clk multiple waveforms2.vi ‏121 KB
    Waveform Buffer Generation.vi ‏79 KB
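    One way to attack the buffer-sizing problem is to pick an update rate that is an exact multiple of all three frequencies and a buffer that holds whole cycles of every signal, so nothing is discontinuous when the buffer repeats. A rough NumPy sketch of that idea (the 262,144 S/s rate is an assumption, chosen only because it divides evenly by 1, 128 and 65,536 Hz; it is not taken from the attached VIs):

    ```python
    import numpy as np

    fs = 262_144       # assumed update rate: an exact multiple of all three frequencies
    n = fs             # one-second buffer -> whole cycles of 1, 128 and 65,536 Hz
    t = np.arange(n) / fs

    sawtooth = 2.0 * ((1.0 * t) % 1.0) - 1.0                          # 1 Hz sawtooth, +/-1
    triangle = 2.0 * np.abs(2.0 * ((128.0 * t) % 1.0) - 1.0) - 1.0    # 128 Hz triangle
    square   = np.where((65_536.0 * t) % 1.0 < 0.5, 1.0, -1.0)        # 65,536 Hz square

    buffer = np.vstack([sawtooth, triangle, square])
    print(buffer.shape)   # (3, 262144): every channel contains an integer number of cycles
    ```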
