Samples/sec == Hz?

I was wondering whether Samples/sec is equal to the Rate field (Hz) in MAX (Test Panels).

A hertz is one cycle of any periodic event per second, so samples per second is the same thing as Hz: acquiring at 1000 samples/sec, for example, means the sample clock runs at 1000 Hz (one sample every 1 ms).
-Devin
I got 99 problems but 8.6 ain't one.

Similar Messages

  • FS: PCI-6024E DAQ card for sale - 200,000 samples/sec - 16 inputs - GREAT DEAL!

    Hello everyone,
    I have a PCI-6024E DAQ card for sale.
    If you want more information, please contact me - I have the following items, selling as a set:
    * PCI-6024E Multi I/O and PCI DAQ card, low-Cost E Series Multifunction DAQ, 12-Bit, 200 kS/s, 16 Analog Inputs ($595/list)
    Thanks,
    Dave M
    [email protected]

    I'm sorry, I should have put in the post that I will entertain offers. Please make an offer.
    Regards,
    Dave M.
    "DaveM" wrote in message news:RJErb.7470$[email protected]..
    > Hello everyone,
    >
    > I have a PCI-6024E DAQ card for sale.
    >
    > I am sorry if this is off topic - if someone knows what forum to sell NI hardware please let me know.
    >
    > If you want more information, please contact me - I have the following items, selling as a set:
    >
    > * PCI-6024E Multi I/O and PCI DAQ card, low-Cost E Series Multifunction DAQ, 12-Bit, 200 kS/s, 16 Analog Inputs ($595/list)
    >
    > * CB-68LPR (184700B) connector block ($95/list)
    >
    > * 182482A-01 type R6868 1meter cable ($40/list)
    >
    > Thanks,
    >
    > Dave M
    > [email protected]

  • Number of samples required

    Hi to all the expertise,
    In my LabVIEW 6i program, I set a scan rate and a number of scans to acquire. Now I want to read 100 samples, meaning that on my graph I want to see a waveform of 100 samples. How should I configure it so that I'm able to achieve this?
    Kindly advise.
    Thanks

    Simple as it is:
    If it is a finite acquisition, keep the scan rate at 100 samples/sec and the number of samples to read at 100 samples.
    Does this answer your query?
    If not, get back with more details.
    regards
    Dev
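
    For comparison, here is the same finite acquisition sketched with the NI-DAQmx C API (the thread itself uses Traditional DAQ VIs in LabVIEW 6i, so this is only an illustrative sketch, not the poster's code; the channel name Dev1/ai0 is an assumption):

        /* Finite acquisition: 100 samples at 100 S/s on one channel (NI-DAQmx C API).
           "Dev1/ai0" is an assumed channel name; error checking omitted for brevity. */
        #include <NIDAQmx.h>

        int main(void)
        {
            TaskHandle task;
            float64    data[100];
            int32      read = 0;

            DAQmxCreateTask("", &task);
            DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                     -10.0, 10.0, DAQmx_Val_Volts, NULL);
            /* 100 S/s, finite, 100 samples -> exactly 1 s of data, 100 points on the graph */
            DAQmxCfgSampClkTiming(task, "", 100.0, DAQmx_Val_Rising,
                                  DAQmx_Val_FiniteSamps, 100);
            DAQmxStartTask(task);
            DAQmxReadAnalogF64(task, 100, 10.0, DAQmx_Val_GroupByChannel,
                               data, 100, &read, NULL);
            DAQmxClearTask(task);
            return 0;
        }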

  • Unable to acquire 4 channels (2048 samples to be read). PCI 7030/6040E

    I am using the PCI 7030/6040E DAQ card (LabVIEW 6.1, Windows 2000, Pentium). I want to acquire 4 channels of data (2048 samples from each channel) at a 6.5 kS/s sampling rate. When I do this using AI Acquire Waveforms.vi, the program gets stuck. When I use continuous buffered acquisition with the intermediate VIs, a data overwrite error occurs. (I am taking the FFT of the data and plotting it as four waterfalls using an intensity chart.) The error says that I am not reading data from the buffer fast enough, so it gets overwritten. How can I solve this problem? Can I get data at this rate using this card? If so, what should the buffer size be, etc.? (I want continuous acquisition.)

    Thanks. I am attaching the code, Buffered.vi and non_buffered_ni.vi; neither works. The second one gets stuck and the first one gives an overwrite error. Buffered.vi is actually what I want, because I need continuous acquisition; I tried the second one just to see whether it would work. As I said, I want a sampling rate of 6500 samples/sec and I want to take 2048 samples from each channel to compute an FFT and plot a waterfall. I hope this gives a clear picture.
    Attachments:
    Buffered.vi ‏183 KB
    non_buffered_ni.vi ‏148 KB
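
    A rough sketch of the continuous, buffered side of this in the NI-DAQmx C API (the thread uses Traditional DAQ intermediate VIs, so this only illustrates the idea: make the input buffer several blocks deep and read every block before it can be overwritten; the channel list Dev1/ai0:3 is an assumption):

        /* Continuous acquisition: 4 channels at 6.5 kS/s, reading 2048 samples per channel
           per loop iteration.  "Dev1/ai0:3" is an assumed channel list; no error checking. */
        #include <NIDAQmx.h>

        #define CHANNELS 4
        #define BLOCK    2048                    /* samples per channel per read */

        int main(void)
        {
            TaskHandle task;
            float64    data[CHANNELS * BLOCK];
            int32      read;
            int        i;

            DAQmxCreateTask("", &task);
            DAQmxCreateAIVoltageChan(task, "Dev1/ai0:3", "", DAQmx_Val_Cfg_Default,
                                     -10.0, 10.0, DAQmx_Val_Volts, NULL);
            DAQmxCfgSampClkTiming(task, "", 6500.0, DAQmx_Val_Rising,
                                  DAQmx_Val_ContSamps, BLOCK);
            /* Buffer ~10 blocks deep so the FFT/waterfall code has slack before overwrite */
            DAQmxCfgInputBuffer(task, 10 * BLOCK);
            DAQmxStartTask(task);

            for (i = 0; i < 100; i++) {          /* read 100 blocks, then stop */
                DAQmxReadAnalogF64(task, BLOCK, 10.0, DAQmx_Val_GroupByChannel,
                                   data, CHANNELS * BLOCK, &read, NULL);
                /* FFT and plotting go here; at 6.5 kS/s a block arrives every ~315 ms,
                   so the processing must stay comfortably under that. */
            }
            DAQmxClearTask(task);
            return 0;
        }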

  • Count the # of times the instr executes in 1 sec.

    Dear all,
    I am currently working on part of a sweeping carrier program and now I need to determine the # of samples generated in 1 sec. I tried connecting a 1 sec elapsed timer to the loop condition of the while loop, and also an indicator to monitor the n value of the count. The result I obtained is quite low (about 35377 samples/sec); I need more than 781250 samples/sec because I need to feed my AWG hardware later.
    Basically, to monitor the # of samples generated per second, I wonder if I have connected the timer and the indicator correctly. If my program really can only generate 35377 samples/sec, please advise me how I can modify it to generate more than 781250 samples/sec.
    I have attached my program as below.
    For you info, I am using the following sets of parameter:
    Sweep rate = 50
    Upper limit of span = 200
    Lower limit of span = -200
    start frequency = 0
    Thanks a lot and please help.
    Zhi Hong
    Message Edited by Zhi Hong on 07-16-2008 10:42 PM
    Attachments:
    Sweep rate and Span 52.vi ‏97 KB

    I ran this on a fairly old machine and got about 8800 iterations.
    I would recommend that you rewrite your VI so that the values are calculated using LabVIEW primitive math functions rather than using the Formula node.  I believe the Formula node is probably an order of magnitude slower.
    You can probably simplify the code a bit.  The only difference between the left and right cases is the choice of either addition or subtraction in the first line.  You could merge the two cases.  Just carry a boolean, or an integer of +1 or -1, in a shift register that you can multiply by the second half of that formula and add to the first half.
    Also, if you can figure out how many samples you will be generating ahead of time, initialize an array at least that large and use Replace Array Subset in the loop rather than autoindexing at the tunnel.  This may save processor time as the arrays grow and are moved around in memory at the auto-indexing tunnels.
    Are these calculations that have to be done on a continual basis to feed the waveform generator?  Or will they be done once at the beginning, or occasionally during operation?  Is it really critical that hundreds of thousands of elements of an array are generated in less than a second?
    Message Edited by Ravens Fan on 07-17-2008 12:25 AM
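
    The two suggestions above, sketched roughly in C (the real VI uses LabVIEW primitives and a shift register; every name here is invented purely for illustration):

        /* 1) Merge the "left"/"right" cases into one formula by carrying a +/-1 sign.
           2) Pre-allocate the output array once instead of growing it every iteration
              (the equivalent of Replace Array Subset vs. an auto-indexing tunnel). */
        #include <stdlib.h>

        double *generate_sweep(size_t n_samples, double start, double step, int sweep_up)
        {
            double *out;
            double  sign  = sweep_up ? 1.0 : -1.0;  /* carried instead of duplicating cases */
            double  value = start;
            size_t  i;

            out = malloc(n_samples * sizeof *out);  /* allocated once, never resized */
            if (out == NULL)
                return NULL;
            for (i = 0; i < n_samples; i++) {
                value += sign * step;               /* the only difference between the cases */
                out[i] = value;                     /* overwrite in place, no appending */
            }
            return out;
        }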

  • PCI-MIO-16E4: simultaneous or sequential sampling of channels??

    I am using a PCI-MIO-16E4 board to sample
    pressure transducers on four different channels. I can't seem to work
    out from the literature whether this board samples all four channels
    together, each scan, or if it samples them one at a time.
    I am hoping it samples them simultaneously. Although the Labview
    measurements manual definitely says "some" DAQs sample simultaneously,
    it does not go on to explain which ones!
    If it cannot sample simultaneously, then I guess the shortest period of
    time between each of the four samples is dictated by the 250,000
    samples/sec rate of the board, i.e. 0.004 ms?? This amount of delay
    might be ok, but I'd much rather be sampling all four at the same time. Anyone care to enlighten me?!
    Thanks
    Theo

    Matt, that's great, cheers for the explanation. I assume my calcs above
    are right, so yes, the delay is very small. I'm actually measuring
    variation of in-cylinder pressure in a four cyl diesel engine. The four
    channels are read every half degree of engine rotation - you can
    imagine that at any appreciable speed half a degree of rotation doesn't
    take a very long time! So, short as the delay may be, I estimate that
    the period of time to read four channels is over 15% of the scan
    period, even at a fairly low speed ~ 1000rpm.  I'd really like all
    four channels to be read at exactly the beginning of each scan, but
    since that clearly requires a different board, I don't think that's a
    short term option!
    On a related note, I seem to be having a very strange problem with how
    the four channels are being written to the buffer - if I use the AI
    Read vi to output what is written to the buffer each revolution, and
    then output that to a spreadsheet or similar, I'd expect to see four
    columns, one for each channel, each column being 720 elements long, for
    the 720 half-degree pressure readings in each rev. BUT, for some
    unknown reason, I am presented with 16 columns, with the first channel
    data being duplicated four times, then the second, then the third and
    fourth. However many of my channels I specify to read, the output is
    always duplicated four times over, so reading three channels would
    result in a 12-column output, for example. I've checked the shaft
    encoder output with a scope and it's perfectly in line with the engine
    (I thought perhaps it was giving four times too many pulses per rev), and
    I've experienced the same problem whether I use one of my own VIs or one
    of the NI examples, so I begin to wonder if this is something more deeply
    rooted in the DAQ? After all, what I'm
    seeing in the output of the AI Read is just what is being taken from
    the DAQ and written to the computer buffer, so it's hard to
    believe this is a Labview/software problem...
    Any thoughts??
    thanks very much
    Theo
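
    A quick sanity check of the numbers quoted in this thread, assuming the four channels are multiplexed at the board's maximum 250 kS/s conversion rate:

        inter-channel delay          = 1 / 250,000 S/s = 4 µs  (= 0.004 ms)
        span of one 4-channel scan   = 3 gaps × 4 µs = 12 µs  (16 µs counting a full 4-sample window)
        0.5° of rotation at 1000 rpm = (60 s / 1000 rev) / 720 ≈ 83 µs
        fraction of the scan period  ≈ 12 / 83 ≈ 14%  (up to ≈ 19% for the full window)

    so the "over 15%" estimate above is in the right range.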

  • Analog out DMA performance problems

    I'm working on an open-source driver for m-series and e-series boards (http://www.comedi.org). I've discovered some performance problems doing dma to analog outputs that I can't resolve. In summary, dma transfers to the analog output of a PXI-6281 in a pxi crate being controlled through a mxi-4 connection (pxi-pci8336) are VERY slow. I'm talking 250k samples/sec slow. That's the maximum speed the dma controller can fill the board's analog output fifo from host memory. I've also got an older PXI-6713 in the same crate, and dma transfers to it are about 15 times faster (about 3.5M samples/sec). I did notice that clearing the dma burst enable bit in the mite chip's channel control register caused the 6713 to slow way down to something comparable to the 6281 (about 500k samples/sec). Setting or clearing the burst enable bit had no effect on the speed of the 6289. Is there some special mojo that needs to be done to enable burst transfers on the 6289? Also, even the relatively speedy 6713 does dma transfers much slower than it should, since the pxi-pci8336 advertises 80MB/sec sustained transfer rates over mxi4. Can you provide any insight into this matter? I've already looked through the ddk, a register-level document describing the mite chip, and example code which had chip objects for the mite and an analog input example.
    By the way, dma transfers for analog input on the 6281 weren't as bad; I didn't measure the transfer time, but I was at least able to do input at 500k samples/sec without fifo overruns.
    I'll post more detailed performance measurements in a subsequent post, and include measurements for a couple of other similar pci boards (a pci-6289 and pci-6711). In case you're wondering, neither of the pci boards gets anywhere close to the bandwidth provided by the pci bus, but they're not as spectacularly bad as the pxi-6281.

    Here are my measurements:
    PCI-6711, tested on 1.4GHz Pentium 4:
    5.2 to 5.3 milliseconds to load fifo to half-full using dma. 0.9 to 1.0 microseconds to write to a 16-bit register. 1.9 to 2.1 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect.
    PXI-6713, tested on 3.2GHz Pentium D:
    2.2 to 2.4 milliseconds to load fifo to half-full using dma. 0.5 to 0.7 microseconds to write to a 16-bit register. 5 to 7 microseconds to read from a 16-bit register. Turning off the mite's burst enable bit causes the dma fifo load time to increase to 16 to 17 milliseconds.
    PCI-6289, tested on 3GHz Pentium 4:
    2.0 to 2.2 milliseconds to load fifo to half-full using dma. 0.4 to 0.6 microseconds to write to a 16-bit register. About 1.2 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect. I could do streaming analog output on 1 channel with an update rate of about 2.1MHz before the board's fifo started to underrun.
    PXI-6281, tested on 3.2GHz Pentium D:
    18 to 19 milliseconds to load fifo to half-full using dma. 0.3 to 0.4 microseconds to write to a 16-bit register. 4 to 6 microseconds to read from a 16-bit register. The mite's burst enable bit has no effect. I could do streaming analog output on 1 channel with an update rate of about 250kHz before the board's fifo started to underrun.
    Notes: the 671x boards have a 16k sample ao fifo, the 628x boards have 8k.
    The 4 to 7 microsecond times to read a register on the PXI boards seem large too; is that normal overhead for going over the mxi-4 connection?
    I wasn't doing anything else intensive on the pci bus during these tests. For what it's worth, according to pci specs the two pci boards should be able to dma their analog output fifos to half full in less than 150 microseconds.
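
    For reference, the 150 µs figure is consistent with the FIFO and bus numbers above, assuming a 32-bit/33 MHz PCI burst at its theoretical 132 MB/s:

        671x: half of the 16k-sample ao fifo = 8192 samples × 2 bytes = 16 KB  ->  16 KB / 132 MB/s ≈ 124 µs
        628x: half of the 8k-sample ao fifo  = 4096 samples × 2 bytes =  8 KB  ->  ≈ 62 µs

    both far below the millisecond-scale load times measured above.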

  • USB-6009 pulse train generation with digital output....

    Hello!
    I've bought a new USB NI card (USB-6009) and now I'm trying to adapt an old VI that uses Traditional DAQ drivers. I wrote that VI for a PCI NI card (PCI-6024E), which has two counters to generate two pulse trains simultaneously. Now I have only one counter, and that's why I'm searching for a good way to create pulse trains using a digital output! The pulse trains both range between 100 Hz and 100 kHz.
    I'm sure somebody has an idea how I can solve the problem in the best way
    Kind regards,
    Peter

    You can't do it with this low-cost board. Both digital and analog outputs are software-timed only. The analog out is rated at only 150 samples/sec and the digital is about the same. You can't even use the counter because it is not a hardware-timed counter output; it is an event counter, input only.
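
    To illustrate what "software timed" means here: every digital sample has to be written by a separate driver call from the host, so the output rate and jitter are set by the operating system's scheduling rather than by a hardware clock. A minimal NI-DAQmx C sketch (the line name Dev1/port0/line0 is an assumption, and the achievable toggle rate will be nothing like a clean 100 Hz-100 kHz pulse train):

        /* Software-timed "pulse train" on one DO line: the period comes from a sleep in
           the host loop, so it is limited and jittered by the OS, not a sample clock. */
        #include <NIDAQmx.h>
        #include <unistd.h>          /* usleep(); on Windows use Sleep() from <windows.h> */

        int main(void)
        {
            TaskHandle task;
            uInt8      level = 0;
            int32      written;
            int        i;

            DAQmxCreateTask("", &task);
            DAQmxCreateDOChan(task, "Dev1/port0/line0", "", DAQmx_Val_ChanPerLine);
            DAQmxStartTask(task);
            for (i = 0; i < 1000; i++) {
                level ^= 1;                               /* toggle the line */
                DAQmxWriteDigitalLines(task, 1, 1, 10.0, DAQmx_Val_GroupByChannel,
                                       &level, &written, NULL);
                usleep(500);   /* ask for ~1 kHz toggling; actual timing is at the OS's mercy */
            }
            DAQmxClearTask(task);
            return 0;
        }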

  • iPhone 5s BLE update rate

    This is a copy of a post I just put on TI's E2E forum...
    I just got an iPhone 5s so I could test compatibility with a custom app/custom BLE circuit...
    Note: after fixing some iOS7 issues, everything works fine on ALL BLE capable iPad and iPhone devices from Apple.
    Again, all devices (except iPhone 5s) work with iOS5, iOS6 and iOS7. (just to be completely accurate, iPhone 5 and iPad 4 and mini's were not tested with iOS5)
    My apps are using multiple BLE devices connected simultaneously.
    When testing with the new iPhone 5s (note also that I tested with a second iPhone 5s just to make sure it was not specific to my phone), the sampled data seemed to be rather "sluggish" and in fact would continue to be collected by the iPhone 5s even after the input from the transmitting devices ceased. (apparently caching)
    Another note: with 3 devices transmitting simultaneously, I can capture sample rates up to 50 samples/sec for each transmitting device with the iPhone 5 and the iPad 4, and when I increase the sample rate on the transmitting devices (tested up to 1000 samples/sec) the iOS devices "topped out" at approx. 50 samples/sec, which is acceptable for my current application, so I set the BLE transmitting devices to 50 samples/sec to conserve battery life.  The indicators on the app which are affected by the sample rate are responsive and (snappy) as the data changes, and when sending more than 50 samples/sec the "extra" data is apparently being disregarded (dropped), as the "real time" indicators are using the most recently transmitted data with no apparent "caching".
    Note: apps were compiled with xcode5 in both iOS6 and iOS7 - no difference in testing.
    When testing with only one device, transmitting at 50 samp/sec, if I continue transmitting long enough, I can get the BLE connection to fail (overrunning a buffer?).  I have to lower the transmission rate down to 20 samples/sec or lower to let the iPhone 5s "keep up".  This is rather disappointing, as the iPhone 5s (64 bit) is supposed to be so much faster, and I fear that the new iPad to be announced this month may have the same issues if it follows the iPhone 5s architecture.
    I downloaded the current iOS keyfob app from TI's wiki (remember to change  type:CBCharacteristicWriteWithoutResponse]; to  type:CBCharacteristicWriteWithResponse];), and got 2 of my TI keyfobs and programmed them with the latest "keyfob" code (BLE-CC254x-1.3.2 stack), one with no changes (ACCEL_READ_PERIOD  50) 20 samples/sec, and the other one changed to (ACCEL_READ_PERIOD  20) 50 samples/sec. - you can even change to a faster sample rate to see a more dramatic depiction of what I'm describing.
    Connect each of the keyfobs in turn and shake them and watch the "bars" and you can see the "delayed" reaction - (cached).  The 20 samples/second keyfob is more responsive but you can still see a small amount of delay even down to 20 samples/sec.
    I also got out 2 of my "SensorTags" and repeated the same tests with the same results, just to see if there was something in the older keyfob code that might be causing this vs. the newer-style coding the SensorTags utilize, but again, the results are the same, which points me back to the iPhone 5s.
    I can't find anything from Apple's side about the way the 5s may be different but it appears that the 5s caches BLE data and all the rest of Apple's previous devices apparently do not.  Also, I could not find anything in the iOS or xcode side that gives any options for turning caching on or off (if that is indeed what is really happening), but even if that were the case, the exact same code (iOS) can be run on all devices with only the 5s being different.
    Any thoughts out there?
    I think that this should be important to anyone out there who plans on collecting "real time" data and wants to be able to support the latest hardware...
    I'd be happy to give more detail on any of the tests I've done if you are unable to replicate this.
    Thanks,
    evw

    I just got the iPad Air and can confirm that it behaves identically to the iPhone 5s as described above...
    Is anyone out there concerned about this?
    I've submitted bug reports, informed DTS, and posted to TI's E2E, but still nothing...

  • Premiere Pro with Blackmagic Intensity Pro; Importer

    I'm using a Blackmagic Intensity Pro with Premiere Pro. I don't import anything with it directly, I just use it to preview my timeline on a separate monitor.
    The issue I have is when I try to import an AVI file into my project.
    Before I had the Intensity Pro, Premiere would import the file as a Microsoft AVI and it worked perfectly.
    But now the exact same file is imported as a Blackmagic AVI file, and I am experiencing green bars in my video and the color information is totally screwed up.
    So my question is: can I deactivate or change the import settings of Premiere without having to remove the card completely (turn PC off, remove card, turn back on...)?

    Ok, my workflow is:
    I'm importing a video file from my hard drive via double-click or drag'n'drop. I don't capture from a video source with the Intensity Pro!
    The specs of the video file according to Premiere are:
    Type: Blackmagic AVI File
    File Size: 1,7 GB
    Image Size: 1024 x 768
    Frame Rate: 18,00
    Source Audio Format: 48000 Hz - 16 bit - Stereo
    Project Audio Format: 48000 Hz - 32 bit floating point - Stereo
    Total Duration: 00;15;18;28
    Average Data Rate: 1,9 MB / second
    Pixel Aspect Ratio: 1,0
    ############### BMD Importer Supplemental ###############
    AVI File Details:
    Timecode:
    Reel name: ''
    User timecode:
    Alternative reel name: ''
    Contains 1 video track(s) and 1 audio track(s).
    Interleave: 1 : 1.00 (conforming required)
    Video Track 1:
    Size is 1.49G bytes (average frame = 94.18K bytes)
    There are 16541 keyframes.
    Frame rate is: 18.0000 fps
    Frame size is: 1024 x 768
    Video format is 'MJPG' 422 8-bit ITU-R 709
    Audio Track 2:
    Size is: 168.2637M bytes
    Rate is: 48000 samples/sec
    Sample size is: 16 bits
    I've tried importing the video file on a friend's PC with Premiere Pro 5.5 and there the file works just fine, so I'm assuming that it has to do with the Intensity Pro...
    It has the same properties except under Type it says "Microsoft AVI File" and everything beneath Pixel Aspect Ratio isn't there.
    Thanks

  • Why do I get crashes when setting chart history length

    I am trying to figure out how to get my chart to eat up less memory. At the moment it is capturing from 4 channels at 44100 samples/sec. It is a continuous capture, and after a few minutes the computer just runs out of memory and can no longer keep up with the capture, at which point the VI quits. Opening up Task Manager, I noticed that as I capture, the memory gets chewed up really quickly, but when I stop my VI the used memory is not reclaimed, even if I set the chart history to an empty array when the VI stops. I tried setting the chart history to a smaller number, but whenever I set it below 512 the whole computer crashes.

    I have seen similar problems. I created a simple VI to test this and am continuously getting LabVIEW crashes when I try to write an intensity chart history to an array.
    The first frame of the VI sequence generates a 100-element random number array for the intensity chart display. The next frame in the sequence has the property node for the chart history (of the intensity chart) set to read mode, and it is wired to a two-dimensional array.
    So, I generate pseudo chart data and then try to recover the history into an array. It crashes every time, no matter where I set the chart history length. I attempted to do as you had previously suggested and store the data to a file. Same response... crashes LabVIEW. Local and online help have not been very helpful. Will search the hardcopy literature now.
    thanx
    WLS
    Attached VI
    Attachments:
    history_grabber.vi ‏22 KB

  • NI 5122: Use of functions that manipulate attributes in NISCOPE

    Hi, all
    I would like to first thank Alan L for responding to my last message. It was helpful.
    I am currently using an NI 5122 to sample data sets, and EACH set consists of 400 triggered records,
    and each record contains 1024 points (so this 1024 x 400 matrix will constitute a single image).
    The sampling rate is 33 MHz (there is a reason for choosing this sampling frequency, so please do not
    suggest increasing the sampling frequency as a solution).
    Since the trigger occurs at 10 KHz, it will take 40 milliseconds to acquire
    a data set which corresponds to a single frame of image.
    I am trying to configure my program (I am using VC++) in such a way that I fetch the data
    from the on-board memory of the digitizer to the main memory of the host computer and perform DSP
    on each triggered record while sampling, rather than waiting for the entire data set (1024 x 400) to be collected.
    The frequency of the trigger signal is 10 kHz, meaning that I have 100 usec for each triggered
    record. Since I am using approximately 31 usec to sample the data, I have about 69 usec of idle
    time between triggered records. So, I have attempted to utilize those idle periods.
    I have looked at "Acquiring data continuously" section of  "High Speed Digitizer Help" manual.
    From there, I found out that I can fetch triggered records while sampling is still going on.
    The manual suggests playing with the following attributes,
    NISCOPE_ATTR_FETCH_RECORD_NUMBER
    NISCOPE_ATTR_FETCH_NUM_RECORDS
    with the family of
    niScope_SetAttributeXXX and niScope_GetAttributeXXX functions.
    I have attempted to change the value of those attributes but
    got the following error:
    "The channel or repeated capability name is not allowed." This error also occurred
    when I attempted to just READ! (The functions I mentioned above appear immediately
    before the niScope_InitiateAcquisition function in my program.)
    I have also looked at the accompanying c example codes to remedy this,
    but found a strange thing in the code. Within the example which uses
    niScope_SetAttributeViInt32, the parameter channelList is set to VI_NULL
    instead of "0", "1" or "0,1".  Why?
    As I mentioned earlier, I can get a single frame of the image every 40 millisec
    (25 frames/sec) if everything works as I planned.  Without the fetching portion of the
    code, my program currently generates about 20 frames/sec, but when I include
    the fetching code, the frame rate decreases to 8 frames/sec.
    If anybody has a better idea of reducing fetching time than the one I am using,
    please help me.
    Godspeed
    joon

    I would like to thank you (Brooks W.)  for the reply.
    I think I have stated that "my program generates 20 fps if the fetching portion of the code is omitted."  As I mentioned earlier, I am developing my own application software using VC++.
    I am already using niScope_FetchBinary16, which you suggested in your reply.
    Here is a full disclosure of the issues I am experiencing when fetching triggered records from the 5122. I initially wrote a simple piece of code which runs in the int main() function and profiled the time used to fetch data using niScope_FetchBinary16. The rate was 23.885714 million samples/sec. However, when I integrated the exact same piece of code into my Win32 app, the rate went down to 8.714891 million samples/sec. My PCI link is running at 33 MHz, so the PCI bus clearly has nothing to do with this problem.
    I have been looking through the NI Discussion Forums to find an answer for this and found a person (look at jim_monte's thread "Improving NI-SCOPE performance") who is experiencing a similar kind of problem. He noticed while executing his program that what appear to be unnecessary DLLs are being loaded.
    Is my problem caused by something that jim_monte suggests or do you have any other explanation to my issue?
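
    A minimal outline in C of the fetch-while-acquiring pattern discussed above, based only on the attributes and functions named in this thread. The key point for the "channel or repeated capability name is not allowed" error is that the fetch attributes are instrument-level rather than channel-based, so the channel string is passed as VI_NULL (which is presumably why the shipping example uses VI_NULL instead of "0" or "0,1"). Exact parameter types should be checked against the NI-SCOPE C header; treat this as a sketch, not working code:

        /* Fetch each triggered record as soon as it completes, instead of waiting for
           all 400 records.  "vi" is an already-configured NI-SCOPE session, channel "0". */
        #include "niScope.h"

        void fetch_records_incrementally(ViSession vi, ViInt32 numRecords, ViInt32 recordLength,
                                         ViInt16 *image)   /* numRecords * recordLength samples */
        {
            struct niScope_wfmInfo info;
            ViInt32 rec;

            niScope_InitiateAcquisition(vi);
            /* Fetch attributes are not channel-based: pass VI_NULL, not "0" or "0,1". */
            niScope_SetAttributeViInt32(vi, VI_NULL, NISCOPE_ATTR_FETCH_NUM_RECORDS, 1);
            for (rec = 0; rec < numRecords; rec++) {
                niScope_SetAttributeViInt32(vi, VI_NULL, NISCOPE_ATTR_FETCH_RECORD_NUMBER, rec);
                niScope_FetchBinary16(vi, "0", 5.0, recordLength,
                                      &image[rec * recordLength], &info);
                /* Per-record DSP on image[rec * recordLength ...] runs here, inside the
                   ~69 usec idle window between triggers. */
            }
        }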

  • Generated Pulse waveform is distorted when I deliver the signal to the output port in the DAQmx

    Problem: The generated pulse waveform is distorted when I deliver the signal to the output port of the DAQmx device.
    Environment: Windows XP SP3 (32-bit), Visual Studio 2010 SP1, NI Measurement Studio 2010
    Device: NI-DAQmx PCI-6251
     Analog Input: 1.00 MS/s multi-channel (aggregate)
     Analog Output: 2 channels, 2.00 MS/s
    Reference Example: AO_ContGenVoltageWfm_IntClk / AI_ContAcqVoltageSamples_IntClk
    Generated Pulse:
    1) AO0 = square waveform / 0-5 V / 8 kHz / 0.5 µs/sample / 50% duty cycle
    2) AO1 = square waveform / 0-5 V / 8 kHz / 0.5 µs/sample / (inverted)
    Description: I’d like to deliver a waveform stream that satisfies the constraints below to the two analog output channels of the DAQmx device. To verify the accuracy of the generated waveform, I wired the two analog output channels back into two analog input channels of the same board. The result of this loopback experiment shows signal distortion. Since the waveform has to satisfy both a high frequency (8 kHz) and a very short interval between samples (Δt = 0.5 µs/sample), I cannot freely change some of the parameters of the functions in the referenced VC++ examples. The following formulas show my approach to delivering the generated pulse waveform to the output ports within these constraints.
    Analog Output Channel
     Frequency = 8,000 cycles/sec (constraint)
     Samples per Buffer = 2,000,000 = 2×10^6 samples/buffer
     Cycles per Buffer = 80,000 cycles/buffer
     Samples per Channel = 1,000,000 = 1×10^6 samples/channel
     Sample Rate = Frequency × (Samples per Buffer / Cycles per Buffer)
                 = 8,000 × (2×10^6 / 80,000) = 2×10^6 samples/sec
     Δt = 1 sec / (2×10^6 samples/sec)
        = 0.5×10^-6 sec/sample (constraint)
     Buffer Cycle = Sample Rate / Samples per Channel
                  = (2×10^6 samples/sec) / (1×10^6 samples/channel)
                  = 2 buffers/sec
    Analog Input Channel
     Samples per Channel = 1,000,000 = 1×10^6 samples/channel
     Sample Rate = 1 MS/s / 2 channels = 5×10^5 samples/sec
    Program Code
    AO_ContGenVoltageWfm_IntClk / AI_ContAcqVoltageSamples_IntClk (VC++ Example)
    Result: The proposed approach was implemented in the experiment environment (VS2010, MStudio2010). As shown in Figure 1, the result was not satisfactory. Although I intended to generate a 'square' pulse wave, the result looks like a 'trapezoid' pulse wave (Figure 1). However, there is another result, taken with different parameter conditions, which does look like a square rather than a trapezoid (Figure 2).
    Please let me know which conditions cause this signal distortion. (AO0 = green line / AO1 = red line)
    [Figure. 1] Frequency 8000 Hz / Cycle per Buffer = 8000 Result
    [Figure. 2] Frequency 1000 Hz / Cycle per Buffer = 1000 Result
    Questions: Please help me with the following questions.
    1) Is it possible to deliver the generated pulse wave, satisfying the constraints (f = 8 kHz, Δt = 0.5 µs/sample), to the output port without distortion using the PCI-6251?
    (Is it possible to solve the problem if I use LabVIEW or MAX?)
    2) Are there any mistakes in the proposed approach? (H/W or S/W program)
    3) What is the meaning of Cycles per Buffer? Could it affect the result?
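
    For reference, a minimal sketch of the corresponding two-channel analog output setup with the NI-DAQmx C API (the thread uses the Measurement Studio VC++ examples; the device name Dev1/ao0:1, the buffer length, and the reliance on on-board regeneration are assumptions, not the poster's actual parameters):

        /* Two-channel 8 kHz square wave at 2 MS/s per channel (250 samples per cycle),
           AO1 inverted with respect to AO0.  "Dev1/ao0:1" is assumed; no error checking. */
        #include <NIDAQmx.h>
        #include <stdio.h>

        #define SAMPS_PER_CYCLE 250                 /* 2,000,000 S/s / 8,000 Hz */
        #define CYCLES          80                  /* 10 ms of data, regenerated continuously */
        #define N               (SAMPS_PER_CYCLE * CYCLES)

        int main(void)
        {
            static float64 data[2 * N];             /* AO0 block followed by AO1 block */
            TaskHandle     task;
            int32          written;
            int            i;

            for (i = 0; i < N; i++) {
                float64 high = (i % SAMPS_PER_CYCLE) < (SAMPS_PER_CYCLE / 2) ? 5.0 : 0.0;
                data[i]     = high;                 /* AO0: 0-5 V, 8 kHz, 50% duty */
                data[N + i] = 5.0 - high;           /* AO1: same wave, inverted     */
            }

            DAQmxCreateTask("", &task);
            DAQmxCreateAOVoltageChan(task, "Dev1/ao0:1", "", -10.0, 10.0,
                                     DAQmx_Val_Volts, NULL);
            DAQmxCfgSampClkTiming(task, "", 2000000.0, DAQmx_Val_Rising,
                                  DAQmx_Val_ContSamps, N);
            DAQmxWriteAnalogF64(task, N, 0, 10.0, DAQmx_Val_GroupByChannel,
                                data, &written, NULL);
            DAQmxStartTask(task);                   /* buffer regenerates until the task stops */
            getchar();                              /* run until Enter is pressed */
            DAQmxClearTask(task);
            return 0;
        }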

    Hi Brett Burger,
    Thanks for your reply. For your information, I have set the sampling rate to 10000; as for the sound format, I have set the bits per sample to 16, the rate to 11025, and the sound quality to mono. I tried your method of changing the sampling rate to 8K, but my program still encounters the same problem.
    I also wish to create a button that generates a preformatted report containing the VI documentation, the data the VI returns, and report properties such as the author, company, and number of pages, only when I click the button.  I have created this in my program, but I am not sure why it is not working. Can you help troubleshoot my program, or do you have any samples to provide? I hope to hear from you soon.
    Many thanks.
    Regards,
    min
    Attachments:
    Heart Sounds1.vi ‏971 KB

  • Time stamps not linear

    Hi,
    I have a DAQ USB-6251 driven by labview.
    The time stamps in the text file have backward and forward jumps in them.  See the attached picture.
    They increase linearly for about 12000 samples, then jump backwards
    about 3 seconds, increase linearly for about 1000 samples, then jump forward
    about 6 seconds.  This cycle then repeats, i.e. increasing nice and
    linearly for about 12000 samples, jumping back, etc.
    I suspect there is a buffer somewhere which fills up and then acts strangely when trying to reset.
    Any explanations?  Any solutions?
    Here's some further info about my setup:
    Producer loop:
    I generate a continuous analog output signal which I feed back into one of the analog input pins, which I also acquire continuously.
    The generation and acquisition both have sample rates of 4096
    samples/sec, and 1024 samples are set to be acquired/generated per loop.
    The recorded input is enqueued.
    Consumer loop:
    Dequeues data and write to text file.
    Regards,
    Jamie
    Using Labview version 8.0
    Attachments:
    time stamps not linear.jpg ‏17 KB

    Ben,
    Maybe this is a good chance for me to learn (be convinced of) something then.  I'd try this out for myself on hw, but I've got tests running now.
    Let's suppose I have both an AI and an AO task, both set up to start off the same sample clock.  However, the AO outputs on the leading edge of a clock while the AI samples on the trailing edge of the clock.  Let's further stipulate that the clock is generated by an on-board counter at 5 kHz with 90% duty cycle.  So the AO update occurs 180 microseconds before the AI.  How do waveforms handle this offset in t0?  Will t0 simply be set to 0 because of the use of an "external" sampling clock?  Or will the two t0 values be equal and non-zero?  Or will they be sometimes equal and sometimes different, depending on the "phase" of the system clock -- either the 1 msec one or the 16 msec one used for system timestamps?
    Now, concerning triggering:  Does t0 represent the time of the trigger?  Or of the first sample / output *after* the trigger?  Or is it the time you call DAQmx Start prior to receiving a trigger signal?
    Other concern: even when not *strictly* necessary, I try to make a habit of making code that runs pretty efficiently, unless that puts an undue burden on development / maintenance effort.  My experience with processing large arrays vs. processing clusters containing large arrays has suggested that pure arrays are typically significantly more efficient to manage.  (I'm sure it depends on sizes and kinds of processing too.)  Aren't waveforms essentially cluster-like?
    Well, enough of the blah, blah, blah.  I really *am* interested.  I know many of the analysis functions prefer (if not require) waveform inputs rather than raw arrays these days, so there are some clear code simplicity advantages to waveforms IF I can be convinced that I'm fully informed of the downsides and gotchas.  (Another example of worry: when integrating a waveform, how does the floating point roundoff accumulate from the 'dt' value?  Will results late in a long array contain more cumulative roundoff error?)
    -Kevin P.

  • Writing arrays to text data with continious time stamp

    Hi Friends,
    I have the following problem.
    I want to create a text file looking like this:
    12,30,10, 10:30:00     12.05.2007
    12,31,10, 10:30:00,1  12.05.2007
    12,32,10, 10:30:00,2  12.05.2007
    12,33,10, 10:30:00,3  12.05.2007
    12,34,10, 10:30:00,4  12.05.2007
    12,35,10, 10:30:00,5  12.05.2007
    12,36,11, 10:30:00,6  12.05.2007
    12,37,12, 10:30:00,7  12.05.2007
    12,38,13, 10:30:00,8  12.05.2007
    12,38,16, 10:30:00,9  12.05.2007
    12,39,18, 10:30:01,0  12.05.2007
    12,39,18, 10:30:01,1  12.05.2007
    12,39,18, 10:30:01,2  12.05.2007
    The first 3 values are read from a file, recorded as an array, and then plotted in a chart.
    The beginning time (e.g. 10:30:00  12.05.2007) is also read from a file and set as the beginning of the chart with a property node. The problem is that after I set the beginning time of the chart, I can do nothing more with it for the rest of the values, since samples/sec is also sent to the chart.
    So how can I write a txt file including the time stamp with ms increments?
    Thanks a lot for your responses.
    john

    Hi John.
    I would use a Time Stamp and "Format Into String" with e.g. %<%.1X>T.
    Syntax details are in the LabVIEW Help - Search for "Format Specifier Examples".
    Regards, Guenter
