Buffer size and samples per channel

I have a question regarding the allocation of the output buffer. I have
a PCI-6534 digital I/O card and I use the NI-DAQmx 7.4.0f0 driver. I
would like to generate digital output using different clock rates.
For example, I need to write 500 samples at 1,000 samples per second and
another 500 samples at a rate of 10,000 samples per second. The simplest
solution is to prepare two different waveforms, write the first one to
the buffer, and generate the output; then I can load the second waveform
into the buffer and generate the second output. To minimize the
delay between the two output sequences, however, I would like to write
the buffer only once. I tried setting the buffer size to 1000 and the number
of samples per channel to 500, but it doesn't work. Is there a way to
set the buffer size and the number of samples to be generated
independently?
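One workaround, sketched below in plain Python (this illustrates the buffer layout only, not an NI-documented 6534 feature): run the task at the faster clock and up-sample the slow section by repeating each of its samples, so both sections sit in a single buffer and are generated back to back with no software delay between them.

```python
def build_combined_buffer(slow_samples, fast_samples, rate_ratio):
    """Up-sample the slow section by repeating each sample so the whole
    buffer can be generated at the fast clock rate in one go.
    rate_ratio = fast_rate // slow_rate (must divide evenly)."""
    upsampled = [s for s in slow_samples for _ in range(rate_ratio)]
    return upsampled + list(fast_samples)

# 500 samples meant for 1 kS/s, then 500 samples at 10 kS/s, all clocked
# at 10 kS/s: the buffer holds 500 * 10 + 500 = 5500 samples.
combined = build_combined_buffer([0] * 500, [1] * 500, 10_000 // 1_000)
```

The cost is a larger buffer (the slow section is stored at the fast rate), but both sequences then play out hardware-timed with no gap.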

I can post the whole thing, but I'll talk a little about it first. It's a producer/consumer loop, with a continuous analog input (sampled from an external clock) in the producer loop. Right now the consumer loop just has a simple Write to Spreadsheet VI in it, but eventually I want to average each revolution (well, every two, since it's a four-stroke, but that's neither here nor there) of pressure traces and spit out a single curve.
The wiring is simple. I have a voltage supply feeding the encoder, and the quadrature A output goes to PFI 8 on the 6212 DAQ. I also have the Z index plugged in, but nothing else. The analog input is a BNC straight to AI 0. I can make a diagram if you want one. I've scoped the rotary encoder output and it looks great: very square, with short relative rise times.
Attachments:
Yanmar Program V2.vi ‏46 KB
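The per-revolution averaging planned for the consumer loop can be sketched in plain Python (the samples-per-revolution value is an assumption; the real number comes from the encoder's counts per revolution and the sampling scheme):

```python
def average_revolutions(samples, samples_per_rev):
    """Split a long pressure trace into revolutions and average them
    point-by-point into a single representative curve.
    Any trailing partial revolution is dropped."""
    n_revs = len(samples) // samples_per_rev
    if n_revs == 0:
        raise ValueError("not enough samples for one full revolution")
    revs = [samples[i * samples_per_rev:(i + 1) * samples_per_rev]
            for i in range(n_revs)]
    # zip(*revs) pairs up the same crank angle across revolutions.
    return [sum(col) / n_revs for col in zip(*revs)]
```

For the four-stroke case, the same function works with samples_per_rev doubled so each averaged block covers one full cycle (two revolutions).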

Similar Messages

  • Sample clock vs sample per channel per second

    Referring to this link http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/dbe7ac32661bcf9b86256ac000682154?OpenDocument
    What's the difference between sample clock and sample per channel per second? Looks the same to me.

    The sample clock is the hardware clock that sets the timebase for conversions on the board. The samples per channel per second is the actual rate at which data can be acquired or transferred on a particular channel.
    Alex A.
    Applications Engineer
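    One concrete way to see the difference (illustrative numbers; on multiplexed boards a single ADC is shared, so the per-channel rate is the board's aggregate rate divided by the channel count):

```python
def per_channel_rate(aggregate_rate, num_channels):
    """On a multiplexed DAQ board the ADC's maximum sample rate is
    shared, so each channel gets at most aggregate_rate / num_channels
    samples per channel per second."""
    return aggregate_rate / num_channels

# e.g. a 250 kS/s multiplexed board sampling 8 channels gives each
# channel at most 31.25 kS/s, even though the sample clock runs at
# the full 250 kHz.
rate = per_channel_rate(250_000, 8)
```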

  • Sample per channel and sample to read

    Hello everybody
    I am new to LabVIEW and I am having some difficulty with something.
    I don't know exactly what the difference is between samples per channel and samples to read. I believe that samples per channel is the size of the buffer, which is larger than the sample rate, but I don't know what samples to read is.
    I've tested with different values of samples per channel and samples to read. Sometimes I get an error and sometimes not, and I would like to know why. If you have any example to help me understand better, that would be great.
    I really need to understand this part for my project.
    Thanks for your help
    Romaric GIBERT
    Solved!

    Hi Roro,
    As you mentioned, when acquiring continuous samples you can specify the sample buffer size by wiring a value to the "samples per channel" input of the Timing VI. The "number of samples per channel" input on the Read VI, which automatically names a created control/constant "samples to read", specifies the number of samples you wish to pull out of the buffer in one go when reading multiple (N) samples. This link may provide a bit more clarification. I have also attached a good example from the NI Example Finder which you may find useful to explore. I'm assuming you are using the DAQmx driver set, so please let me know if this is not the case, but the same principles should apply either way.
    This means that when sampling at a given rate, you need to ensure you are pulling data out in big enough 'chunks' to prevent the buffer from overflowing (which may well be what is causing the error you are seeing). Conversely, if your sampling rate is slow and your Read VI has to wait for the specified number of samples to become available, it may throw a timeout error. You can avoid this by increasing your sampling rate, reducing your samples to read, or increasing the timeout specified at the Read VI input (-1 means it will wait indefinitely).
    Let me know if this helps and how you get on.
    All the best.
    Paul
    http://www.paulharris.engineering
    Attachments:
    Cont Acq&Graph Voltage-Int Clk.vi ‏27 KB
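    The overflow/timeout trade-off Paul describes can be put into numbers. A minimal sketch (illustrative values, not taken from the attached example):

```python
def read_timing(sample_rate, samples_to_read, buffer_size):
    """Return (seconds one read blocks waiting for data,
    seconds of headroom the buffer holds before it can overflow)."""
    seconds_per_read = samples_to_read / sample_rate
    buffer_seconds = buffer_size / sample_rate
    return seconds_per_read, buffer_seconds

# Reading 100 samples at a time from a 1 kS/s task with a 10,000-sample
# buffer: each read blocks for ~0.1 s and the buffer holds 10 s of data,
# so the loop has plenty of margin before an overflow error.
timing = read_timing(1_000, 100, 10_000)
```

If seconds_per_read approaches the Read VI's timeout you get a timeout error; if the loop iterates slower than seconds_per_read for longer than buffer_seconds, the buffer overflows.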

  • Multiple channels acquisition and Number of samples per channel

    Hi,
    I'm a new LabVIEW user and I need some help transposing an old Traditional NI-DAQ acquisition program into an NI-DAQmx one.
    I followed the tutorial (#4342) but I ran into a problem with the Analog 2D DBL Multiple Channels Multiple Samples DAQmx Read VI.
    I'm trying to acquire 8 voltage signals from an NI USB-6341 device. When the Number of Samples Per Channel I set is below the number of channels (8), the software appears to acquire only a number of channels equal to the number of samples per channel I set.
    Is that normal behaviour?
    Thank you
    Daniele

    This is my code. The problem is the same with or without the code for the scan backlog indexing.
    Tomorrow I will try with the code from the example.
    Thank you
    Daniele
    Attachments:
    acq code.jpg ‏119 KB
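    One possible explanation to rule out: the Multiple Channels, Multiple Samples read returns a channels x samples 2D array, so if the two dimensions get swapped somewhere downstream, setting samples per channel below 8 looks exactly like missing channels. A rough illustration in plain Python (standing in for the 2D array, not the DAQmx API itself):

```python
def read_shape(num_channels, samples_per_channel):
    """Mimic the shape of a DAQmx 2D analog read: one row per channel,
    one column per sample."""
    data = [[0.0] * samples_per_channel for _ in range(num_channels)]
    return len(data), len(data[0])

# 8 channels with only 3 samples per channel is still an 8 x 3 array;
# indexing it as samples x channels would show just 3 "channels".
shape = read_shape(8, 3)
```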

  • How to use ni-6008 and build a four channel data acquisition at a rate of 250 samples per channel and display all the data in a waveform chart

    How can I use an NI USB-6008 to build a four-channel data acquisition at a rate of 250 samples per channel and display all the data in a waveform chart?

    Hi kdm,
    please stick to one thread for the same topic!
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • Data retrieval buffers - buffer size and sort buffer size

    Is there any difference in tuning the data retrieval buffers for BSO versus ASO?
    From the Oracle documentation, the buffer size setting is per database per Essbase user, i.e. more physical memory will be used if there is a lot of concurrent data access from users.
    However, even for 100 concurrent users, the default buffer size of 10 KB (BSO) or 20 KB (ASO) seems very small compared to other cache settings (total buffer cache is 100 * 20 KB = 2 MB). Should we increase the value to 1000 KB to improve data retrieval performance for users? Is the improvement the same for an online application (e.g. Hyperion Planning) and a reporting application (e.g. Financial Reporting)?
    Assume 3 Essbase plan types with 100 concurrent access:
    PLAN1 - 1000 KB * 100 = 100 MB (retrieval buffer) + 1000 KB * 100 = 100 MB (sort buffer)
    PLAN2 - 1000 KB * 100 = 100 MB (retrieval buffer) + 1000 KB * 100 = 100 MB (sort buffer)
    PLAN3 - 1000 KB * 100 = 100 MB (retrieval buffer) + 1000 KB * 100 = 100 MB (sort buffer)
    Total physical memory required is 600MB.
    Thanks in advance!


  • Does Samples Per Channel Matter when Taking Continuous Samples?

    So, noob question of the day. I have been playing around with the lightbulb (execution highlighting) feature. Anyway, I notice my array size is greater than my samples per channel input. Does the DAQmx Sample Clock function even look at the samples per channel input when the sample mode is set to Continuous?

    Did you check the help file for the DAQmx sample clock function?
    "samples per channel specifies the number of samples to acquire or generate for each channel in the task if sample mode is Finite Samples. If sample mode is Continuous Samples, NI-DAQmx uses this value to determine the buffer size."
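    To put numbers on that help excerpt: in Continuous mode, if the value you wire is too small (or unwired), the driver falls back to a documented default buffer size chosen from the sample rate. A sketch of that mapping (thresholds as documented in the NI-DAQmx Help's buffer-size topic; verify against your driver version):

```python
def default_input_buffer(sample_rate):
    """Approximate NI-DAQmx default input buffer size (samples per
    channel) chosen from the sample rate in Continuous mode.
    Thresholds follow the NI-DAQmx Help; check your driver's version."""
    if sample_rate <= 100:
        return 1_000
    if sample_rate <= 10_000:
        return 10_000
    if sample_rate <= 1_000_000:
        return 100_000
    return 1_000_000

# At 1 kS/s the driver allocates a 10,000-sample buffer, which is one
# reason the array you read back can exceed the samples per channel
# value you wired.
size = default_input_buffer(1_000)
```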

  • How to Extract the Freq List, Cycles per Freq and Samples per Cycle of a Sweep Waveform

    How can I extract the frequency list, cycles per frequency, and samples per cycle of a swept waveform?
    I want to extract the frequency distribution, cycles per frequency, and samples per cycle of a swept waveform, in order to output the same swept waveform that I acquired with my NI DAQ card. Thanks!
    owen wan
    Attachments:
    Untitled 1.vi ‏2333 KB

    Look inside the palette called Signal Processing - Waveform Measurements. There are a lot of functions there that you can use to get the information you want. For instance, the Extract Tones function will output an array of clusters, with each cluster element giving the frequency, amplitude, and phase of a signal component. Go through the entire array to see each frequency component of the complex waveform.
    Also, in the Waveform palette there is the Get Waveform Components function that will give you the t0, dt, and Y components of the waveform. 1/dt should give you the sample rate. See the attached VI.
    - tbob
    Inventor of the WORM Global
    Attachments:
    WfmInfo.vi ‏4658 KB
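    tbob's point that 1/dt gives the sample rate, in plain numbers (illustrative values, not taken from the attached VI):

```python
def waveform_sample_rate(dt):
    """Given the dt component of a LabVIEW waveform (seconds between
    consecutive samples), return the sample rate in samples/second."""
    return 1.0 / dt

# A waveform acquired at 1 kS/s has dt = 0.001 s between samples.
rate = waveform_sample_rate(0.001)
```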

  • Buffer Size and Latency

    I knew the two might be connected, but I recently read some information that confused me. I thought a larger buffer size was good, but I've just read that increasing the buffer size lengthens latency. Can anyone enlighten me on the relationship between these? Just simple answers, please.

    Audio is sent through the system in chunks of the buffer size. Buffer size is measured in samples, so a buffer setting of 256 samples requests and provides audio in 256-sample packets. In a real-time application the audio cannot be predicted, so the buffer reads from the present/past rather than looking ahead. Each chunk is handed off to Core Audio that much time later. So...
    A larger buffer increases the track/plug-in count, since you allow the computer more time to manage its tasks.
    A smaller buffer reduces total system throughput but gives you less latency.
    Many people who need to dynamically alter system resources have found a good balance to be:
    a low buffer size while recording, for minimal latency;
    then a larger buffer for mixdown, allowing more plug-ins and tracks.
    Your latency is also dependent on your audio hardware and system buses, so a good interface and driver on a dedicated FireWire bus can allow you to decrease your buffer setting.
    J
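    The relationship is plain arithmetic: one buffer's worth of latency is the buffer size divided by the sample rate. A quick sketch (44.1 kHz is assumed for illustration):

```python
def buffer_latency_ms(buffer_samples, sample_rate):
    """Latency contributed by one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000

# At 44.1 kHz, a 256-sample buffer adds about 5.8 ms per buffer,
# while a 64-sample buffer adds about 1.5 ms.
latency = buffer_latency_ms(256, 44_100)
```

Note that total round-trip latency is larger in practice (input and output buffers, plus converter and driver overhead), which is why a loopback test is the only reliable measurement.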

  • Buffer size and recording delay

    Hi
    I use a Focusrite Saffire LE audio interface, and in my core audio window the buffer size is currently set at 256 and there is a recording delay of 28ms. I don't think I've ever altered these settings since I started to use Logic. I don't know if this is related, but I do notice a slight delay when I monitor what I'm recording -- a slight delay, for example, between my voice (as I'm recording it) through the headphones and my "direct" voice.
    Anyway, are these settings OK, or should they be altered for optimum performance?

    A 256-sample buffer size will always give you a noticeable amount of latency. If you use software monitoring, you should try setting your buffer to 64 samples. With the recording delay slider in Preferences > Audio you can compensate for the latency (though not in real time) so that the audio is placed exactly where it should have been recorded; in your case, set it to a negative value. A loopback test (check the link below) will clarify the exact amount of latency occurring on your system.
    http://discussions.apple.com/thread.jspa?threadID=1883662&tstart=0

  • T61p 200gb at 7200rpm buffer size and real world performance?

    Hi, I wanted to confirm the buffer size on the 200 GB, 7200 rpm drive option of the T61p; there seems to be conflicting information regarding this model. I'd also appreciate any information regarding the real-world performance of this particular drive...
    Thanks!
    Message Edited by carthikv12 on 05-19-2008 09:31 AM
    Solved!

    Both the Hitachi and Seagate 200 GB/7200 rpm drives used in the T61p have 16 MB of onboard cache. Performance tests of these two drives are scattered across the internet; search Google for "hitachi 200GB 7K200" and "seagate 200GB 7200.3" respectively.
    ThinkStation C20
    ThinkPad X1C · X220 · X60T · s30 · 600

  • Error after "Samples per Channel"

    Hello,
    I am using the cRIO system NI 9074 with the A/D converter module NI 9239. Data are to be acquired continuously and then saved. However, data are always lost after a number of samples equal to "samples per channel". This is also visible in the graph displayed in LabVIEW: with a sine signal at the input, a jump becomes visible.
    The FIFO's memory is large enough; at least, no overflow is indicated. Can anyone help me?
    jukr
    P.S. I am using LabVIEW 2010
    Attachments:
    FPGA.vi ‏77 KB
    PC.vi ‏365 KB

    Hi Jukr,
    it is useful to write forum posts in English; you will get many more replies :-)
    I don't expect the error to be caused by the measurement itself. Maybe some parts of the network streaming are not stable enough?
    Have a look at the following example and compare it to your network connections. You might find it useful!
    Reference Example for Streaming Data from FPGA to cRIO to Windows
    If the example doesn't help you find the error, please attach your project.
    Have a nice day,
    Stefan Egeler
    NI Germany

  • Out of Memory Error, Buffer Sizes, and Loop Rates on RT and FPGA

    I'm attempting to use an FPGA to collect data, transfer that data to an RT system via a DMA FIFO, and then stream it from my RT system over to my host PC. I have the basics working, but I'm having two problems...
    The first is more of a nuisance. I keep receiving an Out of Memory error. This is a more recent development, but unfortunately I haven't been able to track down what is causing it and it seems to be independent of my FIFO sizes. While not my main concern, if someone was able to figure out why I would really appreciate it.
    Second, I'm struggling with overflows. My FPGA is running on a 100 MHz clock and my RT system simply cannot seem to keep up. I'm really only looking at recording 4 seconds of data, but it seems that no matter what I do I can't escape the problem without making my FIFO size huge and running out of memory (this was before I always got the Out of Memory error). Is there some kind of tip or trick I'm missing? I know I can set my FPGA to a slower clock but the clock speed is an important aspect of my application.
    I've attached a simplified version of my code that contains both problems. Thanks in advance for taking a look and helping me out, I appreciate any feedback!

    David-A wrote:
    The 7965 can't stream to a host controller faster than 800MB/s. If you need 1.6GB/s of streaming you'll need to get a 797x FPGA module.  
    I believe the application calls for 1.6 GB over 4s, so 400 MB/s, which should be within the capabilities of the 7965.
    I was going to say something similar about streaming over ethernet. I agree that it's going to be necessary to find a way to buffer most if not all of the data between the RT Target and the FPGA Target. There may be some benefit to starting to send data over the network, but the buffer on the host is still going to need to be quite large. Making use of the BRAMS and RAM on the 7965 is an interesting possibility.
    As a more out-there idea, what about replacing the disk in your 8133 with an SSD? I'm not entirely sure what kind of SATA connection is in the 8133, and you'd have to experiment to be sure, but I think 400 MB/s or close to that should be quite possible. You could write all the data to disk and then send it over the network from disk once the acquisition is complete. http://digital.ni.com/public.nsf/allkb/F3090563C2FC9DC686256CCD007451A8 has some information on using SSDs with PXI controllers.
    Sebastian
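    Sebastian's 400 MB/s figure is simple arithmetic; a sketch (the 1.6 GB total and 4 s window are from the thread, the function name is illustrative):

```python
def required_stream_rate(total_bytes, seconds):
    """Sustained throughput needed to move total_bytes within the
    given acquisition window."""
    return total_bytes / seconds

# 1.6 GB captured over 4 s needs 400 MB/s sustained, which is within
# the 7965's stated 800 MB/s host-streaming limit.
rate = required_stream_rate(1.6e9, 4)
```

The same arithmetic sizes the safety buffer: every second the RT side falls behind that rate, another 400 MB has to be absorbed somewhere (FIFO, RAM, or disk).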

  • Tangent In and Out per channel?

    I can get the tangent in and out for the curve as shown in the composite view, but is it possible to obtain the tangents from the individual x, y, and z curves as shown in the graph editor?


  • Is it possible to adjust FPS, video size, and quality per the bitrate while playing a live stream?

    So far, I have only found tutorials on controlling these attributes while publishing the stream.
    How can I adjust them while playing?

    That article judges the bandwidth and assigns an appropriate video (these videos are prepared beforehand) to play.
    But I can't get multiple live video streams with different settings out of a single PC camera,
    and I can't require each user to install multiple cameras to use the service.
