Timebase data for high sample rate

Hi.
I am running a LabVIEW program which is sampling data from a strain gauge module in a cDAQ unit at 2 kHz.
The data is being logged to a TDMS file. The problem I am having is finding a clock fast enough to use as a time base for this data. The clock I am currently using is too slow, so I see repeated time values across several rows in the data file.
See attached picture.
So basically I need clock data that can update at least 2000 times a second.
Thank you in advance.
Rhys.

As I already said, the internal timebase is more than capable, so if you are reading multiple samples and specifying the internal clock, you should not have any problems. I don't know whether your problem is in how you are recording the data or in how you are reading it, but the samples should be 0.5 ms apart.
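The fix this answer implies, deriving each row's timestamp from the acquisition start time and the known sample interval instead of polling a software clock per row, can be sketched in Python. This is an illustration only, not LabVIEW code; the start time and rate below are made-up values.

```python
from datetime import datetime, timedelta

def sample_timestamps(t0: datetime, rate_hz: float, n_samples: int):
    """Derive one timestamp per sample from the acquisition start time
    and the sample clock, instead of polling a slow software clock."""
    dt = timedelta(seconds=1.0 / rate_hz)
    return [t0 + i * dt for i in range(n_samples)]

t0 = datetime(2024, 1, 1, 12, 0, 0)   # hypothetical acquisition start
ts = sample_timestamps(t0, 2000.0, 4)
# consecutive stamps are 0.5 ms apart, so no two rows repeat
```

In LabVIEW itself, the waveform data type's t0 and dt components give you the same thing directly from the driver.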

Similar Messages

  • Can we place the Analog Input Read (AI Read) VI inside a while loop for a high sample rate like 22 kS/s?

    I am using an E-Series card for data acquisition. My requirement is to sample the channel and check the 10 samples for a certain condition, both at the same time. What should be done? Can we place the AI Read VI inside a for or while loop for this purpose?

    Hello,
    Yes, you can include AI Read.vi inside the while loop; you would just need to specify the number of scans to read for every iteration of the loop. Then, after AI Read.vi has read the data, you can do whatever kind of manipulation of the data you would like before the next iteration of the loop. The one thing to watch out for is that whatever manipulation you do doesn't take so long that the buffer holding the data starts to back up. That can be checked by looking at the scan backlog output of AI Read.vi, which will tell you how many scans have been acquired but haven't been read into your program.
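The read-loop-with-backlog pattern described above can be sketched in Python. `BufferedReader` here is a hypothetical stand-in for the driver's buffered read, not a real API; the point is reading a fixed number of scans per iteration and watching the backlog.

```python
class BufferedReader:
    """Hypothetical stand-in for a DAQ driver's buffered read."""
    def __init__(self, buffered_scans):
        self._buffer = list(buffered_scans)

    def read(self, n_scans):
        chunk, self._buffer = self._buffer[:n_scans], self._buffer[n_scans:]
        backlog = len(self._buffer)   # scans acquired but not yet read
        return chunk, backlog

reader = BufferedReader(range(100))   # simulated acquisition buffer
scans_per_read = 10
while True:
    data, backlog = reader.read(scans_per_read)
    if not data:
        break
    # ... manipulate `data` here; keep this step fast ...
    if backlog > 5 * scans_per_read:
        print("warning: processing is falling behind the acquisition")
```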
    Hope this helps!
    Regards,
    Steven B.
    Applications Engineering, NI

  • How to build an array with high sampling rates (>1 kS/s)

    Hi All:
    Now I am trying to develop a project with cRIO.
    But I am not sure how to build an array from a high-sampling-rate signal, like >1 kS/s (single-point data).
    Before, I used "Build Array" and a shift register to build the array, but I found that it does not work for high sampling rates.
    Is there any other good way to build a data array for high sampling rates?
    Thanks
    Attachments:
    Building_Array_high_rates.JPG ‏120 KB

    Can't give a sample of the FPGA right now, but here is a sample bit of RT code I recently used. I am acquiring data at 51,200 samples per second. I put the data in a FIFO on the FPGA side, then I read from that FIFO on the RT side and insert the data into a pre-initialized array using "Replace Array Subset", NOT "Insert Into Array". I keep a count of the data I have read/inserted, and once I am at 51,200 samples, I know I have 1 full second of data. At that point, I add it to a queue which sends it to another loop to be processed. Also, I don't use the new-index terminal in my subVI because I know I am always adding 6,400 elements, so I can just multiply my counter by 6,400; but if you use the method described further down, you will want the "new index" to return a value, because you may not always read the same number of elements with that method.
    The reason I use a timeout of 0 and a Wait Until Next ms Multiple is that if you wire a timeout to the FIFO read node, it spins a loop in the background that polls for data, which rails your processor. Depending on what type of acquisition you are doing, you can also use the method of reading 0 elements and then using the "elements remaining" output to wire up another read node, as shown below. This was not an option for me because of my program's architecture and needing chunks of 1-second data; had I used this method, it would have overcomplicated things if I read more elements than I had available in my 51,200-sample buffer.
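As a rough, non-LabVIEW illustration of the RT-side pattern above (pre-allocated buffer, replace in place rather than insert, hand off full seconds to a queue), here is a Python sketch; the FIFO reader is simulated, and the chunk and rate figures are taken from the post.

```python
import queue

FIFO_CHUNK = 6_400          # elements per FIFO read (figure from the post)
RATE = 51_200               # samples per second (figure from the post)

def collect_one_second(read_fifo, out_queue):
    """Fill a pre-allocated buffer in place (the 'Replace Array Subset'
    pattern) instead of growing an array, then hand off a full second."""
    buf = [0.0] * RATE                 # pre-initialized, never resized
    count = 0
    while count < RATE:
        chunk = read_fifo(FIFO_CHUNK)  # blocks until data is available
        buf[count:count + len(chunk)] = chunk   # replace, don't insert
        count += len(chunk)
    out_queue.put(buf)                 # send to the processing loop

# Simulated FIFO that always returns full chunks of constant data
q = queue.Queue()
collect_one_second(lambda n: [1.0] * n, q)
one_second = q.get_nowait()
```

The slice assignment keeps the buffer the same size on every write, which is the whole point of the pattern: no reallocation inside the acquisition loop.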
    Let me know if you have more questions.
    CLA, LabVIEW Versions 2010-2013
    Attachments:
    RT.PNG ‏36 KB
    FIFO read.PNG ‏4 KB

  • How can I improve the speed of my VI to work in real-time at higher sample rates?

    I am currently trying to implement a multi-channel control system, the VI of which is attached. It effectively consists of a number of additions and multiplications in the processing subVI (also attached); however, for 7 inputs and 7 outputs I cannot get it to work at sample rates higher than 3 kHz without experiencing an overwrite error. Does anyone have any tips on how I can make the processing more efficient so that it works at higher sample rates?
    Attachments:
    Simul_AIAO_Buffer(Two_Boards)_control_FXLMS_for_1_leaky_no_SCXI.vi ‏128 KB
    FXLMSsub_leak_v2.vi ‏70 KB
    filtered_ref_1_chan.vi ‏39 KB

    Hi mattwilko,
    I believe the first issue that you should address is the building of arrays.
    This is mainly happening on the edge of your loops.
    This is my reasoning.
    Allocating memory in RT to handle a growing array will blow away your determinism.
    I would suggest pre-allocating all of your storage and working with the data in place.
    In LV 7.1 there are some great new tools that will help nail down issues of this type.
    If you have LV 7.1, you may want to upgrade so you can take advantage of the new tools.
    Trying to help,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Record data at one sample rate and calculate velocity

    I need to record position data at a sample rate of 10 Hz, but I would like to calculate velocity every half second. What is the best way to do this so the velocity display does not jump? I want to capture quick responses, but I would also like to calculate the velocity over a longer period of time, like 1 minute.

    I am trying to calculate vertical speed based on pressure-altitude data. I am using serial communication to measure pressure at 10 Hz, and LabVIEW 8.2 for all of the calculations and measurements.
    I am looking for some way to average the data over some time interval that I have yet to determine. I want to be able to see any major pressure changes within about 0.5 seconds, but I also need to see trends over a longer time period, say 2 to 10 seconds.
    I have thought about using an array and then averaging the array, or implementing a rolling digital filter.
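One way to sketch the rolling-average idea, a finite-difference velocity over a sliding window of position samples, with a short window for responsiveness and a long one for trends. This is an illustrative Python sketch, not LabVIEW; the class name and windows are made up.

```python
from collections import deque

RATE_HZ = 10.0   # position samples per second (from the question)

class VelocityEstimator:
    """Finite-difference velocity over a sliding window of positions.
    A short window (0.5 s) reacts quickly; a long one (60 s) smooths."""
    def __init__(self, window_seconds):
        self._n = int(window_seconds * RATE_HZ) + 1
        self._samples = deque(maxlen=self._n)

    def update(self, position):
        self._samples.append(position)
        if len(self._samples) < 2:
            return 0.0
        span = (len(self._samples) - 1) / RATE_HZ   # seconds covered
        return (self._samples[-1] - self._samples[0]) / span

fast = VelocityEstimator(0.5)    # responsive display
slow = VelocityEstimator(60.0)   # long-term trend
for i in range(6):               # position rising 2 units per sample
    v = fast.update(2.0 * i)     # converges to 20 units/s
```

Feeding both estimators the same 10 Hz stream gives a jump-free display value and a trend value from one set of data.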

  • On the fence for which sample rate to record at (44.1 vs 96)

    Been reading tons of posts on the sample rate debate. My friends (across the country) and I are about to start collaborating on the great American rock album that we didn't quite get right back in the day in college. I'll be running the show, sending them scratch tracks with clicks so they can lay down individual tracks and I'll import them.
    I'm torn on which sample rate(s) to use -- and want the best quality possible, of course.  I've boiled it down to the following pros per sample rate.  Any advice/comments much appreciated.  thanks
    44.1 Pros
    - Friends across the country can use GarageBand (44.1kHz is max) to lay down a single track and send to me to import into Logic Pro and mix (and will match up)
    - GarageBand is free, Logic Pro is $199; Apogee JAM is $99, Jam96kHz is $129
    - GarageBand requires fewer resources (1GB RAM vs 4GB for Logic Pro) so they don’t have to have beefy Macs
    - Smaller file size to post/share (Dropbox cost per/storage size)
    - Picture mixing a bunch of 44.1 tracks vs a bunch of 96 tracks even if on my beefy Mac running Logic Pro; fan running, gasping for air etc?
    96 Pros
    - 96 sounds noticeably better than 44.1?
    - While 44.1 standard for CDs, that’s no longer how music is largely distributed
    - Should always record to max resolution you can be “future proofed”?
    - Current lossy formats support up to 48kHz so 96kHz good as will cut cleanly in half when bouncing
    88.2 Pros
    - Maybe choose this so friends using GarageBand can record at 44.1 so upscales more cleanly?  But then bouncing down to 48kHz not as clean? (I don’t recall Logic Pro allowing me to choose 44.1 for bounce rate.. always says 48 for MP3/M4A)
    thanks!

    44.1 kHz still is pretty much standard for MP3's.
    Your friends/collaborators can pretty much use any application that can record PCM (or even MP3) audio; even if they're not playing to a steady tempo, you can line everything up in Logic, with flex.
    Using Garageband and one set tempo should also work. Just remember that you cannot open Logic files in Garageband, only Garageband files in Logic. The Audio Files recorded by either, can be used (imported) by either.
    Higher sampling rates will not "future proof" anything. In fact, that whole concept is flawed. Your best bet for now is simply 44.1 kHz 24 bit uncompressed PCM files in their most widely used form: AIFF or WAV.
    96 does not noticeably sound better than 44.1, unless you have a top end interface and a very delicate and very complicated mix, and admirably acute hearing. In some interfaces 96 or 88.2 have been found to sound worse than 44.1, because of clocking inaccuracies getting progressively worse at higher sampling frequencies. I would stick to 44.1, it has lots of practical advantages (as you pointed out), and the sonic difference with 96 kHz is marginal at best, and certainly not worth the price: "double" rates need double the CPU power for any plugin processing. That's the biggest loss. Half a Mac.
    Bitdepth on the other hand does make a significant difference. There is no reason not to record everything at 24 bits. Shorter: always record at 24 bits.
    Oh, and I also just spotted your remark about Logic not "letting you" bounce MP3/M4A at 44.1 kHz. You must be remembering incorrectly, because I never bounce MP3 or AAC at any frequency other than 44.1 kHz. However, it may be that this rate is tied to the project's sampling frequency as set in the project settings, and the last time I used 48 kHz was in LP 8. I'll check that now.

  • Need to Import Audio Books at higher sample rates in iTunes 8

    Folks,
    In iTunes 7, I could import audiobooks at 32k mono and they would sound good (after all, it's just someone reading; it doesn't need stereo or a high sample rate). However, in iTunes 8, at 32k the sound crackles and is distorted. Quality is OK at 64k mono, but obviously now the files are twice as big. Am I right to blame iTunes 8? Is there some way I can use iTunes 7 encoding?

    I wouldn't have believed it if I hadn't tried it. To get a non-crackling 32 kbps mono AAC you have to:
    - Select the AAC Encoder's "Custom..." from the drop-down
    - Change the (stereo) bit rate to 64kbps
    - Leave sample rate on auto
    - Change channels to mono
    - (And most importantly) keep the "optimize for voice" box UNchecked
    I tested it several times with different audiobook CDs and settings and it seems that the voice optimization causes mono tracks of any bit rate to be garbled and crackle. Hope that helps!
    (And I'm pretty sure OS 10.5.6 is out. Check your Software Update under the Apple menu.)

  • Is it possible to detect low frequency signals with a high sampling rate?

    Hello everyone,
    I'm having an issue detecting low frequency signals with a high sampling rate.  Shouldn't I be able to detect the frequencies as long as the sampling rate is at least 2 times the highest frequency I will measure?  The frequency range I am measuring is 5-25 Hz, and I use Extract Single Tone.vi to measure the frequency.  The sampling setting I am using is 2 samples at 10 kHz.  Is there a method I can use to make this work?
    Attached is the vi.
    Attachments:
    frequencytest.vi ‏21 KB

    You are sampling at 10 kS/s but only taking two samples. What do you expect to see? If your signals were binary (on or off), you would only see either an on or an off, or, if the rise/fall time was fast and you were extremely lucky, one of each. If you want to see a waveform, you have to sample for at least the period of that waveform, so you should take samples for at least 0.2 seconds to capture an entire cycle at 5 Hz, ideally longer. Think of looking at a tide change at a dock. If you want to see the entire tide change, you will probably have to measure repeatedly over 24 hours, not just run out on the dock, measure the height twice, and leave. That wouldn't tell you anything other than that at that precise moment the tide height was X, not whether it was at high tide, low tide, or in between.
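The record-length arithmetic behind this answer fits in a few lines of Python; the helper name is made up for illustration.

```python
import math

def min_samples_for_tone(f_min_hz, sample_rate_hz, periods=1.0):
    """Samples needed so the record spans `periods` full cycles of the
    lowest frequency of interest. Two samples at 10 kHz cover only
    0.2 ms, far less than the 200 ms period of a 5 Hz tone."""
    return math.ceil(periods * sample_rate_hz / f_min_hz)

n = min_samples_for_tone(5.0, 10_000.0)   # one full 5 Hz period at 10 kS/s
# n == 2000 samples, i.e. a 0.2 s record
```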
    I type too slowly, I see that a more technical answer has been given, so mine will be the philosophical one!
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

  • High sample rate data acquisition using DAQ and saving data continuously. Also I would like to chunck data into a new file in every 32M

    Hi: 
      I am very new to LabVIEW, so I need some help coming up with an idea that can help me save data continuously in real time. I also don't want the file to be too big, so I would like to create a new file every 32 megabytes and clear the previous buffer. Right now I have code that can save voltage data to a TDMS file, and the sample rate is 2 MHz, so the volume of data increases very fast; my computer only has 2 GB of RAM, so it freezes about 10 seconds after I start to collect data. I need some advice from you brilliant people.
    Thanks very much I really appreciate that. 
    Attachments:
    hispeedisplayandstorage.vi ‏33 KB

    I am a huge proponent of the Producer/Consumer architecture, but this is one place where I advise against it. DAQmx Configure Logging does all of it for you!
    Note: you will want to use a chart instead of a graph here.
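For illustration only, here is a minimal Python sketch of the size-based file rotation that the answer says DAQmx logging handles for you. The 32 MB figure comes from the question; the class, file naming, and raw-double format are all invented for the sketch.

```python
import os
import struct

MAX_FILE_BYTES = 32 * 1024 * 1024   # roll to a new file every 32 MB

class RollingWriter:
    """Size-based file rotation: close the current file and open the
    next one whenever a write would exceed the size limit."""
    def __init__(self, directory, max_bytes=MAX_FILE_BYTES):
        self.directory, self.max_bytes = directory, max_bytes
        self.index, self.written, self._fh = 0, 0, None
        self._open_next()

    def _open_next(self):
        if self._fh:
            self._fh.close()
        path = os.path.join(self.directory, f"log_{self.index:04d}.bin")
        self._fh = open(path, "wb")
        self.index += 1
        self.written = 0

    def write_samples(self, samples):
        payload = struct.pack(f"{len(samples)}d", *samples)
        if self.written + len(payload) > self.max_bytes:
            self._open_next()
        self._fh.write(payload)
        self.written += len(payload)

    def close(self):
        self._fh.close()
```

Because each file is bounded, memory and file size stay flat no matter how long the acquisition runs, which is exactly what the built-in logging gives you without any extra wiring.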
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    hispeedisplayandstorage_BD.png ‏36 KB

  • Why is it that I can't do a continuous streaming to disk with a 5102 scope card (PCI) when I can do it with a DAQ Card of much lower specs (my requirement is for small sampling rates only)?

    I am told that the 5102 card (PCI) does not support continuous streaming of data to the hard disk. My application requires only very low sampling rates. If I can do it with a low-spec DAQ card using LabVIEW, why can't I do it with this card?

    Hello,
    The PCI-5102 is a high-speed digitizer card that has a slightly different architecture than the DAQ cards and was not built with the ability to stream data to the PC. However, if you are sampling at low rates, you can still acquire up to 16 million samples, which is done by using DMA to transfer data from the onboard memory of the 5102 to PC memory. You will not, however, be able to save the data to disk until the acquisition is complete.
    Another option would be to purchase either a DAQ card or a PCI-5112. Both boards can continuously stream data to the host PC, and you should not run into any PCI bus limitations if you are streaming to disk at relatively low rates.

  • High sampling rate is not accurate

    Dear alls,
    Does anyone know why the Timed Loop is not running at the defined period in the Real-Time Trace Viewer? In the VI, I set the Timed Loop period to 250 µs, but when I analyze it in the Real-Time Trace Viewer, it shows a real period of about 676 µs. That means my data acquisition will be sampled at the wrong rate.
    Could you give me some explanation, please? And how can I solve this problem? I tested this code on a myRIO board.
    Best Regards,
    Kien 

    crossrulz wrote:
    Why do you have a wait in your timed loop?  The point of the Timed Loop is to state the loop rate.
    Bigger problems than that! Why do you have any timing other than the task timing? Really, the task timing most likely determines the loop rate.
    Can you post your actual VI? I suspect something else is happening here that we cannot see (configurations, channels used, etc.).
    A snippet of the task configuration needs posting too.

  • Conflict between the saved data and the sampling rate and samples to read using PXI 6070e

    Hello, I am using a PXI-6070E to read an analog voltage. I was sampling at 6.6 MHz with samples to read set to 10, which means it should read 10 points every 1.5 µs. The x-axis of the graph on the front panel was showing an ns/µs scale, which I think is because of the fast sampling. I use the "Write to Measurement File" block to save the data. However, the data was saved every 0.4 seconds, as 35 points at the beginning of each cycle (e.g. 35 points at 0.4 s, 35 at 0.8 s, and so on), with no data in between. Can anyone help me understand why there are 35 points every cycle? I could not find the relation between the sampling rate and samples to read and the 35 points every 0.4 seconds!
    Another thing: do I need to add a filter after acquiring the data (after the DAQ Assistant block)? Is there an anti-aliasing filter built into the PXI-6070E?
    Thanks for the help in advance,
    Alaeddin

    I'm not seeing anything that points to this issue. Your DAQ is set to continuous acquisition; I'm not sure that is really what you want, because your DAQ buffer will keep overwriting. You probably just want to set it to read N samples.
    I'm not a fan of using the Express VIs. And since you are writing to a TDMS file, I would use the Stream to TDMS option in DAQmx. If you use the LabVIEW Example Finder, search for "TDMS Log" for a list of some good examples.

  • Tape synchronization at high sample rates + recording at 192 kHz/24-bit

    I was expecting at least that Apple would fix Emagic's synchronization problem, which occurs when Logic is slaved to a tape recorder at a sample rate other than 44.1 or 48 kHz.
    Furthermore, I get clicks, pops, and digital noise when I try to master at 192 kHz/24-bit most of the time (not synchronized to any code).
    For archiving tapes or going back and forth to tape I use a Digi 002, a Rosentahl WIF (SMPTE to MTC), and Pro Tools, as Logic is incapable of such a task; for mastering and mix-down I use a Lynx AES16 with dCS 904 AD and dCS 954 DA converters with Logic Pro.

    Dear Mr. Logic 8 on Mac Intel.
    I would like to thank you for you suggestions but :
    1] The quality of LTC on my Otari 2inch/24Track is OK.
    2] The LTC to MTC interface that I use with Logic 8 is Unitor 8 MK II.
    3] My AD/DA converters are considered to be la crème de la crème, and I am pretty sure that they are OK.
    4] The HD is an external LaCie and has to be fast, as I can bounce 8 tracks at 96 kHz/24-bit using Pro Tools LE synchronized to tape.
    5] The software is up to date, and trust me, Logic had serious sync issues several versions ago.
    6] I checked and double-checked, even multi-checked, the prefs and settings of Logic.
    It is interesting that Logic 8 locks at 44.1/48 kHz but crashes at any other rate.
    I wonder if the guys of C-Lab (that left Emagic ages ago) are the responsible ones for the synch code and if so I wonder if they could help Apple with the synchronization.
    Thank you again for even bothering to answer.
    Any help would be welcome.

  • Can I mix down to 32 bit at a higher sample rate than 44.1 kHz ?

    When I use Ableton Live, it lets me choose 16, 24, or 32 bit, and then I can choose a sample rate all the way up to 192000. Is this possible in Audition? I have been going through all the preferences and all the tabs and I can't find this option. All I find is a convert option, or the adjust option. But that's not what I want. I want to mix down this way.
    The closest thing I found is when I go to "Export Audio Mix Down": I can select 32 bit, and there is a box for sample rate with all the different values, but it won't allow me to change it from 44100.

    JimMcMahon85 wrote:
    Can someone explain this process in layman's terms:
    http://www.izotope.com/products/audio/ozone/OzoneDitheringGuide.pdf ---> specifically Section VIII, "Don't believe the hype"
    I don't read graphs well; can someone put in layman's terms how to do this test, step by step? And where do I get a pure sine wave to import into Audition in the first place?
    Unbelievable: so I have to first run visual tests using a sine wave to make sure dither is working properly, then do listening tests with different types of dither to hear which I like best on my source material, and then for different source material it's best to use different types of dither techniques? Am I getting this right?
    Hmm... you only need to run tests and do all that crap if you are completely paranoid. Visual tests prove nothing in terms of what you want to put on a CD - unless it's test tones, of course. For the vast majority of use, any form of dither at all is so much better than no dither that it simply doesn't matter. At the extreme risk of upsetting the vast majority of users, I'd say that dither is more critical if you are reproducing wide dynamic range acoustic material than anything produced synthetically in a studio - simply because the extremely compressed nature of most commercial music means that even the reverb tails drop off into noise before you get to the dither level. And that's one of the main points really - if the noise floor of your recording is at, say, -80dB then you simply won't be hearing the effects of dither, whatever form it takes - because that noise is doing the dithering for you. So you'd only ever hear the effect of LSB dither (what MBIT+, etc. does) when you do a fade to the 16-bit absolute zero at the end of your track.
    Second point: you cannot dither a 32-bit floating point recording, under any circumstances at all. You can only dither a recording if it's stored in an integer-based format, like the 16-bit files that go on a CD. Technically, then, you can dither a 24-bit recording, although there wouldn't be any point, because the dither would be at a level impossible to reproduce on real-world electronics, which would promptly swamp the entire effect you were listening for with its own noise anyway. Bottom line: the only signals you need to dither are the 16-bit ones in the file you use for creating CD copies.
    And you should only dither once, hence the seemingly strange instructions in the Ozone guide about turning off the Audition dither when you save the converted copy. The basic idea is that you apply the dither to the 32-bit file during the truncation process, and that dithers the file (albeit 'virtually') just the once. Now, if you do the final file conversion in Audition, you need to make sure the dithering is turned off during the process, otherwise all the good work that Ozone did is undone. What you need to do is transfer the Ozone dithering at the 16-bit level directly to a proper 16-bit file, without anything else interfering with it at all.
    So what you do with Ozone is apply the dithering to your master file and save that as something else; don't leave the master file like that at all. In fact, after saving, undo the changes to the original; otherwise it's effectively no longer a file you could use to generate a master with a greater-than-16-bit depth, because it will all have been truncated. Small point, but easy to overlook!
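The truncate-with-dither step described above can be sketched in Python. This is a plain TPDF (triangular) dither for illustration, not Ozone's MBIT+ or any shaped dither, and the function name is made up.

```python
import random

def dither_to_16bit(float_samples, seed=0):
    """Quantize float samples in [-1.0, 1.0] to 16-bit integers with
    TPDF dither: add ~+/-1 LSB triangular noise before rounding, so
    quiet material decays into noise instead of 'crunchy' truncation
    distortion at the end of a fade."""
    rng = random.Random(seed)
    out = []
    for x in float_samples:
        scaled = x * 32767.0
        # Triangular PDF: sum of two independent uniform +/-0.5 LSB sources
        noise = (rng.random() - 0.5) + (rng.random() - 0.5)
        v = int(round(scaled + noise))
        out.append(max(-32768, min(32767, v)))   # clip to int16 range
    return out

samples = dither_to_16bit([0.0, 0.5, -0.5, 1.0])
```

Note that the dither is applied exactly once, at the final 32-bit-to-16-bit conversion, matching the "only dither once" rule above.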
    Just what's the easiest way to test whether a simple dithering setting is working for 32-bit down to 16-bit in Audition? Why is there no info about dithering from 32-bit to 16-bit (which is better than dithering from 24-bit, isn't it)?
    I hope that the answers to at least some of this are clearer now, but just to reiterate: the easiest way to test whether it's working is to burn a CD with your material on it and, at the end of a track, turn the volume right up. If it fades away smoothly to absolute zero on a system with lower noise than the CD produces, then the dither has worked. If you hear a strange, sort-of 'crunchy' noise at the final point, then it hasn't. There is info about the 32-to-16-bit dithering process in the Ozone manual, but you probably didn't understand it, and the reason there's nothing worth talking about in the Audition manual is that it's pretty useless. Earlier versions of it were better, but Adobe didn't seem to like that too much, so it's been systematically denuded of useful information over the releases. Don't ask me why; I don't know what the official answer to the manual situation is at all, except that manuals are expensive to print and also have to be compatible with the file format for the help files, which are essentially identical to it.
    Part of the answer will undoubtedly be that Audition is a 'professional' product, and that 'professionals' should know all this stuff already, therefore the manual only really has to be a list of available functions, and not how to use them. I don't like that approach very much - there's no baseline definition of what a 'professional' should know (or even how they should behave...), and it's an unrealistic view of the people that use Audition anyway. Many of them would regard themselves as professional journalists, or whatever, but they still have to use the software, despite knowing very little about it technically. For these people, and probably a lot of others, the manual sucks big time.
    It's all about educating people in the end - and as you are in the process of discovering, all education causes brain damage - otherwise it hasn't worked.

  • Wave form chart for different sampling rate

    Hi All,
    I have to use different sampling rates to get pressure data. Can I use a waveform chart to monitor the pressure data over time?
    If not, what kind of graph should I use?

    Different sampling rate than what? Either a chart or a graph can be used; it depends mostly on how you want to update it.
