DASYLAB QUERIES on Sampling Rate and Block Size

HELP!!!! I have been wrestling with DASYLab for a few weeks over some problems and haven't come to any conclusion yet. I hope someone can help. Lots of thanks!
1. I need more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
At low sampling rates (SR < 100 Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But the problem starts when SR > 100 Hz with BS = 1: the time recorded in DASYLab differs from the real time. To work around this time difference, I decided to use the "AUTO" block size.
Qn1: Is there any way to solve the time difference problem for high SR?
Qn2: With AUTO block size, is the value DASYLab records at a given moment the actual measured value, or has it been overwritten by a value from the previous block?
2. I've recorded results both with BS = 1 and with BS set to auto. Regardless of the sampling rate, the values obtained with BS = 1 are always larger than those with the auto block size. Qn1: Which is the actual result of the test?
Qn2: Is there a best combination of block size and sampling rate that I should use?
Hope someone is able to help me with the above problem.
Thanks-a-million!!!!!
Message Edited by JasTan on 03-24-2008 05:37 AM

Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
If your sample rate is 1000, the block size should be no larger than 500 and no smaller than 100.
Very large block sizes that encompass more than 1 second's worth of data often cause display delays that frustrate users.
Very small block sizes that hold less than 10 ms of data cause DASYLab to bog down.
A sample rate of 100 samples/second with a block size of 1 is going to cause DASYLab to bog down.
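As a rough illustration of that rule of thumb, here is a small Python sketch; the helper and its defaults are mine, not anything built into DASYLab:

    def suggest_block_size(sample_rate_hz, ratio=5):
        """Pick a block size so that sample_rate / block_size stays in the
        recommended 2:1 to 10:1 range, i.e. 0.1 s to 0.5 s of data per block."""
        ratio = min(max(ratio, 2), 10)                  # clamp to the 2..10 range
        block_size = max(1, round(sample_rate_hz / ratio))
        block_ms = 1000.0 * block_size / sample_rate_hz
        return block_size, block_ms

    for sr in (100, 1000, 10000):
        bs, ms = suggest_block_size(sr)
        print(f"{sr:>6} Hz -> block size {bs:>5} ({ms:.0f} ms per block)")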
There are many factors that contribute to performance, or lack thereof - the speed and on-board buffers of the data acquisition device, the speed, memory, and video capabilities of the computer, and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above, and suggest that you experiment with various settings, as you have done.
Usually the only reason you would want a small block size is for closed-loop control applications. My usual advice is that DASYLab can handle control at around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
Q1 - without knowing more about your hardware, I cannot answer the question, but, see above. Keep the block size ratio between 2:1 and 10:1.
Q2 - without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
Q3 - See above.
It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
- cj
Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.

Similar Messages

  • DASYLAB QUERIES on Sampling Rate and global variable

    Hello,
    I'm a new user of DASYLab and I would like to control the sample rate from a layout; using a global variable seems like a good idea, but I don't know the address of the sample rate variable.
    I hope someone can help!
    Also, one other small thing: when I use a coded switch and want to modify the text in the switch window, I don't know how.
    Lots of thanks!

    Hi,
    There is a dedicated place in this forum where you'll certainly get more answers than here :
    http://forums.ni.com/t5/DASYLab/bd-p/50
    Regards,
    Da Helmut

  • Conflict between the saved data and the sampling rate and samples to read using PXI 6070e

    Hello, I am using a PXI 6070e to read an analog voltage. I was sampling at 6.6 MHz and the samples to read were 10, which means it should read 10 points every 1.5 µs. The x-axis of the graph on the front panel was showing an ns and µs scale, which I think is because of the fast sampling and acquisition. I use the "Write to Measurement File" block to save the data. However, the data was saved every 0.4 second, as 35 points at the beginning of each cycle (e.g. 35 points at 0.4 s, 35 at 0.8 s, and so on), with no data in between. Can anyone help me understand why there are 35 reading points every cycle? I could not relate the sampling rate and samples to read to 35 points every 0.4 second!
    Another thing: do I need to add a filter after acquiring the data (after the DAQ Assistant block)? Is there an anti-aliasing filter built into the PXI 6070e?
    Thanks for the help in advance,
    Alaeddin

    I'm not seeing anything that points to this issue.  Your DAQ is set to continuous acquisition.  I'm not sure this is really what you want, because your DAQ buffer will keep overwriting.  You probably just want to set it to Read N Samples.
    I'm not a fan of using the express VIs.  And since you are writing to a TDMS file, I would use the Stream to TDMS option in DAQmx.  If you use the LabVIEW Example Finder, search for "TDMS Log" for a list of some good examples.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
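    As a rough sketch of the "Read N Samples" advice above, here is a minimal finite acquisition using the Python nidaqmx wrapper; the device name, channel, rate, sample count, and TDMS file name are placeholders, not values from the original worksheet:

        import nidaqmx
        from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

        RATE = 1_000_000        # samples/s - use a rate your device supports
        N_SAMPLES = 1000        # finite number of samples to read per channel

        with nidaqmx.Task() as task:
            task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # placeholder channel
            # Finite acquisition: the task stops after N_SAMPLES, so nothing in
            # the buffer gets overwritten.
            task.timing.cfg_samp_clk_timing(RATE,
                                            sample_mode=AcquisitionType.FINITE,
                                            samps_per_chan=N_SAMPLES)
            # Optional: stream the raw data straight to a TDMS file while reading.
            task.in_stream.configure_logging("scan.tdms",
                                             logging_mode=LoggingMode.LOG_AND_READ,
                                             operation=LoggingOperation.CREATE_OR_REPLACE)
            data = task.read(number_of_samples_per_channel=N_SAMPLES, timeout=10.0)
            print(len(data), "samples acquired")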

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a data warehouse on Oracle 11g R2 and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice regarding tablespaces and block size? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8 KB), others state that it is crucial and should be as big as possible (64 KB). The other thing is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may lead to a decrease in performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    "We are preparing to implement a data warehouse on Oracle 11g R2 and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that."
    With little experience, the key thing is to avoid big mistakes - don't try to get too clever.
    "The question is: what is the general advice regarding tablespaces and block size?"
    If you need to ask about block sizes, use the default (i.e. 8KB).
    "I did some research and it is hard to find a clear answer."
    But if you get contradictory advice from this forum, how would you decide which bits to follow? A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    "The other thing is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may lead to a decrease in performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have 'old' (read-only) data partitions on separate tablespaces?"
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons. Big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition data so that a large fraction of the data can eventually be made read-only: using tablespaces to mark time-boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary - e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis

  • Maximum audio sample rate and bit depth question

    Anyone worked out what the maximum sample rates and bit depths AppleTV can output are?
    I'm digitising some old LPs and while I suspect I can get away with 48kHz sample rate and 16 bit depth, I'm not sure about 96kHz sample rate or 24bit resolution.
    If I import recordings as AIFFs or WAVs to iTunes it shows the recording parameters in iTunes, but my old Yamaha processor which accepts PCM doesn't show the source data values, though I know it can handle 96kHz 24bit from DVD audio.
    It takes no more time recording at any available sample rates or bit depths, so I might as well maximise an album's recording quality for archiving to DVD/posterity as I only want to do each LP once!
    If AppleTV downsamples however there wouldn't be much point streaming higher rates.
    I wonder how many people out there stream uncompressed audio to AppleTV? With external drives that will hold several hundred uncompressed CD albums, is there any good reason not to these days, when you are playing back via your hi-fi? (I confess most of my music is in MP3 format just because I haven't got round to re-ripping it uncompressed for AppleTV.)
    No doubt there'll be a deluge of comments saying that recording LPs at high quality settings is a waste of time, but some of us still prefer the sound of vinyl over CD...
    AC

    I guess the answer to this question relies on someone having an external digital amp/decoder/processor that can display the source sample rate and bit depth during playback, together with some suitable 'demo' files.
    AC

  • HT5848 What is the sampling rate and codec for iTunes Radio. Is it lossless encoded?

    What is the sample rate and codec for iTunes Radio? Is it lossless encoded?

    I have to agree with you.  There are several forum discussions on bit rate being as high as 256 kbps but I don't see how it could be more than 96 kbps based on the poor sound quality I'm hearing.  I'm comparing it to an internet radio station that is 128 kbps and sounds much better.
    Am I missing something?

  • Sample rate and / or Bit depth problem

    I am in the middle of mastering a tune, but when I come to play it in Sound Forge 8 I get the error message: "One or more playback devices do not support the current Sample rate and / or Bit depth." I am using an Audigy2 Platinum with the ASIO A400 driver, and I'm sure it should be A9000. I can't find the driver update for this; does anyone have a link, or is it something else I should be looking at?

    A400 doesn't have anything to do with a version number. It's related to the resource allocated to your card.
    What kind of source are you trying to play ?

  • PXI-4462 actual sample rate and delay...

    Hi All!
    Are there DAQmx properties that give me the actual sample rate and filter delay? Or must I calculate these values manually, as described in the DSA manual?
    Jury

    Hi Jury,
    I am not sure if I fully understand your question, but there are DAQmx properties that return the actual sample rate; there is not one for filter delay. If you place a DAQmx Timing property node down, the actual sample rate will be returned by selecting Sample Clock » Rate.
    Regards,
    Jason D
    Applications Engineer
    National Instruments
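    For reference, the same coerced value can be read back through the Python nidaqmx wrapper; a minimal sketch, with the device and channel names as placeholders:

        import nidaqmx
        from nidaqmx.constants import AcquisitionType

        with nidaqmx.Task() as task:
            task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")   # placeholder name
            task.timing.cfg_samp_clk_timing(102_400,                # requested rate
                                            sample_mode=AcquisitionType.FINITE,
                                            samps_per_chan=1000)
            # The driver coerces the requested rate; read back the actual value,
            # analogous to the Sample Clock >> Rate property node in LabVIEW.
            print("actual sample rate:", task.timing.samp_clk_rate)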

  • Changing sampling rate and bitrate

    If I drop an .aiff audio file into GarageBand 2 (or iTunes), is there a function that allows me to adjust the audio sampling rate and bitrate?
    When I created this podcast in Adobe Audition, the files were about 10 MB for a 22-minute show.
    When I do it through GB, it's about 26 MB for the same length;
    I just want to tune it down slightly.
    Thanks!

    GB exports uncompressed 44.1 kHz, 16-bit AIFF files. You need to convert them to MP3 or AAC files:
    http://thehangtime.com/gb/gbfaq2.html#converttomp3
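    As a back-of-the-envelope sketch of the sizes involved: uncompressed PCM is sample rate × bytes per sample × channels × seconds, while an MP3/AAC file is roughly bitrate ÷ 8 bytes per second. The 64 kbps figure below is only an assumed example for a spoken-word show:

        # Rough numbers behind the size difference between uncompressed AIFF
        # and a compressed podcast file, for a 22-minute show.
        minutes = 22
        seconds = minutes * 60

        aiff_mono = 44_100 * 2 * 1 * seconds     # 44.1 kHz, 16-bit (2 bytes), mono
        aiff_stereo = 44_100 * 2 * 2 * seconds   # 44.1 kHz, 16-bit, stereo
        mp3_64kbps = 64_000 / 8 * seconds        # assumed 64 kbps spoken-word MP3

        print(f"AIFF mono:    {aiff_mono / 1e6:.0f} MB")
        print(f"AIFF stereo:  {aiff_stereo / 1e6:.0f} MB")
        print(f"MP3 @ 64kbps: {mp3_64kbps / 1e6:.0f} MB")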

  • Multi Track settings (Sample Rate and Bit Rate)

    I'm setting up my multitrack session in 5.5 as 44100 Hz, 16-bit, but when I create it, it says it's 44100 Hz, 32-bit (at the bottom of the multitrack view).  Is there a way I can change this setting?  I've explored the preferences all day long and still can't find any answers.
    Thanks


  • Restore Old LP'S AT 96000 Hz Sample Rate and 24 bit Resolution?

    I restore LP vinyl albums using the above Sample Rate and Resolution.
    Do I have to convert to 44100 Hz and 16-bit before adding them to my iTunes library?

    Ringmaster wrote:
    Thanks for your help!
    I guess I had better keep all of them in this format. It might make a difference if I decide to burn CDs; also, all of my backups will be of the same format.
    Do what works for you, but neither of those two things is an advantage. If you burn audio CDs, they will be downsampled to 44.1 kHz anyway. And as far as backups, iTunes does not care if different songs are in different formats, nor does any other music player that I know of.
    Unless you plan on further mastering or remixing, or you really just like the higher fidelity, there is no obvious reason not to use 44/16.

  • Why LV 6.1 and Nidaq 6.9.3 can't acquire low sampling rate and high frame size

    My platform is LV 6.1, Nidaq 6.9 on win98
    Every time I want to acquire data with a sampling rate of 6 Hz
    and a frame size of 7200, it acquires only a handful of data points (fewer than 10), never 7200.

    Greetings,
    I assume by "frame size" you are referring to the numbers of samples to acquire. Please launch the "Find Examples" browser in LabVIEW 6.1. Open the example "Acquire N Scans.vi." This VI has only 4 inputs. After setting your device and channel you will set number of scans to acquire to 7200 and your scan rate will be 6. Since you're only acquiring at 6Hz and wanting 7200 samples, this VI will take 20 minutes to run. At that point the graph will be updated with your data.
    Regards,
    Justin Britten
    Applications Engineer
    National Instruments

  • Buffer data before chart it and block size

    I hope you can help me with this situation, because I have been stuck on it for two days and I don't think I will see the light any time soon, and time is a scarce resource.
    I want to use an NI DAQCard-AI-16XE-50 (20 kS/s according to the specifications). To acquire data in DASYLab I've been using the OPC DA system, but when I try to chart the signal I get awful results.
    I guess the origin of the problem is that the PC is not powerful enough to generate a chart in real time. Is there a module to buffer the data and then graph it, without using the "Write Data" module, so that I can avoid writing data to disk?
    Another possible cause of the problem is an incorrect block size setting, but in my view a 10 kHz sampling rate and a block size of 4096 are more than enough to acquire a 26 Hz signal (shown in the attached image). If I reduce the block size to 1, the signal shown in the graph is a constant equal to the first value acquired. Why could this happen?
    Thanks in advance for your answers, 
    Attachments:
    data from DAQcard.PNG 95 KB

    Is there someone who can help me?
    I have connected a CN606TC2 to DASYLab 11 using the RS232 cable, and have followed the CN606TC2 instruction manual.
    In the DASYLab RS232 module there are two boxes:
    1) Measurement data request,
    2) Measurement data format...
    What should I write in these boxes so that the DASYLab digital meter module can read values from the CN606TC2?
    To start communication, the command module must send the alert code ASCII [L], hex 4C.
    Commands requesting data from the scanner (CN606TC2):
    • ASCII [A] hex 41 = Zones / Alarms / Scan time
    • ASCII [M] hex 4D = Model / Password / ID# / # of zones
    • ASCII [S] hex 53 = Setpoints
    • ASCII [T] hex 54 = Temperature
    I do not understand the programming and the ASCII codes.
    I sent [T] from the RS232 monitor and DASYLab returned 54.
    I am very grateful for your help
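    As a rough illustration of the command sequence quoted from the manual (alert code 'L', then a request such as 'T'), here is a small pyserial sketch; the port name, baud rate, and read length are assumptions to be replaced with the values from the CN606TC2 manual:

        import serial  # pyserial

        # Placeholder port and framing; use the settings from the CN606TC2 manual.
        with serial.Serial("COM1", baudrate=9600, timeout=1.0) as port:
            port.write(b"L")        # alert code, ASCII 'L' = hex 4C
            port.write(b"T")        # request temperatures, ASCII 'T' = hex 54
            reply = port.read(256)  # read up to 256 bytes of the reply
            print(reply)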

  • NI USB 6211 : sample rate and sample read

    Hello,
    I'm writing because I want to do a continuous acquisition of a signal on an analog input. Connected to this analog input is an accelerometer with a transducer, which returns a voltage (120 mV/g). I built a program based on a LabVIEW example. However, it is very, very slow: when I vary the input voltage, it takes several seconds before the exact value is displayed. An error -200279 appears when the "samples to read" and "sample rate" parameters do not match. I looked at this link:
    http://digital.ni.com/public.nsf/allkb/AB7D4CA85967804586257380006F0E62
    But nothing changes. I am attaching my VI... Regards
    Attachments:
    Test 1.vi 28 KB

    Duplicate thread, resolved here
    Valentin
    Certified TestStand Architect
    Certified LabVIEW Developer
    National Instruments France
    Hands-on introduction to LabVIEW and measurement
    From October 2 to 23, all over France

  • Sample Rate and "Smart Encoding Adjustments"

    Wondering if someone could help me out with this...
    Is there a reason to choose a higher sample rate over a lower one when importing? Does it improve the audio quality? Or should I just put it in the auto setting?
    Also, what does the "smart encoding adjustments" option mean? (In the "custom" settings for mp3 format)
    I'm basically trying to get my music onto my HDD at the top quality possible, so I'm trying to figure all this out.
    Thanks.

    The info below should give you a start on the concepts. Google can find many more facts, opinions, and misconceptions about Lossy vs. Lossless music formats. Way too much information to be listed here. Do several searches with various keywords.
    Song file size is a function of bit rate and song length. Audio quality is a function of bit rate and encoding format. AAC and MP3 formats are considered Lossy, as they sample the target music file and reduce the total size with some reduction of audio quality. Lossless files are considered CD replicas, as they contain all the digital data on the original audio CD. They can be fairly large in comparison to the traditional Lossy file.
    Encoding a music file into a Lossy compression format will strip details from the file. Transcoding from one Lossy compression format to another Lossy format will compound the loss of details from the file. (eg: transcoding a sound file from: AAC to MP3; or MP3 to AAC). The audio degradation becomes more apparent when transcoding files ripped at lower bit rates (less than 192kbps).
    When you burn an AAC file to CD and then re-rip the CD as AAC or MP3, the sound you end up listening to will have gone through a lossy compression process twice. Those losses can add up, taking what were only mild or even unnoticeable deviations from the original sound after the first phase of compression and making those deviations much more noticeable and objectionable. This is especially true if you try to take music at a low bit rate like 128 kbps (what Apple uses for iTMS) and try to compress back down to the same low bit rate.
    The preferred method is to save all audio "masters" in a Lossless audio format such as Apple Lossless, WAV, AIFF or FLAC (or the original CD), and then transcode directly from the Lossless source file to your preferred Lossy format such as MP3 or AAC. This procedure preserves as much of the original audio signal as possible and prevents the compound loss of audio details from the file.
    The generally accepted theory is that AAC/128 sounds as good as, or better than MP3/160 (and possibly even MP3/192). Transcoding your AACs/MP3s will most likely result in noticeable audio quality degradation. But -- test it out for yourself. If you cannot hear the difference, then it may be acceptable. Bear in mind that any improvements &/or upgrades in equipment (iPods, headphones, your ears, etc.) may uncover the additional audio limitations you created at a later date.
    See: Choosing an Audio Format
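    As a small sketch of the workflow recommended above (transcode directly from a lossless master to the lossy listening copy), here is a pydub example; the file names and the 192 kbps bitrate are only placeholders, and pydub needs ffmpeg installed for MP3/AAC export:

        from pydub import AudioSegment  # requires ffmpeg for MP3/AAC export

        master = AudioSegment.from_file("master.aiff")  # lossless source (placeholder name)
        # Encode the listening copies straight from the lossless master,
        # so the audio only goes through lossy compression once.
        master.export("listening_copy.mp3", format="mp3", bitrate="192k")
        master.export("listening_copy.m4a", format="mp4", codec="aac", bitrate="192k")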
