Samples to Read and Rate (Hz)

In the DAQ Assistant window, there are two options for reading samples:
1. 'Samples to Read'
2. 'Rate (Hz)'
Could someone please define these? The context help did not clarify their meaning for me.
An example would also help. Right now I have the settings at:
Samples to Read: 1
Rate (Hz): 200
What does this mean in terms of samples read per second?
Am I recording data at 200 Hz?
Any help would be greatly appreciated.
Cheers,
Oliver

What is your sample type (Continuous, Finite Samples, Single Sample)?
If you are using Continuous, then the sample rate is how often a sample is taken. I'm not sure about the DAQ Assistant, but with the DAQmx Timing VI, the number of samples sets the buffer size.
If you are using Finite Samples, then the sample rate works the same as in Continuous, but the number of samples is how many samples you want in a single acquisition. The DAQ card will not continue to gather samples between reads; it only gathers the defined number of samples with each read.
If you are using Single Sample, then neither of those settings are useful.  The card will just take a single sample and be done.
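To put the original numbers into code, here is a minimal sketch using the NI-DAQmx C API (device and channel names such as "Dev1/ai0" are placeholders; the DAQ Assistant generates the equivalent DAQmx configuration for you). With Continuous Samples, Rate (Hz) = 200, and Samples to Read = 1, the hardware clocks 200 samples per second into the buffer and each read pulls one sample out, so yes, you are acquiring at 200 Hz; you are just reading it one sample at a time:

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle task = 0;
    float64 data[1];
    int32 read = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Rate (Hz) = 200: the hardware sample clock acquires 200 S/s.
       With Continuous Samples, the last argument is only a buffer-size hint. */
    DAQmxCfgSampClkTiming(task, "", 200.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(task);

    /* Samples to Read = 1: each read returns one sample from the buffer,
       so this loop iterates about 200 times per second. */
    for (int i = 0; i < 200; i++) {
        DAQmxReadAnalogF64(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                           data, 1, &read, NULL);
        printf("sample: %f V\n", data[0]);
    }

    DAQmxClearTask(task);
    return 0;
}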
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines

Similar Messages

  • Samples to Read and Rate

    Good afternoon, I am working with an NI 6009 DAQ, which I use to measure the RPM of an induction motor through an optical switch. With a Samples to Read of 10k and a Rate of 20k, the speed reading comes out acceptable; the problem is that I only get two or three data points per second, and I need 20 points/second (in a file). When I change the Samples to Read and the Rate, it takes more data points (up to 10/s), but the measured value is wrong and jumps around a lot.
    Also, I save the measurement data to an Excel file, and it is when saving there that I only get 2 points/second; the card itself is acquiring the data correctly with Samples to Read (10k) and Rate (20k).
    What could I do to get a good measurement and more data points per second?
    Thanks
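    For reference, the arithmetic behind the update rate: each read returns one block of Samples to Read points, so
        readings per second = Rate / Samples to Read
        20,000 / 10,000 = 2 readings/s (what you are seeing)
        20,000 / 1,000  = 20 readings/s (what you need)
    The trade-off is that each reading is then computed from only 1,000 samples (0.05 s of signal), which may be why the faster settings look noisier; averaging a few consecutive readings in software is one possible way to smooth them again.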

    Good day.
    I am attaching my VI and some images in which I explain the problem.
    Many thanks in advance to whoever can help me.
    Attachments:
    Pregunta.docx ‏139 KB
    Tesina2.vi ‏483 KB

  • DAQ Assistant: Can Clock Settings (Samples To Read, Rate) affect signal readings?

    Dear all,
    I'm totally new to LabVIEW, and recently I got confused by the equipment I'm dealing with.
    Technical details:
    The LabVIEW version is 7.1; the computer operating system is Windows XP.
    The equipment has an NI PCI-6220 and a 68-pin connector block to read signals from the equipment.
    There are 4 channels in the DAQ Assistant (2 pressure readings, 2 temperature readings).
    For the first pressure reading, the signal input range is from 4 mA to 20 mA.
    Clock Settings are Samples To Read = 5, Rate (Hz) = 20.
    Description of the problem:
    I use LabVIEW to monitor and record pressure and temperature readings. The LabVIEW configuration was set up by my advisor several years ago. Recently, I found that the pressure reading fluctuated a lot, for example from 5.01 to 5.05 bar within a second. In order to get a stable pressure reading, my advisor suggested changing the "Clock Settings" in the DAQ Assistant from Samples To Read = 5, Rate (Hz) = 20, to Samples To Read = 250, Rate (Hz) = 1000. She believed that since we increased the number of samples and the sampling rate, we would have more data and thus a stable pressure reading.
    At first I got a very stable pressure reading. The last digit (0.01) did not change within 20 seconds. However, after a day the pressure reading somehow became unstable, and even worse than before (the pressure reading fluctuates from 5.01 to 5.30 within a second).
    That is not the worst part. We found that when we set the Clock Settings to Samples To Read = 5, Rate (Hz) = 20, the pressure reading is about 8 bar, but when we set them to Samples To Read = 250, Rate (Hz) = 1000, the pressure reading is about 5 bar. Now we don't even know which pressure reading is correct.
    LabVIEW records a current and transforms it into a pressure reading. My advisor therefore monitored the current reading in LabVIEW, and she found that the current reading changed when she changed the Clock Settings (0.004 A (5 bar) with Samples To Read = 5, Rate (Hz) = 20; 0.005 A (8 bar) with Samples To Read = 250, Rate (Hz) = 1000).
    Since we only changed the number of samples and the sampling rate, the average readings should still be similar. However, they are not, and that is what confuses me.
    My questions are: can the Clock Settings in the DAQ Assistant affect signal readings? If so, how? What exactly do "Samples To Read" and "Rate (Hz)" do? How should we choose these parameters to get the true pressure readings?
    Best regards,
    Cheng-Yu
    Energy and Mineral Engineering
    the Pennsylvania State University

    A 6220 cannot read a current; it can only read a voltage, so you probably have (or should have) a resistor across the voltage input (normally 50 Ohm for a 0-20 mA signal).
    My first step would be to measure this voltage with a multimeter so you know what the actual voltage should be.
    Then I would read that same voltage with MAX (Measurement & Automation Explorer) to make sure you get the right value.
    Now, about the changing voltage/current/pressure: how have you terminated the other signals? Have you provided good earthing?
    If you sample at a high frequency (1 kHz) and perform an FFT on the acquired data, I can imagine a dominant 50 or 60 Hz component (depending on where you live) in the signal that might be causing your problem.
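    One way to act on this, sketched with the DAQmx C API (assuming `data` and `read` come from a DAQmxReadAnalogF64 call on the pressure channel): average each block of Samples to Read values into a single displayed reading. This suppresses much of any mains pickup but will not remove a genuine offset; an offset points back at termination or earthing.

    float64 sum = 0.0;
    for (int32 i = 0; i < read; i++)
        sum += data[i];                     /* data[] holds one read's samples */
    float64 meanVolts = sum / read;         /* one stable reading per chunk */
    float64 currentAmps = meanVolts / 50.0; /* I = V/R, if a 50 Ohm shunt is used */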
    Ton
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!

  • Trouble reading and writing an existing XML file!

    Hi everyone!
    I'm trying to save the user's high score in my game using an XML file. My XML file has the element <best_score>0</best_score>,
    and my code is:
    XElement best = XElement.Load("Assets/best.xml");
    IEnumerable<XElement> loc = best.Elements("best_score");
    foreach (XElement elem in loc)
    {
        int check = Convert.ToInt32(elem.Value);
        int score_user = Convert.ToInt32(scores.ToString());
        if (score_user >= check)
        {
            hight_score.Text = scores.ToString();
            elem.Value = scores.ToString();
            // what code to save this xml file after modifying it?
        }
        else
        {
            list_box.Text = check.ToString();
        }
    }
    I don't know what method can save my XML file after updating the user's high score in this case.
    Does anyone have an idea to solve this for me? Please, thank you!
    Sorry about my English!

    Hi,
    It is not recommended to update a file within the project, so you need to store it in IsolatedStorage.
    Here is a sample to read and write a file in IsolatedStorage:
    XDocument xDoc;
    var file = Util.ReadFile("XMLFile.xml");
    if (file != null)
    {
        xDoc = XDocument.Parse(file);
        var bestScore = xDoc.Element("best_score").Value;
        // do calculation
        xDoc.Element("best_score").Value = "200";
        Util.SaveFile("XMLFile.xml", xDoc.ToString());
    }
    For the above code to work, you need to have a file named "XMLFile.xml" in your IsolatedStorage, with data such as "<best_score>200</best_score>".
    To read and write the file:
    public static class Util
    {
        public static void SaveFile(string filename, string data)
        {
            try
            {
                using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                using (var stream = new IsolatedStorageFileStream(filename, FileMode.Create, FileAccess.ReadWrite, store))
                {
                    StreamWriter writer = new StreamWriter(stream);
                    writer.Write(data);
                    writer.Close();
                }
            }
            catch (Exception)
            {
                Debug.WriteLine("Couldn't save the file.");
            }
        }

        public static string ReadFile(string filename)
        {
            try
            {
                if (!IsolatedStorageFile.GetUserStoreForApplication().FileExists(filename))
                    return null;
                using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                using (var stream = new IsolatedStorageFileStream(filename, FileMode.Open, FileAccess.ReadWrite, store))
                {
                    StreamReader stmReader = new StreamReader(stream);
                    string data = stmReader.ReadToEnd();
                    stmReader.Close();
                    return data;
                }
            }
            catch (Exception)
            {
                return null;
            }
        }
    }
    Pradeep AJ

  • Conflict between the saved data and the sampling rate and samples to read using PXI 6070e

    Hello, I am using a PXI 6070E to read an analog voltage. I was sampling at 6.6 MHz and the samples to read were 10, so it should acquire 10 points every 1.5 µs. The x-axis of the graph on the front panel was showing an ns and µs scale, which I think is because of the fast sampling. I use the "Write to Measurement File" block to save the data. However, the data was saved every 0.4 seconds, as 35 points at the beginning of each cycle (e.g. 35 points at 0.4 s, 35 at 0.8 s, and so on), with no data in between. Can anyone explain how there are 35 points every cycle? I could not find the relation between the sampling rate and samples to read and the 35 points every 0.4 seconds!
    Another thing: do I need to add a filter after acquiring the data (after the DAQ Assistant block)? Is there an anti-aliasing filter built into the PXI 6070E?
    Thanks for the help in advance,
    Alaeddin

    I'm not seeing anything that points to this issue. Your DAQ is set to acquire continuously. I'm not sure this is really what you want, because your DAQ buffer will keep overwriting. You probably just want to set it to Read N Samples.
    I'm not a fan of using the express VIs.  And since you are writing to a TDMS file, I would use the Stream to TDMS option in DAQmx.  If you use the LabVIEW Example Finder, search for "TDMS Log" for a list of some good examples.
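    For reference, in LabVIEW that is the DAQmx Configure Logging VI; a rough sketch of the equivalent DAQmx C API call, assuming a configured analog-input task handle `task` and placeholder file path and group name:

    DAQmxConfigureLogging(task, "C:\\data\\acq.tdms",
                          DAQmx_Val_LogAndRead,    /* stream to disk and still return data to reads */
                          "VoltageGroup",          /* TDMS group name (placeholder) */
                          DAQmx_Val_OpenOrCreate);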
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • Samples per channel and samples to read

    Hello everybody
    I am new to LabVIEW and I am having some difficulty with something.
    I don't know exactly what the difference is between "samples per channel" and "samples to read". I believe that samples per channel is the size of the buffer, which is bigger than the sample rate, but I don't know what "samples to read" is.
    I've tested different values of samples per channel and samples to read. Sometimes I get an error and sometimes not, and I would like to know why. If you have an example that would help me understand better, that would be great.
    I really need to understand this part for my project.
    Thanks for your help
    Romaric GIBERT

    Hi Roro,
    As you mentioned, when acquiring continuous samples you can specify the sample buffer size by wiring a value to the "samples per channel" input of the Timing VI. The "number of samples per channel" input on the Read VI, which automatically names a created control/constant "samples to read", specifies the number of samples you wish to pull out of the buffer in one go when reading multiple (N) samples. This link may provide a bit more clarification. I have also attached a good example from the NI Example Finder which you may find useful to explore. I'm assuming you are using the DAQmx driver set, so please let me know if this is not the case, but the same principles should apply either way.
    This means that when sampling at a given rate, you need to ensure you are pulling data out in big enough 'chunks' to prevent the buffer from overflowing (which may well be causing the error you are seeing). Conversely, if your sampling rate is slow and your Read VI has to wait for the specified number of samples to become available, it may throw a timeout error. You can avoid this by increasing your sampling rate, reducing your samples to read, or increasing the timeout specified at the Read VI input (-1 means it will wait indefinitely).
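    The same principle, sketched with the DAQmx C API (numbers are illustrative; `task` is assumed to be a configured analog-input task): the samples-per-channel value sizes the buffer, and each read drains a chunk of it.

    /* 10 kS/s continuous with a 5-second (50,000-sample) buffer; reading
       1,000 samples per loop iteration drains the buffer 10 times per
       second, comfortably faster than it fills. */
    DAQmxCfgSampClkTiming(task, "", 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 50000);
    DAQmxCfgInputBuffer(task, 50000);   /* optional explicit override */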
    Let me know if this helps and how you get on.
    All the best.
    Paul
    http://www.paulharris.engineering
    Attachments:
    Cont Acq&Graph Voltage-Int Clk.vi ‏27 KB

  • Buffered event counting. Why can't I explicitly sequence generating the Sample Clock Pulse and reading the counters?

    At irregular occasions I need to grab counts from several counters, and buffering the counts must be done simultaneously for all counters. I'm modeling my approach after zone.ni.com/devzone/cda/tut/p/id/5404 which someone kindly pointed out in an earlier thread. However, that example only uses one counter, and you can't test the synchronization with only one counter, so I am using two counters configured the same way, and they're wired to a single benchtop signal generator (for example at 300 kHz).
    What I want to do, I can test in a loop with a somewhat random wait in it. I want to drive a hardware digital output line high for a few ms and then low again. The hardware line is physically connected to terminals for my timing vi's Sample Clock Source and so will cause them to buffer their counts for later reading. After I pulse this line, when I know new good buffered counts await me, I want to read both my counters. If their bufferings are simultaneous, then each counter will have counted the same number of additional counts since the last loop iteration, which I can check by subtracting the last value sitting in a shift register and then subtracting the two "additional counts" values and displaying this difference as "Diff". It should always be 0, or occasionally +1 followed immediately by -1, or else the reverse, because buffering and a count could happen practically at the same moment.
    When I do this using a flat sequence to control the relative timing of these steps, so the read happens after the pulse, the counters often time out and everything dies. The lengths of time before, during, and after the pulse, and the timeout value for the read vi, and the size of the buffer and various other things, don't seem to change this, even if I make things so long I could do the counting myself holding a clipboard as my buffer. I've attached AfterPulse.vi to illustrate this. If I get 3 or 10 or so iterations before it dies, I observe Diff = 0; at least that much is good.
    When I use two flat sequences running in parallel inside my test loop, one to control the pulse timing, and the other to read the counters and do things with their results, it seems to work. In fact, Diff is always 0 or very occasionally the +/- 1 sequence. But in this case there is nothing controlling the relative timing such that the counters only get read after the pulse fires, though the results seem to show that this is true. I think the reads should be indeterminate with respect to the pulses, which would be unreliable. I don't know why it's working and can't expect it to work in other environments, can I? Moreover, if I set some of the pulse timing numbers to 1 or 2 or 5 ms, timeouts start happening again, too. So I think I have a workaround that I don't understand, shouldn't work, and shouldn't be trusted. See SeparateSequence.vi for this one.
    I also tried other versions of the well-defined, single sequence vi, moving the counter reads to different sequence frames so that they occur with the Sample Clock Source's rising edge, or while it is high, or with the falling edge, and they also often time out. I'll post these if anyone likes but can't post now due to the attachment limit.
    Here's an odd, unexpected observation: I have to sequence the reads of the counters to occur before I use the results I read, or else many of the cycles of this combine a new count from one counter with the one-back count from the other counter, and Diff takes on values like the number of counts in a loop. I thought the dataflow principle would dictate that current values would get used, but apparently not so. Sequencing the calculations to happen after the reads fixes this. Any idea why?
    So, why am I not succeeding in taking proper control of the sequence of these events?
    Thanks!!!
    Attachments:
    AfterPulse.vi ‏51 KB
    InSeparateSequence.vi ‏49 KB

    Kevin, thanks for all the work.
    >Have you run with the little execution highlighting lightbulb on? -Yes. In versions of this where there is no enforced timing between the counter and the digital line, and there's a delay inserted before the digital line, it works. There are nearly simultaneous starts on two tracks. Execution proceeds directly along the task wire to the counter. Meanwhile, the execution along the task wire to the digital high gets delayed. Then, when the digital high fires, the counter completes its task, and execution proceeds downstream from the counter. Note, I do have to set the timeout on the counter longer, because the vi runs so slowly when it's painting its progress along the wires. If there is any timing relationship enforced between the counter and the digital transition, it doesn't work. It appears to me that to read a counter, you have to ask it for a result, then drive the line high, and then receive the result, and execution inside the counter has to be ongoing during the rising line edge.
    >from what I remember, there isn't much to it.  There really aren't many candidate places for trouble.  A pulse is generated with DIO, then a single sample is read from each counter.  -Yup, you got it. This should be trivial.
    >A timeout means either that the pulse isn't generated or that the counter tasks don't receive it. - Or it could mean that the counter task must be in the middle of executing when the rising edge of the pulse arrives. Certainly the highlighted execution indicates that. Making a broken vi run by cutting the error wires that sequence the counter read relative to the pulse also seems to support that.
    >Have you verified that the digital pulse happens using a scope? -Verified in some versions by running another loop watching a digital input, and lighting an indicator, or recording how many times the line goes high, etc. Also, in your vi, with highlighting, if I delete the error wire from the last digital output to the first counter to allow parallel execution, I see the counter execution start before the rising edge, and complete when the line high vi executes. Also, if I use separate loops to drive the line high and to read the counter, it works (see TwoLoops.vi or see the screenshot of the block diagram attached below so you don't need a LV box). I could go sign out a scope, but I think it's obvious the line is pulsing given that all these things work.
    >Wait!  I think that's it!  If I recall correctly, you're generating the digital pulse on port0/line0...  On a 6259, the lines of port 0 are only for correlated DIO and do not map to PFI. -But I'm not using internal connections, I actually physically wired P0L1 (pin 66) to PFI0 (pin 73). It was port0/line1, by the way. And when running some of these vi's, I also physically jumper this connection to port0/line2 as an analog input to watch it. And, again, the pulse does cause the counter to operate, so it clearly connects - it just doesn't operate the way I think it is described operating.
    For what it's worth, there's another mystery. Some of the docs seem to say that the pulse has to be applied to the counter gate terminal, rather than to the line associated with the sample clock source on the timing vi. I have tried combinations of counter gate and or sample clock source and concluded it seems like the sample clock source is the terminal that matters, and it's what I'm using lately, but for example the document I cited, "Buffered Event Counting", from last September, says "It uses both the source and gate of a counter for its operation. The active edges on the gate of a counter is used to latch the current count register value in a hardware register which is then transferred via Direct Memory Access...". I may go a round of trying those combinations with the latest vi's we've discussed.
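    For comparison, here is a hedged sketch of this configuration in the DAQmx C API (device, counter, and terminal names are placeholders for the 6259 wiring described above, with the DO line physically wired to PFI0): both counter tasks latch on the same external sample clock, so each pulse snapshots both counts at the same instant.

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle ctr[2];
        const char *chans[2] = { "Dev1/ctr0", "Dev1/ctr1" };
        uInt32 count;
        int32 read;

        for (int i = 0; i < 2; i++) {
            DAQmxCreateTask("", &ctr[i]);
            DAQmxCreateCICountEdgesChan(ctr[i], chans[i], "", DAQmx_Val_Rising,
                                        0, DAQmx_Val_CountUp);
            /* Each rising edge on PFI0 latches the current count into the
               task buffer; for an external clock the rate argument is only
               a buffer-sizing hint. */
            DAQmxCfgSampClkTiming(ctr[i], "/Dev1/PFI0", 1000.0, DAQmx_Val_Rising,
                                  DAQmx_Val_ContSamps, 1000);
            DAQmxStartTask(ctr[i]);
        }

        /* ... pulse the DO line wired to PFI0 high, then low ... */

        /* Both latched samples correspond to the same clock edge, so the
           increments since the previous read should agree. */
        for (int i = 0; i < 2; i++)
            DAQmxReadCounterU32(ctr[i], 1, 10.0, &count, 1, &read, NULL);

        for (int i = 0; i < 2; i++)
            DAQmxClearTask(ctr[i]);
        return 0;
    }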
    Attachments:
    NestedSequences.png ‏26 KB

  • Sysgen: How to read the input port data type, width and rate dynamically in a masked subsystem?

    Hello everybody,
         I am designing a general purpose block in system generator. I pass the user parameters to the block through masking it. Some user parameters can change the block configuration. The input port data type, width and rate can also affect the block configuration.
         The problem is that these values (input port data type, width, and rate) are subject to change, so I should read them dynamically and then change the block configuration by programming the "Initialization Commands" field. But unfortunately there is no straightforward way to read the input port information.
         There are some methods available in, for example, the "Black Box"; these are:
    input_width = this_block.port('din').width;
    input_rate = this_block.port('din').rate;
    But these methods are not applicable to a masked subsystem.
    I have tried other ways also. You can find them below. None of them worked.
    Does anybody know how can I solve this problem?
    Other ways I tried:
    1)
    design_name([],[],[],'compile')                                       
    q=get_param(gcb,'PortHandles');
    get_param(q.Inport,'CompiledPortDataType')
    get_param(q.Inport,'CompiledPortWidth')
    get_param(q.Inport,'CompiledPortDimensions')
    design_name([],[],[],'term')
    2)
    ssGetInputPortDataType
    3)
    ts = Simulink.Block.getSampleTimes([gcb '/Input'])
     

    Today we rely on Simulink to perform parameterization of your designs in two ways:
    Parameterizable subsystems and blocks: parameters themselves can be MATLAB expressions that need to be evaluated, for which we need the MATLAB interpreter.
    The very useful rate and type propagation of Simulink compilation, which allows us to specify types and rates in one location and have them systematically propagated to all blocks.
    To truly make the HDL netlist that is generated from SysGen parameterizable, we would have to implement some of this capability in the HDL netlist itself by:
    Using generics (VHDL) or parameters (Verilog): we would have to capture the bit-width (type) propagation through levels of hierarchy and finally parameterize the IP itself based on this value.
    Since the IP itself does not have this capability through generics, we would have to package a separate Tcl script that updates the IP parameterization appropriately in response to top-level (or GUI) parameters.
    Interpreting MATLAB expressions and translating them into VHDL/Verilog expressions (alternatively, Tcl expressions for the IP). In Simulink, mask parameters can be passed from one level to the next, and the parameterization of a block can be composed of MATLAB expressions using variables from ancestor masks and the MATLAB interpreter, so we would need to somehow capture that as well.
     

  • How to use an NI-6008 and build a four-channel data acquisition at a rate of 250 samples per channel and display all the data in a waveform chart

    How to use an NI-6008 and build a four-channel data acquisition at a rate of 250 samples per channel and display all the data in a waveform chart?
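    A minimal sketch of one way to do this, using the DAQmx C API for concreteness ("Dev1" is a placeholder for the device name shown in MAX; in LabVIEW the same settings go into the DAQ Assistant or the DAQmx Timing and Read VIs, with the data wired to a waveform chart):

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task;
        float64 data[4 * 250];   /* one second of data for 4 channels */
        int32 read;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0:3", "", DAQmx_Val_RSE,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        /* 250 S/s per channel, continuous (1,000 S/s total, within the
           USB-6008's aggregate limit). */
        DAQmxCfgSampClkTiming(task, "", 250.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(task);

        for (int i = 0; i < 10; i++) {   /* the chart-update loop */
            DAQmxReadAnalogF64(task, 250, 10.0, DAQmx_Val_GroupByChannel,
                               data, 4 * 250, &read, NULL);
            /* append `data` to the waveform chart / process it here */
        }

        DAQmxClearTask(task);
        return 0;
    }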

    Hi kdm,
    please stick in one thread for the same topic!
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • Problem with an NI USB-6211: sample rate and samples to read

    Hello,
    I am writing because I want to do a continuous acquisition of a signal from an analog input. Connected to this analog input are an accelerometer and a transducer, which returns a voltage (120 mV/g). I built a program from a LabVIEW example. However, it is very, very slow: when I vary the input voltage, it takes several seconds before the exact value is displayed. An error -200279 is shown when the samples-to-read and sample-rate parameters do not match. I looked at this link:
    http://digital.ni.com/public.nsf/allkb/AB7D4CA85967804586257380006F0E62
    But nothing changes. I am attaching my VI. Regards
    Attachments:
    Test 1.vi ‏28 KB

    Hello Geoff54,
    You are on the international forum, so if you want someone to answer, you should ask your question in English.
    Otherwise, there is a French-language forum: French Forums.
    The error you are getting comes from the fact that the PC buffer fills up with the data you acquire, but you are not emptying it fast enough; when the buffer is full, it returns error -200279.
    To fix this, you need to empty the buffer faster.
    You can do that by increasing "samples to read" to more than 1000, or by leaving the default value, which is -1.
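    Expressed with the DAQmx C API for illustration (assuming a running continuous task `task` and a sufficiently large `data` array), the -1 default corresponds to DAQmx_Val_Auto, which reads whatever is currently in the buffer:

    float64 data[100000];
    int32 read;
    /* Returns all samples currently buffered, so the buffer cannot fill
       up and raise error -200279. */
    DAQmxReadAnalogF64(task, DAQmx_Val_Auto, 10.0, DAQmx_Val_GroupByChannel,
                       data, 100000, &read, NULL);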
    Have a good day,
    Valentin
    Certified TestStand Architect
    Certified LabVIEW Developer
    National Instruments France

  • Sampling a local variable and synchronizing with DAQmx

    Hello, 
    I made a small change in the setup I used with LabVIEW, and now that I want to change the code I'm having a rather complicated problem.
    In my old setup I was measuring three variables: x and y with a QPD, and the power of a laser with a power detector. I was using DAQmx and getting a matrix with three columns of n (sample rate) values. Now, for various reasons, I had to take out the second detector. I still want to build the same matrix as before, but instead of the measured laser power values I want to put in the theoretical values (they are in a local variable), since I cannot measure them. The problem is that this local variable generally changes during the DAQmx acquisition time, and I would need to sample it at the same rate as I acquire the data from the DAQ and then combine them all. How could I sample this variable and attach it to my DAQ results? DAQmx doesn't accept local variables.
    Thanks

    A local variable is not something standalone; it is always associated with a control or indicator. How is it updated?
    From your description, it is not clear what you are doing. Can you show us some code instead?
    (Also, be more clear when using acronyms. QPD could mean many things.)
    LabVIEW Champion . Do more with less code and in less time .

  • Disk Transfer (reads and writes) Latency is Too High

    I keep getting this error:
    The Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded.
    I got these errors on the following servers:
    Active Directory
    SQL01 (I have 2 SQL servers, clustered)
    CAS03 (4 CAS servers, load-balanced)
    HUB01
    MBX02 (clustered)
    A little info on our environment:
    * Using SAN storage.
    * Disks are new and working fine.
    * The servers have good hardware (16-32 GB RAM; Xeon or quad-core, etc.).
    I keep getting these notifications every day. I searched the internet and found the cause to be one of two things:
    1) a disk hardware issue (uncommon, rarely the cause)
    2) the queue time on the hard disk (the time to write to the hard disk)
    If anyone can assist me with the following:
    1) Is this a serious issue that will affect our environment?
    2) Is it a good idea to change the monitoring interval to 10 minutes (instead of the default 5)?
    3) Is there any solution (to prevent these annoying, possibly useless notifications)?
    4) What is the cause of this queue delay? FYI, sometimes this happens when nothing and no one is using the server (i.e. the server is almost idle).
    Regards
    Regards

    The problem is.... exactly what the knowledge of the alert says is wrong. It is very simple: your disk latency is too high at times.
    This is likely due to overloading the capabilities of the disk; during peak times, the disk is underperforming. Or it could be that occasionally, due to the design of your disks, you get a very large spike in disk latency, and this trips the "average" counter. You could change this monitor to a consecutive-sample threshold monitor, and that would likely quiet it down, but only by analyzing a perfmon capture of several disks over 24 hours would you be able to determine specifically what is going on.
    SCOM did exactly what it is supposed to do: it alerted you, proactively, to the possible existence of an issue. Now you, using the knowledge already in the alert, use that information to investigate further and determine the corrective action to take.
    Summary
    The Avg. Disk sec/Transfer (LogicalDisk\Avg. Disk sec/Transfer) counter for the logical disk has exceeded the threshold. The logical disk, and possibly overall system performance, may significantly diminish, resulting in poor operating system and application performance.
    The Avg. Disk sec/Transfer counter measures the average time of the disk transfer requests (I/O request packets (IRPs)) executed on a specific logical disk. This is one measure of storage subsystem throughput.
    Causes
    A high Avg. Disk sec/Transfer performance counter value may occur due to a burst of disk transfer requests by either an operating system or application.
    Resolutions
    To increase the available storage subsystem throughput for this logical disk, do one or more of the following:
    • Upgrade the controllers or disk drives.
    • Switch from RAID-5 to RAID-0+1.
    • Increase the number of actual spindles.
    Be sure to set this threshold value appropriately for your specific storage hardware. The threshold value will vary according to the disk's underlying storage subsystem. For example, the "disk" might be a single spindle or a large disk array. You can use MOM overrides to define exception thresholds, which can be applied to specific computers or entire computer groups.
    Additional Information
    The Avg. Disk sec/Transfer counter is useful in gathering throughput data. If the average time is long enough, you can analyze a histogram of the array's response to specific loads (queues, request sizes, and so on). If possible, you should observe workloads separately.
    You can use throughput metrics to determine:
    • The behavior of a workload running on a given host system. You can track the workload's requirements for disk transfer requests over time. Characterization of workloads is an important part of performance analysis and capacity planning.
    • The peak and sustainable levels of performance that are provided by a given storage subsystem. A workload can be used to artificially or naturally push a storage subsystem (in this case, a given logical disk) to its limits. Determining these limits provides useful configuration information for system designers and administrators.
    However, without thorough knowledge of the underlying storage subsystem of the logical disk (for example, knowing whether it is a single spindle or a massive disk array), it can be difficult to provide an optimized, one-size-fits-all threshold value. You must also consider the Avg. Disk sec/Transfer counter in conjunction with other transfer request characteristics (for example, request size and randomness/sequentiality) and the equivalent counters for write disk requests.
    If the Avg. Disk sec/Transfer counter is tracked over time and it increases with the intensity of the workloads driving the transfer requests, it is reasonable to suspect that the logical disk is saturated if throughput does not increase and the user experiences degraded system throughput.
    For more information about storage architecture and driver support, see the Storage - Architecture and Driver Support Web site at
    http://go.microsoft.com/fwlink/?LinkId=26156.

  • Urgent: please help. How to solve the time delay during buffer read and write using VC++

    I need to continuously acquire data from a DAQmx card, write it into a file, and at the same time correlate (in terms of time) the data with signals from other instruments. The current problem is that there is a time delay when reading and writing data into the buffer, which causes misalignment of the data from the multiple instruments. Is there a way to eliminate the delay? Or is there a way to mark the time of the data acquisition in the buffer? If I know the starting time (e.g. 0) of the data acquisition and the sampling rate (e.g. 1 kHz), can I simply add 1 ms to each data sample in the buffer? The current code is shown below.
    void DataCollectionWin::ConnectDAQ()
    {
        DAQmxErrChk(DAQmxCreateTask("", &taskHandle));
        DAQmxErrChk(DAQmxCreateAIVoltageChan(taskHandle, "Dev1/ai0,Dev1/ai1,Dev1/ai2,Dev1/ai3,Dev1/ai4,Dev1/ai5,Dev1/ai16,Dev1/ai17,Dev1/ai18,Dev1/ai19,Dev1/ai20,Dev1/ai21,Dev1/ai6,Dev1/ai7,Dev1/ai22", "", DAQmx_Val_Cfg_Default, -10.0, 10.0, DAQmx_Val_Volts, NULL));
        DAQmxErrChk(DAQmxCfgSampClkTiming(taskHandle, "", 1000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 60000));
        // Fire EveryNCallback every 50 samples acquired, to reduce the time delay.
        DAQmxErrChk(DAQmxRegisterEveryNSamplesEvent(taskHandle, DAQmx_Val_Acquired_Into_Buffer, 50, 0, EveryNCallback, NULL));
        DAQmxErrChk(DAQmxRegisterDoneEvent(taskHandle, 0, DoneCallback, NULL));
        DAQmxErrChk(DAQmxStartTask(taskHandle));
    }

    int32 CVICALLBACK EveryNCallback(TaskHandle taskHandle, int32 everyNsamplesEventType, uInt32 nSamples, void *callbackData)
    {
        // 15 channels, 50 scans per event.
        DAQmxErrChk(DAQmxReadAnalogF64(taskHandle, 50, 10.0, DAQmx_Val_GroupByScanNumber, data, 50 * 15, &read, NULL));
        SetEvent(hEvent);
        // Signals from other instruments that need to be correlated with the data from the DAQ card.
        l_usstatus_e[0] = g_usstatus[0];
        l_optstatus_e[0] = g_optstatus[0];
        if (read > 0)   // write data into the file
        {
            for (i = 0; i < read; i++)
            {
                fprintf(datafile, "%c\t", l_usstatus_s[0]);
                fprintf(datafile, "%c\t", l_usstatus_e[0]);
                fprintf(datafile, "%c\t", l_optstatus_s[0]);
                fprintf(datafile, "%c\t", l_optstatus_e[0]);
                fprintf(datafile, "%.2f\t", data[15 * i]);      // channel 0
                fprintf(datafile, "%.2f\t", data[15 * i + 1]);  // channel 1
                fprintf(datafile, "%.2f\t", data[15 * i + 2]);  // channel 2
            }
        }
        return 0;
    }
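    Because the sample clock is hardware-timed, the spacing between samples is exactly 1/rate, so the idea of adding 1 ms per sample is sound. A hedged sketch of how the callback above could derive a per-sample time (the running counter is an addition for illustration, not part of the posted code):

    static uInt64 totalSamps = 0;       /* running sample count across callbacks */
    const double rate = 1000.0;         /* must match DAQmxCfgSampClkTiming */
    for (int32 i = 0; i < read; i++) {
        double t = (double)totalSamps / rate;   /* seconds since the task started */
        fprintf(datafile, "%.6f\t%.2f\n", t, data[15 * i]);
        totalSamps++;
    }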

    Hello kgy,
    It is a bit of a judgment call. You should just choose the board that you think has the most to do with your issue. For example, this issue was much more focused on setting up your data acquisition task than the Measurement Studio environment/tools, so the MultifunctionDAQ board would have been the best place for it. As for moving your post to another board, I do not believe that is possible.
    Regards,
    Dan King

  • Samples to read option in DAQ

    Hello,
    I am new to signal processing and confused about the DAQ Assistant option "samples to read".
    What I understand so far:
    The sampling rate indicates the samples read per second; I ran my VI for a second and got 10k samples when my sampling rate was 10k. The frequency resolution also depends on this sampling rate and on Samples to Read.
    My settings in the DAQ Assistant are:
    continuous acquisition, Samples to Read = 1k, sampling rate = 10k. I ran my VI for exactly one second and got 10k samples, but I am confused about "Samples to Read". What is it exactly?
    Thank you.
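    The arithmetic that usually clears this up: in continuous mode the loop iterates once per chunk of Samples to Read, so
        loop iterations per second = Rate / Samples to Read = 10,000 / 1,000 = 10
    Each iteration returns 1,000 samples, and 10 iterations × 1,000 samples = 10,000 samples in a one-second run, which matches what you observed. The board samples at the full 10 kHz the whole time; Samples to Read only sets how many samples each read pulls out of the buffer.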

    You can't write "nothing" in that box; the value must be 1 or greater.
    It is a bit confusing, because in Continuous mode it really isn't "Samples to Read": it is a range of buffer sizes, as described in the previous reply.
    Check out the Help >>> Buffer Size
    How Is Buffer Size Determined?
    Input Tasks
    If your acquisition is finite (sample mode on the Timing function/VI set to Finite Samples), NI-DAQmx allocates a buffer equal in size to the value of the samples per channel attribute/property. For example, if you specify samples per channel of 1,000 samples and your application uses two channels, the buffer size would be 2,000 samples. Thus, the buffer is exactly big enough to hold all the samples you want to acquire.
    If the acquisition is continuous (sample mode on the Timing function/VI set to Continuous Samples), NI-DAQmx allocates a buffer equal in size to the value of the samples per channel attribute/property, unless that value is less than the value listed in the following table. If the value of the samples per channel attribute/property is less than the value in the table, NI-DAQmx uses the value in the table.
    Sample Rate            Buffer Size
    No rate specified      10 kS
    0–100 S/s              1 kS
    100–10,000 S/s         10 kS
    10,000–1,000,000 S/s   100 kS
    >1,000,000 S/s         1 MS
    Note  For performance reasons, the default buffer size for continuous acquisitions differs slightly when logging is enabled.
    You can override the default buffer size by calling the Input Buffer Config function/VI.
    NI-DAQmx does not create a buffer when the sample mode on the Timing function/VI is set to hardware-timed single point.
    Note  Using very large buffers may result in diminished system performance due to excessive reading and writing between memory and the hard disk. Reducing the size of the buffer or adding more memory to the system can reduce the severity of these problems.
    Richard

  • PLEASE can an AE from NI take a look at my problem. Sound input reads behave in a strange manner when the buffer size is larger than 2X the number of samples to read.

    On my computer I have discovered some strange behavior when reading data from the sound card. When the buffer size is 2x the samples to read, everything is as expected. But since I read the sound card 10 times per second, I feel a 0.2-second buffer is too small. I am using XP, and XP is not an RTOS, so with a buffer set to 0.2 seconds I may lose data. Therefore I set the buffer size (the "number samples/ch" input on Sound Input Configure.vi) to be in the range of 2 seconds. The result is that when reading from Sound Input.vi, a read often takes more than 0.1 second; on my computer it is often 500 ms. Then the next 5 reads follow with almost zero interval. I do not lose data, but on my front panel the graphs look like a very early silent movie. This error was introduced in LabVIEW 8.x. To be honest, I think the LabVIEW 7.x sound system was much better in many ways.
    But before I point any finger at NI, other people have to verify the behavior I experience. I have made an example showing this error. It is a modified version of the "Continuous Sound Input.vi" example. When the "buffer in seconds" control is set to 0.2, everything works OK. Changing this to a larger number will produce the hiccup mentioned above; the larger the number in this control, the larger the hiccup. Is there any way to fix this? My solution up to now has been to use free third-party software (http://www.zeitnitz.de/Christian/index.php?sel=waveio), but I guess it will soon be outdated. It may not work with newer Windows versions.
    Any help at all will be appreciated.
    And yes, I have the most up-to-date version of DirectX. I also see this in LabVIEW 2009, of which I have a trial version. The VI I have made is in 8.6.
    Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
    (Sorry no Labview "brag list" so far)
    Attachments:
    Continuous Sound Input with timing.vi ‏23 KB

    macaba wrote:
    If you take a moving average of the 0.2 s buffer vs. the 3 s buffer at an update rate of 10, then they are the same (just under 100 ms), so the average refresh rate is the same. I agree it is odd behaviour that the time between sound reads goes to zero quite a lot and then takes a long time once in a while (presumably to fill the buffer).
    I guess it goes to zero because it is reading data from the buffer: it does not have to wait for data from the sound card. The mysterious thing is the periodic delay. You are also correct in saying that the average timing is correct. And in my application I have no data loss.
    If you search for sound in this forum, you will find that many people have reported trouble with the sound system.
    Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
    (Sorry no Labview "brag list" so far)
