Time loop accuracy

Hi,
I have some problems with the timed loop structure. I am trying to generate digital pulses with widths down to 50 microseconds. I have a real-time OS on a PXI-8176 RT controller. For only one digital channel this is not a problem, but the more channels I use in the timed loop, the worse it gets. Is there any solution? What is the accuracy of timed loops?
Thanks
Tim

Hello Tim,
please post an example of what you are trying to do; that will make it easier to understand your problem.
On the other hand, the behaviour you describe sounds plausible:
the more channels you try to handle, the higher the system load will be ...
Regards,
Thomas.

Similar Messages

  • Processing in Time Loop with microseconds.

    Hello,
    I am building an application in LabVIEW where I capture frames from a camera and process the images. The camera is connected over Camera Link to an NI PCIe-1433 board in the PC. The camera can run at a frame rate of 2000 fps. To process every frame in the PC I need a timed loop that works in microseconds. The problem is that this timed loop only works with millisecond resolution in Windows. If I use the Real-Time module, can I configure this timed loop in microseconds, or is there another solution?
    Thanks

    Thanks,
    The purpose of this is to control the application. In this timed loop I acquire the frame and process it. If I configure the timed loop with 1 ms I lose frames, and if I use a normal (free-running) loop I can get the same frame several times. The frame rate can change, and when it does I reconfigure the timed loop so that all captures are controlled: none are lost and none are repeated.
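    One common way around the millisecond software timer is to let the frame grabber pace the loop instead: block on the driver's "get next buffer" call and hand each frame to a separate processing loop through a queue, so no frame is lost or read twice regardless of the exact frame rate. Below is a minimal Python sketch of that producer/consumer pattern, purely as an illustration; grab_next_frame() and process_frame() are hypothetical placeholders for the actual driver and processing calls.
    ```python
    import queue
    import threading

    frame_queue = queue.Queue(maxsize=64)      # bounded buffer between acquisition and processing

    def acquisition_loop(grab_next_frame, stop_event):
        # Paced by the camera, not by a software timer: grab_next_frame() is assumed
        # to block until the driver hands over the next frame, so this loop runs at
        # the camera's frame rate without needing a microsecond wait.
        while not stop_event.is_set():
            frame = grab_next_frame()          # hypothetical blocking driver call
            try:
                frame_queue.put_nowait(frame)  # never stalls the acquisition
            except queue.Full:
                pass                           # count or drop frames if processing falls behind

    def processing_loop(process_frame, stop_event):
        # Consumes each frame exactly once, as fast as it can.
        while not stop_event.is_set():
            try:
                frame = frame_queue.get(timeout=0.1)
            except queue.Empty:
                continue
            process_frame(frame)

    # Usage: run both loops as threads with the real driver/processing functions, e.g.
    # stop = threading.Event()
    # threading.Thread(target=acquisition_loop, args=(grab, stop), daemon=True).start()
    # threading.Thread(target=processing_loop, args=(process, stop), daemon=True).start()
    ```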
     

  • Time-domain accuracy of Audition's filter functions

    Sorry to start a new thread. This is an intended reply to previous, but the reply buttons are not functional. (Not a problem listed in FAQs as far as I could find. Perhaps mods can clarify?).  Anyway, the original thread, for context:
    >> StringTheoryNYC wrote:
    >>What is the best way to maintain time-domain accuracy when signals are filtered?  IOW, avoid delays in signal within the
    >> pass band.
    >> I understand that this is possible with the use of FFT filters, but is that the case with Audition's FFT filter implementation?
    >> If not, then what kinds of delays can be expected when using FFT filters in near-brick-wall mode?
    >SteveG(AudioMasters) wrote:
    >Audition has several filter implementations, both IIR and FIR.
    >Near-brick-wall isn't a filter mode I've ever come across, so I can't
    >tell you directly about that, whatever it is. What I can do though is
    >point you at an online reference about digital filters which explains
    >approximately how they work, but not in mathematical detail, and
    >that's the only way you'll really understand why this is in fact a
    >very strange question - you'll have to figure that out for yourself.
    Thanks for your reply, Steve. I'm not sure why that would be a 'very strange question' though. 'Brick wall' refers to the sharp transition between pass band and stop band. It's a commonly used term. I prefixed with 'near', since I'd ordinarily moderate the transition to avoid  Gibbs phenomenon.
    I understand the relative merits of FIR vs IIR re consistent delays, and I would not consider using IIR in this case. The question was intended in context of Audition's filter implementations. It -is- possible to avoid delays by doing an FFT, applying coefficients to the FFT output, then IFFT. But I'm not sure there is more to the approach used in Audition's "FFT Filter". IOW, I was looking for insights on what delays (if any) to expect from the Audition "FFT filter". I hope that is more clear.
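    For what it's worth, here is a minimal NumPy sketch of the "FFT, apply coefficients, IFFT" idea mentioned above (this is not Audition's implementation, just an illustration): because the per-bin gains are purely real, the phase of every bin is untouched, so the passband sees no delay, and a raised-cosine transition band keeps the Gibbs ringing in check.
    ```python
    import numpy as np

    def fft_near_brickwall_lowpass(x, fs, cutoff_hz, transition_hz=200.0):
        # Zero-phase "near-brick-wall" low-pass: FFT, scale bins by a real-valued
        # magnitude response, inverse FFT. Real coefficients leave every bin's
        # phase unchanged, so the passband is not delayed.
        n = len(x)
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        # Raised-cosine transition band instead of a hard edge, to tame ringing.
        ramp = np.clip((cutoff_hz + transition_hz - freqs) / transition_hz, 0.0, 1.0)
        gain = 0.5 - 0.5 * np.cos(np.pi * ramp)   # 1 in the passband, 0 in the stopband
        return np.fft.irfft(spectrum * gain, n)
    ```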

    StringTheoryNYC wrote:
    Sorry to start a new thread. This is an intended reply to previous, but the reply buttons are not functional. (Not a problem listed in FAQs as far as I could find. Perhaps mods can clarify?). 
    It's not mentioned because there's nothing at all wrong with it - as this reply attests...
    StringTheoryNYC wrote:
    I'm not sure why that would be a 'very strange question' though. 'Brick wall' refers to the sharp transition between pass band and stop band. It's a commonly used term. I prefixed with 'near', since I'd ordinarily moderate the transition to avoid  Gibbs phenomenon.
    It's a term certainly, and I know exactly what it is - but it's not a mode, as such. The mode of a filter would imply something about the structural content of the filter - IIR and FIR are modes, if you like - but brick wall only refers to settings.
    I understand the relative merits of FIR vs IIR re consistent delays, and I would not consider using IIR in this case.
    Ah, that's the crux of the problem. It's not even remotely possible to tell you what the effect would be, because we have no idea of what you are using it for - you will have to be a lot more explicit.

  • Time loop and events (again)

    Good morning,
    I still have problems with time loops and events.
    I have an event structure based on run time menu (two buttons: start stop).
    When I push start, an automatic sequence is started made of a state machine (so a while loop).
    I want to stop the process by pressing stop in the run-time menu. The problem is that the system doesn't respond when I press the button (it's not locked, since I unchecked the option to lock front panel activity in the event structure configuration). I can see that the stop button press registers, but the code inside the event structure is not executed.
    How can I avoid this?
    Thank you,
    Fede

    Events don't react to changes in local and global variables, and that's usually a good thing.
    To trigger an event programmatically, you should write to the Value (Signaling) property of the control assigned to the event, and the event will fire (even if the value does not actually change).
    Attached is a simple modification of my example that forces a stop of any ongoing measurements every 10 seconds using the above method.
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    Prova Loop MenuIII.zip ‏15 KB

  • Timed loop problem when adding a parallel subVI inside another timed loop

    I have four timed loops in parallel, and they work fine
    without any problem. However, when I add a third-party
    VI as a subVI in parallel inside another timed loop,
    the first four timed loops start working erratically.
    Is there any way I can make them work independently
    time-wise? There is no data transferred between any
    of the loops.

    Hello
    The VI with the timed loop is set to cycle once every second, and all it does
    is turn a boolean light indicator off and on every second. It has a boolean
    inverter connected to the shift registers of the loop. Its priority is set to Normal.
    The third-party VI interacts with spectrometer hardware. The spectrometer measures
    the amount of light, transforms the collected data into digital information, and passes it to LabVIEW. LabVIEW then compares the sample information to a reference measurement and displays the processed spectral information.
    The timing of the main VI's loop is set manually by me, the user.
    The timed loops lose their sequence; the loops run normally for a couple
    of cycles and then either run fast or run slow. If I take out the third-party subVI everything
    runs fine.
    I am using Windows XP with LabVIEW 7.1.
    Attachments:
    Dr_Grady_Solinoid_HM_3.vi ‏129 KB
    USB2000_SUBVI.vi ‏238 KB

  • What is the maximum number of parallel blocks or shift register pairs that can be used in an FPGA VI timed loop with a cRIO-9104?

    I am writing an FPGA VI for a cRIO-9104 module, and I need to do a lot of parallel execution inside a timed loop. In total I have used 20 pairs of shift registers in the timed loop, which simply means there are 20 parallel execution paths across ONE timed loop. However, the compilation report tells me that the FPGA resources are over-used: one entry is the Slices and the other is the number of 4-input LUTs (see the attached file). Would the compile be successful if I changed it so that all block diagrams are cascaded in series without using any shift registers across the timed loop? And do you think it is reasonable to have 20 pairs of shift registers arranged in parallel inside a timed loop?
    Thank you!
    Attachments:
    afternoon912.jpg ‏2305 KB

    Your compilation report has 2 useful numbers to help answer your question, I'll summarize:
    Number of Slice Flip Flops = 27%
    Number of 4-Input LUTs = 100% *
    You are using the Timed Loop shift registers to pass a value from one iteration of your loop to the next.  The LabVIEW FPGA compiler will use slice flip-flops to create this behavior in the FPGA fabric.  You are only using 27% of the flip-flops, therefore the 20 shift registers are not your problem.
    LUTs (look-up tables), which are the basic logic building blocks of an FPGA, are your problem.  You are using over 100% of the LUTs in this FPGA.  To fit in this FPGA you will have to reduce the amount of logic (AND/OR gates, additions, comparisons, etc.) in your design.
    -RB

  • Drift problem when using a Timed Loop

    I am using a system with:
        LabVIEW 2012,
        operating system: XP (it will have to migrate to Windows 7)
        an NI PXI-8106 with a PXI-1045 chassis
        a COMMTECH Fastcom FSSC serial board
    I need to use synchronous serial communication and send a message at 1 ms intervals; for this I used a "Timed Loop" instead of a "wait".
    Unfortunately, inspection with an oscilloscope shows rather large jitter between one message and the next.
    I would kindly like to know:
    1) Can this situation be improved? If so, which solutions are possible?
    2) One of the solutions I was thinking of is to use a hardware timer to drive the Timed Loop: can I implement this without the Simulation toolkit? If so, how?
    Thank you.

    Hi
    How much is your drift?
    Bruno Costa
    Automation Engineer

  • Hi, I would like to ask how to capture data from a real-time loop.

    Hi,
    Here is an overview of my project:
    I have implemented real-time control using LabVIEW 9.0, with a PID controller.
    In order to optimise this controller, I need to capture data from my sensor (input) and actuator (output).
    1. For example, while the real-time control is running, I need to capture 1000 samples of data (sensor input and actuator output).
    Then I will use these data for PID optimisation in another loop, without interfering with my real-time loop.
    2. When the PID optimisation is completed, I will send the new PID parameters to the real-time control loop.
    3. These operations are done in parallel.
    Can anybody help me solve this? Any idea may solve my problem.
    TQ

    Typically you will have to use RT FIFO or queue communication to avoid any impact on your time-critical loop.
    Best regards
    Christian
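    Not LabVIEW, but as a rough illustration of the decoupling Christian describes (Python's queue module standing in for an RT FIFO; read_sensor, drive_actuator, pid and optimise are hypothetical placeholders): the time-critical loop never blocks on the FIFOs, the optimisation loop gathers its 1000 samples on the side, and new gains flow back through a second FIFO.
    ```python
    import queue
    import time

    sample_fifo = queue.Queue(maxsize=5000)   # control loop -> optimisation loop
    gain_fifo = queue.Queue(maxsize=1)        # optimisation loop -> control loop

    def control_loop(read_sensor, drive_actuator, pid, stop_event, period_s=0.001):
        # Time-critical loop: never blocks on the FIFOs.
        while not stop_event.is_set():
            y = read_sensor()                 # hypothetical I/O calls
            u = pid.update(y)
            drive_actuator(u)
            try:
                sample_fifo.put_nowait((y, u))        # drop if the logger falls behind
            except queue.Full:
                pass
            try:
                pid.set_gains(*gain_fifo.get_nowait())  # pick up new gains when available
            except queue.Empty:
                pass
            time.sleep(period_s)

    def optimisation_loop(optimise, stop_event, batch=1000):
        # Lower-priority loop: waits for 1000 samples, then computes new gains.
        while not stop_event.is_set():
            samples = [sample_fifo.get() for _ in range(batch)]
            try:
                gain_fifo.put_nowait(optimise(samples))
            except queue.Full:
                pass                                  # previous gains not consumed yet
    ```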

  • Array questions inside a timed loop

    1. The timed loop takes one single-point sample from each of the DAQ card's 6 channels every 100 ms. If I want to store each acquired value in the array belonging to its channel, how should I design the block diagram?
    2. Inside the timed loop there is a for loop, i.e. the 6 channels are each sampled 10 times and the average of the 10 samples is used as the data point. Why does the loop only iterate once every 100 ms?
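    (Regarding question 2: most likely the inner for loop completes within a single timed-loop iteration and the timed loop then waits out the rest of its 100 ms period, which is why the whole structure only iterates once per 100 ms.) For question 1, the usual pattern is to average the 10 samples per channel and append the result to that channel's own array, held in a shift register in LabVIEW. A rough Python/NumPy sketch of that pattern is shown below, purely as an illustration; the DAQ read and the 100 ms period are placeholders.
    ```python
    import time
    import numpy as np

    NUM_CHANNELS = 6
    SAMPLES_PER_READ = 10
    history = [[] for _ in range(NUM_CHANNELS)]     # one growing array per channel

    def read_channels(n):
        # Placeholder for the DAQ read: returns an (n x NUM_CHANNELS) block.
        return np.random.rand(n, NUM_CHANNELS)

    for _ in range(100):                            # stands in for the 100 ms timed loop
        block = read_channels(SAMPLES_PER_READ)     # inner "for loop": 10 samples per channel
        averages = block.mean(axis=0)               # average the 10 samples, channel by channel
        for ch in range(NUM_CHANNELS):
            history[ch].append(averages[ch])        # append to that channel's own array
        time.sleep(0.1)                             # 100 ms period
    ```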

    Hi.
    The only way to have a software-timed loop execute faster than 1kHz is to include code in it which will take precisely the time you need to execute, which is very very hard to determine. So, it seems like you would need LabVIEW RT to be able to use the 20MHz computer clock, or some extra hardware.
    I once controlled the iteration time of a while loop by doing an analog input task and using the DAQmx Read VI inside the loop set to the correct number of samples to read at a time.
    For example, if you set an analog input task to sample at 10,000 samples per second, and then you put the DAQmx Read VI inside the loop, set to read 100 samples at a time, then the loop executes every 10ms (which you could have achieved without the hardware, but is still an illustrating point).
    So, if you want a loop to run at 20kHz, you might be able to achieve it by configuring an acquisition at say, 60,000 samples per second, and read 3 samples at a time with each iteration of the loop.
    You must be aware though, that usually when you do such an acquisition, you read a lot more samples at a time, precisely to avoid having to execute the loop so often. Therefore, there is still a possibility that the computer might not be able to keep up at such speeds. You can test this method.
    I am attaching an image with sample code of what I mean.
    I think the cheapest DAQ board NI has is the PCI-6023E, which should work for this application.
    Hope this helps.
    Alejandro
    Attachments:
    FasterLoop.JPG ‏20 KB
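    For reference, here is a rough text-based sketch of the same hardware-paced idea using the nidaqmx Python API (the device name "Dev1" and the channel are assumptions; Alejandro's attachment is the LabVIEW equivalent). Because DAQmx Read blocks until the requested number of samples exists, the hardware sample clock, not a software timer, paces the loop.
    ```python
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    SAMPLE_RATE = 10_000        # S/s
    SAMPLES_PER_READ = 100      # 100 samples at 10 kS/s -> the loop iterates every 10 ms

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # device/channel names assumed
        task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                        sample_mode=AcquisitionType.CONTINUOUS)
        task.start()
        for _ in range(1000):
            # Blocks until 100 samples are available, so the DAQ hardware paces this loop.
            data = task.read(number_of_samples_per_channel=SAMPLES_PER_READ)
            # ... process `data` here ...
    ```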

  • While/timed loop from the PC clock

    Hi Guys
    I have some code that (currently) uses a while loop to periodically open a file and read from it (in its simplest form... the details are not germane to this post).
    The requirements have now tightened and the loop (While or Timed??) must read the file once every two minutes based on the computer clock (accuracy is not too important, certainly not milliseconds)
    e.g. time is 11:35.... read file
          time is 11:36.. ..do nothing
          time is 11:37.....read file
          time is 11:38.....do nothing
         etc.   etc.
    In simpler terms (if that's possible)... have the loop perform the code inside it every two minutes, with the time sequence coming from the PC clock.
    I've looked at timed loops (but never used them) and they seem a possibility
    So, could somebody show me an example of the best way to do this.
    Thanks for your assistance and best regards
    Ray
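    For what it's worth, the gating logic itself is tiny; here is a Python sketch of the "act on every other minute of the PC clock" idea, just to illustrate the logic a LabVIEW loop reading the system time would implement (read_file() is a placeholder):
    ```python
    import time
    from datetime import datetime

    last_handled_minute = None

    while True:
        now = datetime.now()
        # Trigger once on every even minute of the PC clock (use % 2 == 1 for 11:35, 11:37, ...).
        if now.minute % 2 == 0 and now.minute != last_handled_minute:
            last_handled_minute = now.minute
            # read_file()                    # placeholder for the actual file read
            print("read file at", now.strftime("%H:%M:%S"))
        time.sleep(1)                        # poll the clock once per second; ms accuracy not needed
    ```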

    Hello Ravens Fan
    I have had an attempt (unsuccessfully) at your suggestion, but not much luck I'm afraid.
    I'm kicking myself because I should know how to do this.
    I have attached my attempt. Would you please take a look at it and see where I'm missing the next ingredient.
    Kind regards
    Ray
    Attachments:
    timed while loop.vi ‏8 KB

  • 2nd time Loop error in rejection reason/changes req

    Hello experts,
    I have created a workflow with a loop step.
    In that loop branch, I am displaying a document in the approver's inbox, then I have put a decision step with 2 buttons: 'Approve' and 'Changes req'.
    For the 'Changes req' button, the loop continues.
    In the 'Changes req' branch I have put a task on BO SOFM, method Create, for the approver so that he can enter what changes he wants.
    Then I have put a task on BO SOFM, method Display, for the initiator so that he can see the requested changes.
    Then the document is opened in edit mode for the initiator to make the requested changes.
    Then it goes back to the approver for approval.
    He again gets the decision screen with 2 buttons.
    Now, while testing this: it all worked fine the first time the approver clicked the 'Changes req' button in the decision box; after that, the SOFM Create worked fine.
    But the second time, when the approver checks the document again and still wants further changes, he clicks the 'Changes req' button again. This second time the SOFM Create does not work and the workflow goes into error with:
    1. Notification of completion cannot be generated
    2. Problems occurred when generating a mail
    3. Error '9' when calling service 'SO_OBJECT_SEND'
    4. Error handling for work item 000000006050
    Please suggest how this can be solved.
    Best Regards
    Nitin

    I think you might have to use the EDIT method of SOFM Business Object as you are going to change the existing document.
    Thanks
    Arghadip

  • Use the LabVIEW time delay in a while loop instead of the instrument's inherent time loop.

    I have a ZES LMG500 that I use. It has an option for continuous measurement with a user-controlled cycle loop (still using a while loop, but the delay comes from the instrument). The problem is that the loop time is not constant. Can I use a very short time (100 ms) as the instrument cycle time but use a longer time delay in the while loop that collects the data?
    I know many instruments offer the same capability, so the question is relevant to all of them.
    Thanks for any help

    Does that measurement VI have built-in wait functionality and a timeout?
    If not, then you are simply polling the VI at 1 kHz, but I suspect that this is not the case, or your waveform chart probably wouldn't look as you expect (getting 0's put in every time measurement data is not available).
    I expect that if you have the instrument set up to send data at a fixed time interval (i.e. 100 ms), all the data it sends you will end up residing in your serial buffer (or the instrument driver will have taken it out and put it in another internal buffer), so if you come back to read it 1 second later, you will have 10 data points to go through.  Whether you can read them all at once with the read VI or have to call it a whole bunch of times until empty, I can't say.
    What is your objective?
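    In other words, one workable pattern is to let the instrument stream at its own 100 ms cycle and only wake the collection loop up once per second to drain whatever has accumulated. A small Python sketch of that polling pattern, purely illustrative (read_available_points() is a hypothetical stand-in for however the driver exposes the buffered readings):
    ```python
    import time

    def slow_collection_loop(read_available_points, handle_points, should_stop):
        # The instrument keeps measuring every 100 ms on its own; we only wake up
        # once per second and process everything that has piled up in the buffer.
        while not should_stop():
            points = read_available_points()   # hypothetical driver call returning a list
            if points:                         # typically ~10 points per wake-up here
                handle_points(points)
            time.sleep(1.0)                    # the longer delay in the while loop
    ```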

  • 1ms Time Loop / Event Trigger from Counter

    Hi.  I'm pretty new to LabVIEW, so I'm hoping this is an easy question:
    I need to read the pulse signal from some reluctors (toothed wheels that pulse a signal with each passing tooth).  The amplitude of each pulse is above 2.2 V, so I was planning on using TTL counters.
    I need to record the status of 5 counters with each increment of one of the counters.  For the RPM and tooth count, I can do this with a 1 ms timed loop.
    Question:  Can Windows reliably give a 1 ms (accurate) timed loop?  I know this depends on how much I am doing inside the loop but, for now, I just need to read 5 counters and store the data with a timestamp.
    Is it possible to make a counter throw an event?  If the counter is incremented, can LabVIEW be notified to then go off and handle a block of code?
    Thanks for any help you can offer!

    I assume you also have an analog input card for the chassis?
    So, it's probably possible to get 1 ms timing with software-timed loops; however, based on my quick check of the manual for your chassis
    http://www.ni.com/pdf/manuals/372780c.pdf (see section 2-2 for example)
    you can use an Analog Comparison Event or a PFI channel to trigger a sample (i.e. your generated pulse is the sample clock). This could allow you to use your pulse to trigger an analog sample. Now, this doesn't get you timing information (it just gets you the value of the analog input at the time the event occurs). To get a time, you can use the counters on the digital card in the same way. You have the counters driven by a fast on-board sample clock. Then you can use your pulse events again to sample that clock. So on each pulse you get one analog voltage sample and one time sample from the counter. The resolution of this counter will be great, depending on how fast your counters can be driven (sometimes NI counters can actually be driven faster than the fastest available clock on the card or chassis).
    Or you can just do the software loop.
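    As a rough sketch of the "pulse as sample clock" idea in the nidaqmx Python API (device, channel and PFI terminal names are assumptions; the same configuration exists in the LabVIEW DAQmx VIs): every tooth pulse clocks one analog sample, so no 1 ms software loop is involved.
    ```python
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # device/channel names assumed
        # The reluctor pulse on PFI0 acts as the sample clock: one analog sample per tooth.
        task.timing.cfg_samp_clk_timing(rate=10_000,       # upper bound on the expected pulse rate
                                        source="/Dev1/PFI0",
                                        sample_mode=AcquisitionType.CONTINUOUS)
        task.start()
        samples = task.read(number_of_samples_per_channel=100)  # blocks until 100 teeth have passed
    ```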

  • Time origin of "Global Start Time" in Output Node of Time Loop

    Hi All,
    Please, do you know how the time returned by Global Start Time (after converting to milliseconds) in the Output Node of a Timed Loop is related to the time returned by “Tick Count (ms)” (if the latter is read at the very start of the loop)?
    They look similar but they are not the same.
    Is it right that the time returned by Time in the node of an Event Structure is the value of “Tick Count (ms)” when the event takes place?
    Have an awesome day,
    LucaQ

    Hi LucaQ,
    This is a really interesting question. To answer the easy part, the Tick Count and the Time event data node for an event structure both output the same thing, the ms clock value obtained from the OS.
    The Global Start Time node outputs a nanosecond-resolution clock value. I'm a little fuzzy on details, but my neighbors and I believe that this clock comes directly from the processor (or more directly at least). If you look at the differences between this nanosecond clock and the ms clock for the OS, you'll see that the OS ms clock "lags behind." For example, look at these values output by the 2 clocks:
    Global Start Time:
    335654376684899
    ms timer value:
    335650557
    If you scale the nanosecond value up to ms, you see that 335654376.7 is greater than 335650557.
    I believe this time offset can be accounted for if we assume that the OS (such as Windows) takes this nanosecond clock and derives a ms clock from it. In addition, it will take some time for that ms value to propagate through the various levels of software up to your end-user application level. The nanosecond clock comes directly from the processor, and the Timed Loop uses lower-level functions to obtain it. Because it has fewer layers of processing to pass through, it is a higher (more recent) number.
    In light of all this, I would use the Global Start Time only for comparing against other Timed Loop nanosecond clock values like Global End Time. Do not expect to scale it and subtract a ms clock value to get the time elapsed or anything like that.
    Jarrod S.
    National Instruments
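    To make Jarrod's point concrete, scaling his two example values shows the offset between the clocks (plain arithmetic, nothing LabVIEW-specific):
    ```python
    global_start_time_ns = 335654376684899              # Timed Loop "Global Start Time" (nanoseconds)
    tick_count_ms = 335650557                            # "Tick Count (ms)" read from the OS

    print(global_start_time_ns / 1e6)                    # 335654376.684899 ms
    print(global_start_time_ns / 1e6 - tick_count_ms)    # roughly 3820 ms ahead of the OS ms clock
    ```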

  • High priority settings cannot precisely time loops?

    I have a loop that acquires data every n milliseconds and occasionally sets a digital output when a threshold has been passed.
    This digital output is quite time-critical, so I don't want to do an on-board (buffered) acquisition but rather get each sample just in time.
    That works fine with a 1 ms loop time as long as Windows NT has enough resources. But if there's another time-consuming process running, things get bad.
    I started a "search for files" in parallel with my application. Windows then shared the time slices and I got a lot of acquisition dropouts of 10-20 ms.
    So I set the VI priority to "highest" and the Windows NT process priority for LabVIEW to "realtime". At least the latter should give the VI all the time it needs.
    But LabVIEW seems to be fair enough to still share resources, and the dropouts continued.
    Can I set up my LabVIEW application so that it gets absolute priority in Windows NT and achieves a constant acquisition?
    Thanks, Daniel

    You can go to hardware timed acquisition or LabVIEW-RT. Windows, Linux, Mac, etc. are not deterministic operating systems. There is a lot of information on "real-time" acquisition at NI's site. Try doing a search and seeing what your options are. Good Luck.
