Getting timestamp for analog acquisition with NI6110.

When I use the AI Read VI to get a waveform containing a sequence of scans, the waveform has a timestamp that presumably represents the absolute time of acquisition of the first scan, as measured by the system clock. (I can't find any documentation on this, by the way. Where is this documented?) I notice that the AI Read VI can alternatively return a "binary" array of measurements. For various reasons I prefer this form of the AI Read VI, but I'd also like to get the timestamp that would be included in the waveform returned by the waveform version. However, I don't want to call the AI Read VI twice, once to get the binary array and a second time to get the whole waveform just so that I can get the timestamp. Is there a way to get this timestamp without getting the whole waveform?
Thanks,
Neal.

Thanks for replying.
I'm performing digitally triggered, buffered data acquisition with my 6110E, with the possibility of pretrigger sampling. The data I'm acquiring may extend several seconds before and after the trigger. Furthermore, my application may call the AI Read VI at some unpredictable time, several seconds after data acquisition is complete. I'm assuming that the timestamp that would be returned as part of the waveform by the AI Read VI (whose exact nature appears to be undocumented) is calculated from the system clock at the time the digital trigger occurs, and wouldn't depend on when the AI Read VI was called. The article you referenced doesn't tell me how to do what I want. I want to get EXACTLY the timestamp that the AI Read VI would include with waveform data, were it asked to return waveform data, but I want to use it to return only binary data. To the best of my understanding, the method the article describes for obtaining a timestamp when binary data is retrieved provides no guarantee that the timestamp obtained will be the same one that would be returned as part of a waveform by the AI Read VI. Indeed, under the circumstances in which I would use it, the "Get Date/Time In Seconds" VI would seem to produce a timestamp that differs unpredictably from the time of the trigger by seconds.
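If the driver does stamp the waveform at the trigger, the t0 of a pretriggered acquisition should be recoverable from the trigger time, the pretrigger scan count, and the scan rate. A minimal sketch of that arithmetic, with all names and numbers purely illustrative (this is not an NI API):

```python
# Sketch: reconstruct a waveform-style t0 when reading binary data, by
# back-dating the system-clock trigger time by the pretrigger span.
# All names and values here are illustrative assumptions, not an NI API.

def waveform_t0(trigger_time, pretrigger_samples, sample_rate_hz):
    """Absolute time of the first scan in the buffer.

    trigger_time       -- system-clock time (seconds) when the trigger fired
    pretrigger_samples -- scans acquired before the trigger
    sample_rate_hz     -- scan rate in Hz
    """
    dt = 1.0 / sample_rate_hz
    return trigger_time - pretrigger_samples * dt

# Example: 10 000 pretrigger scans at 20 kS/s put t0 half a second
# before the trigger.
t0 = waveform_t0(trigger_time=1000.0, pretrigger_samples=10_000,
                 sample_rate_hz=20_000.0)
```

Whether this matches the waveform version exactly depends on precisely when the driver reads the system clock, which is the undocumented part of the question.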
Thanks,
Neal.

Similar Messages

  • I was recently burglarized and lost a bunch of Apple products. Is there a way to get receipts for items purchased with my Apple ID?

    I was recently burglarized and lost a bunch of Apple products. Is there a way to get receipts for items purchased with my Apple ID?

    Yes. If you go to http://store.apple.com/, at the top right under the search box it will say "Account" or your first name. Click or tap that, then click "Check order status" and sign in. After you do that, in the middle by the order date and number it will say "Print invoices"; as long as your item(s) have shipped, you will be able to see and print your invoice receipts. You can view items purchased in the last 18 months.

  • How to structure the DMA buffer for the PXIe 6341 DAQ card for analog output with different frequencies on each channel

    I'm using the MHDDK for analog out/in with the PXIe 6341 DAQ card.
    The examples, e.g. aoex5, show a single timer (outTimerHelper::loadUI method), but the example loads the DMA data with the same vector size for each channel.
    There is a comment in the outTimerHelper::programUpdateCount call which implies that different buffer sizes per channel can be used
       (the comment is: "Switching between different buffer sizes will not be used").
    Does anyone know what the format of the DMA buffer should be for data for multiple channels with different frequencies ?
    For example, say we want ao0 to output a 1 kHz sine wave and ao1 a 1.5 kHz sine wave. What does the DMA buffer look like?
    With the same frequency on each channel the data is interleaved, e.g. (ao0#0, ao1#0; ao0#1, ao1#1, ...), but when the frequencies for the channels are different, what does the buffer look like?

    Hello Kenstern,
    The data is always interleaved because each card has only a single timing engine per subsystem.
    For AO you must specify the number of samples that AO will output, as well as the number of channels. Because there is only one timing engine for AO, each AO channel gets updated at the same tick of the update clock. The data is arranged interleaved exactly as the example shows, because each AO channel needs a data point at each tick of the update clock. The data itself can change based on the frequency you want to output.
    kenstern wrote:
    For example, say we want a0 with a 1Khz Sine wave and a1 with a 1.5Khz sine wave.  What does the DMA buffer look like ?
    With the same frequency for each channel, the data is interleaved, e.g.  (ao0#0, ao1#0; ao0#1, ao1#1, ...), but when the frequencies for each channel is different, what does the buffer look like ?
    In your example, you need to come up with an update rate that works for both waveforms (the 1 kHz and 1.5 kHz sine waves). To get a good representation of a sine wave, you need to update more than 10x as fast as your fastest frequency; I would recommend 100x if possible.
    Update Frequency: 150 kHz
    Channels: 2
    Then you create buffers that include full cycles of each waveform you want to output based on the update frequency. These buffers must also be the same size.
    Buffer 1: Contains data for the 1 KHz sine wave, 300 points, 2 sine wave cycles
    Buffer 2: Contains data for the 1.5 KHz sine wave, 300 points, 3 sine wave cycles
    You then interleave them as before. When the data is run through the DAC, the two channels output different sine waves even though they update at the same rate.
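The recipe above (equal-size per-channel buffers at a common update rate, then sample-by-sample interleaving) can be sketched in a few lines. The numbers are the ones from this thread; the buffer layout is the general interleaved form, not a card-specific format:

```python
import math

# Sketch: build an interleaved AO buffer for two channels sharing one
# 150 kHz update clock. 300 updates hold exactly 2 cycles of a 1 kHz
# sine on ao0 and 3 cycles of a 1.5 kHz sine on ao1.

UPDATE_RATE = 150_000   # Hz, common update clock for both channels
N = 300                 # updates per buffer (same size for both channels)

ao0 = [math.sin(2 * math.pi * 1_000 * n / UPDATE_RATE) for n in range(N)]
ao1 = [math.sin(2 * math.pi * 1_500 * n / UPDATE_RATE) for n in range(N)]

# Interleave sample-by-sample: ao0#0, ao1#0, ao0#1, ao1#1, ...
dma_buffer = [s for pair in zip(ao0, ao1) for s in pair]
```

Because 300 updates cover whole cycles of both frequencies, the buffer can be regenerated (looped) without a phase glitch at the wrap point.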

  • DSC 8.6.1 wrong timestamps for logged data with Intel dual core

    Problem Description :
    Our LV/DSC 8.6.1 application uses shared variables to log data to Citadel. It runs just fine on many similar computers at many companies, but on one particular Intel dual-core computer the data in the Citadel database has strange, shifting timestamps. Changing the BIOS to start up using a single CPU fixes the problem. Could we instead set only certain NI process(es) to single-CPU affinity (and if so, which)? The old DSCEngine.exe in LV/DSC 7 had to be run single-CPU... hadn't these kinds of issues been fixed by LV 8.6.1? What about LV 2009, anybody know? Or is it a problem in the OS or hardware, below the NI layer?
    This seems similar to an old issue with time synch server problems for AMD processors (Knowledge Base Document ID 4BFBEIQA):
    http://digital.ni.com/public.nsf/allkb/1EFFBED34FFE66C2862573D30073C329 
    Computer info:
    - Dell desktop
    - Win XP Pro sp3
    - 2 G RAM
    - 1.58 GHz Core 2 Duo
    - LV/DSC 8.6.1 (Pro dev)
    - DAQmx, standard instrument control device drivers, serial i/o
    (Nothing else installed; OS and LV/DSC were re-installed to try to fix the problem, no luck)
    Details: 
    A test logged data at 1 Hz, with these results: for 10-30 seconds or so, the timestamps were correct. Then the timestamps were compressed/shifted, with multiple points per second. At perfectly regular 1-minute intervals the timestamps would be correct again. This pattern repeats, and when the data is graphed it looks like regular 1-second-interval points, then denser points, then no points until the next minute (not ON the minute, e.g. 12:35:00, but after a minute, e.g. 12:35:24, 12:36:24, 12:37:24...). Occasionally (but rarely) restarting the PC would produce accurate timestamps for several minutes running, but then the pattern would reappear in the middle of logging, with no changes made.
    Test info: 
    - shared variable configured with logging enabled
    - data changing by much more than the deadband
    - new value written by Datasocket Write at a steady 1 Hz
    - historic data retrieved by Read Traces
    - Distributed System Manager shows correct and changing values continuously as they are written

    Meg K. B. , 
    It sounds like you are experiencing Time Stamp Counter (TSC) drift, as mentioned in the KBs for the AMD multi-core processors. However, according to the Wikipedia article on TSCs, on the Intel Core 2 Duo the "time-stamp counter increments at a constant rate... Constant TSC behavior ensures that the duration of each clock tick is uniform and supports the use of the TSC as a wall clock timer even if the processor core changes frequency." This seems to suggest that you are not seeing the issue mentioned in the KBs.
    Can you provide the exact model of the Core 2 Duo processor that you are using?
    Ben Sisney
    FlexRIO V&V Engineer
    National Instruments

  • I am searching for a VI for AC acquisition with a PCMCIA DAQ card 6024E

    Hi,
    I am searching for a VI for alternating current (AC) acquisition with a PCMCIA DAQ card 6024E.
    I have tried several available examples, but unfortunately they are not compatible with this card.
    I am a beginner with LabVIEW; as a consequence, any help and support on this topic would be really appreciated.
    Thanks,
    Alban Cotard.

    I think you want to measure alternating current?!?
    What do you want to know: the period, amplitude, RMS...?
    e.g. use the "Cycle Average and RMS.vi" or another VI in this palette
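For reference, a rough software equivalent of what a cycle-average-and-RMS measurement computes, as a plain sketch (this is an illustration of the math, not the VI's implementation):

```python
import math

# Sketch: mean (cycle average) and RMS over an integer number of cycles
# of a sampled AC waveform -- roughly what "Cycle Average and RMS.vi"
# measures. Illustration of the math only, not the VI's implementation.

def cycle_average_and_rms(samples):
    n = len(samples)
    mean = sum(samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)
    return mean, rms

# One full cycle of a unit-amplitude sine: mean ~ 0, RMS ~ 1/sqrt(2).
cycle = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
mean, rms = cycle_average_and_rms(cycle)
```

Averaging over whole cycles matters: over a fractional cycle the mean of a pure sine is no longer zero and the RMS is biased.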
    regards timo

  • Monitor multiple channels for analog trigger with DAQmx drivers

    Hello! I would like to start a data acquisition of multiple analog channels (16) from an analog trigger. I would like the trigger to monitor four of the (same) channels and trigger when any one of them reaches a certain voltage. I found an example that would work with the Traditional DAQ drivers (using occurrences), but I can't figure out how to do something similar in DAQmx.
    Time is also an issue, as I would like to collect the first 80 milliseconds of data after the trigger (at a rate of 500,000 Hz).
    I'm using LabVIEW 7.0 and collecting data off of two PXI-6133 cards.
    Thanks for your help!

    Hi Denise-
    After some research, I have found that it is not possible to use the functionality of DAQ Occurrences in DAQmx. Ironically, the reason that this functionality is available in Traditional and not DAQmx is due to the exploitation of an inherent limitation of Traditional that was upgraded in DAQmx. The multi-thread capability of DAQmx is a major advantage for most applications, but in this case it prevents the use of occurrences as they existed in Traditional DAQ.
    In short, this means that you can't directly use this functionality in DAQmx. You can however emulate this functionality with minimal software analysis of the incoming signal. I have attached a modified example VI that logs data to a chart only when the analog level of one of the channels being measured has exceeded a user-defined reference value. Basically, the task is running continuously in the background but the data is not actually logged until the signal is above a predetermined "trigger" level.
    Please let me know if the attached example is helpful for your application. You will see the input channels listed in the format "DevX/ai0:y" where X is the device number and y is the highest channel number of interest.
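The monitoring idea in the attached example can be sketched in plain code: acquire continuously, watch the monitor channels, and note the first sample where any of them crosses the level. This is an illustration of the logic only (with fake data and made-up names); a real program would pull chunks from the DAQmx read instead:

```python
# Sketch of the software-trigger approach: scan incoming samples on the
# monitored channels and find where ANY of them crosses the level.
# Names, values, and the fake data below are illustrative assumptions.

SAMPLE_RATE = 500_000                       # Hz, per the original post
CAPTURE_SAMPLES = int(0.080 * SAMPLE_RATE)  # 80 ms -> 40 000 samples/channel

def first_trigger_index(channels, level):
    """Index of the first sample where any monitored channel >= level."""
    n = min(len(c) for c in channels)
    for i in range(n):
        if any(c[i] >= level for c in channels):
            return i
    return None

# Fake data: the second channel crosses 2.5 V at sample 7.
ch0 = [0.0] * 20
ch1 = [0.0] * 7 + [3.0] + [0.0] * 12
idx = first_trigger_index([ch0, ch1], level=2.5)
```

Once the crossing index is found, keeping the next CAPTURE_SAMPLES samples per channel gives the 80 ms post-trigger record Denise asked about.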
    Regards,
    Tom W
    National Instruments
    Attachments:
    Cont Acq&Graph Voltage-Int Clk Analog SW Trigger.vi ‏83 KB

  • Analog acquisition with LabVIEW and a National Instruments card

    I need to acquire 2 analog signals simultaneously. I use version 4 (maybe 5) of LabVIEW, with a National Instruments data acquisition card (BNC-208x). I want to save the 2 signals in a text file. Could anyone send me a diagram (VI) for doing that?
    Thank you very much.
    (my email: [email protected])

    True simultaneous sampling is possible with only a few types of DAQ cards. I believe the BNC-208x is like the current BNC-2xxx series and is nothing more than an adapter to the actual DAQ card inside the PC. If you can live with the small delay between channels that the majority of DAQ boards have, then there are a number of shipping examples that can help you. Look for the Cont Acq&Graph examples and Data Logger to Spreadsheet File.
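The "small delay between channels" comes from the multiplexer: one ADC converts the channels in turn, so adjacent channels are sampled one convert-clock period apart. A tiny sketch of that arithmetic (illustrative values, not a spec for any particular board):

```python
# Sketch: inter-channel skew on a board with one multiplexed ADC.
# Adjacent channels are converted one convert-clock period apart,
# so the skew is the reciprocal of the aggregate conversion rate.

def interchannel_skew_s(aggregate_rate_hz):
    """Delay in seconds between adjacent channels in one scan."""
    return 1.0 / aggregate_rate_hz

# Example: 2 channels sharing a 200 kS/s aggregate rate are sampled
# 5 microseconds apart -- negligible for slow signals, not for fast ones.
skew = interchannel_skew_s(200_000)
```

Whether that skew matters depends on how fast the two signals change relative to a few microseconds; for most slow measurements it is invisible.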

  • Get calendar for one year with holidays and no works days

    Hi,
    I'm looking for a function module (FM) to get a calendar with all days, including holidays.
    Cheers

    I found the solution: the function modules CSCP_PARA1_GET_PERIODS and HOLIDAY_GET.

  • I have internet service on my iMac but cannot get a connection for Blu-ray with AirPort Express.

    Why am I not getting an internet connection for the Blu-ray player on my TV with AirPort Express?

    Which exact model of the AirPort Express do you have?
    Did the Blu-ray player connect wirelessly to the Express before, or has it never been able to connect to it at all?

  • Digital triggering for analog acquisition on PCI-6024

    I would like to initiate an analog input scan when a digital line goes low, using a PCI-6024 board. I connected the digital line to TRIG and the analog line to AIN0. I tried using "Acquire N - Multi-Digital Trig.vi"; it almost works. It acquires a scan, but it may (randomly) start on either the rising or falling edge of the trigger, regardless of the rising/falling trigger-edge setting. How do I get it to acquire data only on a falling edge?

    Dear Dave, if you set up your task using MAX, you can specify whether you need to start acquiring on the rising or the falling edge. Using this task in your experiment should effectively take care of the issue.
    You can do the same from LabVIEW as well, using the DAQmx Trigger VI, and set the acquisition to begin on the rising or falling edge as you choose.
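If the hardware edge selection still misbehaves, one sanity check is to record the trigger line alongside the analog data and locate the falling edge in software after the fact. A minimal, purely illustrative sketch of that edge search:

```python
# Sketch: find the first high -> low transition in a recorded trigger
# line, so acquired data can be aligned to the falling edge in software.
# The sample data below is made up for illustration.

def first_falling_edge(trigger_line):
    """Index of the first sample where the line goes high -> low."""
    for i in range(1, len(trigger_line)):
        if trigger_line[i - 1] and not trigger_line[i]:
            return i
    return None

line = [1, 1, 1, 0, 0, 1, 0]
edge = first_falling_edge(line)   # falling edge at index 3
```

This doesn't fix the hardware trigger, but it tells you which edge the board actually started on.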
    Hope this helps - VNIU

  • Getting Ready for Migration - Problem with BT ID

    Hi,
    I have read several e-mails from a recent thread on the forum which suggest that Migration of BT Yahoo e-mail accounts to the BT Mail service might resume sometime soon, and may even be completed by the end of April. Whether that will be so or not I guess we will have to wait and see. However, just in case, and for my account to be as ready as it can be, I need clarification of a particular potential problem.
    I used to have two phone lines, let us say Line 1 and line 2. Now I only have one 'active' line (Line 1). Line 2 was cancelled a long time ago and my Broadband transferred to Line 1.
    The above two lines had two separate BT IDs (e-mail addresses), one being my Primary e-mail address, which ends @btinternet.com; but this remains the BT ID attached to Line 2 (the cancelled line).
    The second BT ID, for Line 1, now my only line, has a User Name (e-mail address) which ends @gmail.com. The use of this e-mail address as the User Name was configured by a person on the BT Support Desk several years ago, when trying to resolve an issue at the time.
    This morning I tried to change the User Name (e-mail address) on my BT ID for Line 1, so that it has the correct Primary e-mail address (ending @btinternet.com). I was unable to do so, as the system said that this e-mail was already in use on another account (see above re the cancelled Line 2).
    I can presently see no way that I can remove/delete/edit the account that covers the old cancelled line, so that it 'frees up' the use of my Primary e-mail address (@btinternet.com), enabling me to use it as the BT ID on my only remaining line (Line 1).
    Is there a way I can cancel/delete/remove the now unused account? If not, is there a way for me to edit my Profile on Line 1, so that I can use my correct Primary e-mail as the ID, in preparation for the migration?
    I apologise if this sounds messy, but I want my account to be as 'clean' as possible, so that there are no potential problems when it finally comes to migration taking place.
    Any help or advice would be much appreciated.
    Regards,
    Alan

    Hi,
    First of all, thanks to whiskywheels for the well-intended suggestion. I appreciated the thought, but in my experience there is not a chance that just a phone call to BT Support would have solved the problem. It was BT Phone Support that is in part responsible for creating the problem I now find myself with.
    However, the reason for my coming back on this is to say that I had a phone call this afternoon from a MOD who I am absolutely certain was trying to be honest and helpful. The upshot of what she said is that no useful purpose would be served by changing my BT ID, as I only have one BT ID, not two as I intimated in my first e-mail in this thread (see message 1 of this thread for an explanation).
    I do have two accounts (one live - with the only phone line I have) and (one dead and now unused, although I can still access its information in MyBT).
    Following the MOD's advice, having nothing to this effect in writing, I will try not to quote her exactly (I don't want to fall foul of breaking any BT rules etc), but my understanding was that my Primary e-mail address, and several sub-accounts are all 'linked' to one BT ID she said I have, the one that ends [email protected] .
    My reason for raising this in the first place was to be as prepared as I can be for Migration to BT Mail when it finally comes along.
    I have however to admit to now being really confused. Maybe it is the case that I have confused what is an ID, and the use of e-mail addresses as an ID. What I mean by that is as follows:
    Using the e-mail tab on the BT.com homepage, the login asks for a BT ID or email address, and then a password. If I use the one BT ID I am told that I have, and then subsequently add the current e-mail password, I get a message which says "We don't recognise your details. Please check you've entered your full email address and password correctly and try again."
    When I try inputting my primary e-mail address (instead of the BT ID), and then the e-mail password as before (which did work OK this afternoon, but now will not), I currently get the predictable request to change my password. It's a nightmare to say the least, and fills me with no confidence whatsoever for when that fateful day of migration finally arrives.
    If the login that comes up after pressing the e-mail tab asks for a BT ID, why will that not work, if as the MOD says I only have one BT ID not the two that I had thought. I could go on, but I am sure you get the point, confusion reigns!
    I would have much preferred that my BT ID was the same as my primary e-mail address, but the MOD in question said 'to the effect' that it was not necessary.
    Any thoughts or advice??
    Regards,
    AlanF

  • ServletContext is getting destroyed for unknown reasons with WL 8.1.4

    Our application was working fine while we were on WL 8.1.3. Recently we moved to WL 8.1.4, and we have an issue with the ServletContext: it is getting destroyed in one particular use case. The ServletContext is being destroyed and re-initialized, which is causing a NotSerializableException. There is nothing obvious in our code that would cause the ServletContext to get destroyed.
    Is there any way to find out why, or how, the ServletContext is getting destroyed?

    It might be helpful to use a ServletContextListener and print the stack trace from the destroy callback.
              -- Rob
              WLS Blog http://dev2dev.bea.com/blog/rwoollen/

  • Getting timestamps for SQL session without the trace file enabled

    Hi, I have a question: is there a method by which I can get the timestamps of an SQL session without the trace file enabled? If so, please give me the details.
    thanks in advance.

    Hi,
    I don't quite understand what you want. Is it this?
    SQL> set timing on
    SQL> select * from dual;
    D
    -
    X
    Elapsed: 00:00:01.07
    Nicolas.

  • Cannot get prompt for Lync webapp with IE

    When users join Lync conferences using a Meet Now link, the link is opened in IE, but IE automatically tries to join the meeting as the current user through the Lync desktop client. Since some external companies are not federated or do not allow open federation, users are unable to join the meeting through this method; the only way is for them to join anonymously as a guest through the Lync Web App. But even with InPrivate browsing, IE defaults to opening the meeting with the desktop client without any prompt. Fortunately, Chrome's incognito mode is truly private and lets users choose whether to use the web app or the desktop client. Is there any way for IE to let you choose, every time, whether the meeting is opened with the desktop client or the web app, or would settings have to be reset in the user's IE?

    Hi,
    Agree with Anthony.
    You can copy and paste the URL from your meeting invite into a web browser and add ?sl=1 to the end of the string. This will force the use of the Lync Web App even if Lync is installed on that machine.
    More details:
    http://blogs.technet.com/b/messaging_with_communications/archive/2012/06/05/forcing-use-of-lync-web-app-to-join-a-conference.aspx
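The ?sl=1 trick is just a query-string append, which can be scripted. A small helper illustrating it; the meeting URL below is a made-up example, not a real conference link:

```python
from urllib.parse import urlsplit, urlunsplit

# Sketch: append sl=1 to a Lync meeting URL so the browser opens
# Lync Web App instead of the desktop client. The example URL is
# fictitious; only the query-string handling is the point here.

def force_web_app(meet_url):
    parts = urlsplit(meet_url)
    query = f"{parts.query}&sl=1" if parts.query else "sl=1"
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query,
                       parts.fragment))

url = force_web_app("https://meet.example.com/user/ABC123")
```

Using urlsplit/urlunsplit (rather than naive string concatenation) keeps the helper correct whether or not the invite URL already carries a query string.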
    Best Regards,
    Eason Huang
    TechNet Community Support

  • FlexMotion single-point acquisition alongside buffered analog acquisition

    I am trying to read the ADCs with a 7344 motion card using a looping single-point acquisition, equivalent to 50 Hz. This occurs at the same time as I am performing a buffered analog acquisition on a PCI-6023E E-series card at 1000 Hz.
    The values from the ADC's are stored in an array, but when I view this array there are hardly any data points, maybe about 20 in total.
    I am guessing that it is because of the buffered acquisition, but can anyone help me understand this problem a little better? Can I accomplish what I have described?
    Thanks
    Chris

    I checked and the Read Position VI loop runs properly while the motion profile is running (without any analog acquisition through my 6023E card).
    Likewise, with no read position loop, all the analog acquisition with motion runs properly.
    I have attached a simpler example that I tried, so that there was no triggering involved. The analog acquisition runs, and I find that the Read Position VI loop only starts displaying once the analog acquisition is finished (after 2000 samples/ 2 seconds).
    It seems that it cannot do the two processes at once, and that the analog DAQ on my 6023E card has priority over the Read Position loop?!
    The example is in LV5.1.1 format
    Any help would be much appreciated!
    Chris
    Attachments:
    simultaneous_DAQ_on_6023E_and_reading_7344_flexmotion_(simple_example).vi ‏149 KB
