Acquire single-point 12-bit data at 200 kHz using PXIe-6535 DIO in RT

I want to acquire single-point 12-bit data at 200 kHz using a PXIe-6535 DIO module, a PXIe-1072 chassis, and an 8820 controller running in RT. The problem is that I am unable to acquire the data as a triggered input: the loop execution time is ~10 us (measured using the RT tick count), so it misses samples. Am I missing something? What are the proper ways to acquire digital data in RT?
Also, I am wondering whether I can use the SMB connector on the 8820 controller as my acquisition trigger input pulse. I am completely new to RT. Any help will be appreciated.
Thank you.

Hi jtele1,
To make sure that the data gets written in the correct order, I would recommend monitoring the Time Out output of the write. If a timeout occurs, you could stop writing to all FIFOs and then resume once the timeouts have cleared. Another option is to look at your host side and determine whether you can read larger chunks of data at a time and let the host side handle processing the data. An additional option would be to look into high-throughput streaming for FlexRIO: in that setup you write the data directly to disk on the host side and process it at a later time. I have linked an example below; the example was giving me trouble, so please let me know if you have trouble loading it. Depending on your situation these may not all be acceptable options, but you will need to ensure that you are not filling any of your FIFOs. Lastly, from what I can tell you are using a Windows OS as your host, and in that situation you have no way of controlling when your LabVIEW application gets processor time. If you were to switch to a Real-Time controller, you would be able to control when certain tasks run and assign priorities to them. Please let me know if you have further questions.
High Throughput Streaming
https://decibel.ni.com/content/docs/DOC-26619
Patrick H | National Instruments | Software Engineer
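
As a side note on the "read larger chunks" idea, here is a minimal host-side sketch in Python using the nidaqmx API (the device name, port, rate, and chunk size are assumptions for illustration only; the real application would be LabVIEW RT code, and depending on the hardware the DI sample clock may need to be supplied on an external terminal via the source parameter). The point is to let a hardware sample clock and the driver's buffer absorb the 200 kHz timing, so the software loop only has to keep up with chunk-sized reads instead of one software-timed read every 5 us.

    # Buffered, hardware-clocked DI sketch (assumed device "PXI1Slot2", port0, 200 kHz).
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, LineGrouping

    with nidaqmx.Task() as task:
        task.di_channels.add_di_chan(
            "PXI1Slot2/port0", line_grouping=LineGrouping.CHAN_FOR_ALL_LINES)
        task.timing.cfg_samp_clk_timing(
            rate=200_000,
            sample_mode=AcquisitionType.CONTINUOUS,
            samps_per_chan=200_000)              # roughly 1 s of buffer on the host side
        task.start()
        for _ in range(10):
            # 20,000 samples (100 ms) per read; process each block as a chunk.
            chunk = task.read(number_of_samples_per_channel=20_000)
            # ... process the chunk here ...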

Similar Messages

  • How can I acquire signal magnitude for every 0.2 Hz using PXI 4472

    Hello, I am using a PXI-4472 card for FFT analysis in LabVIEW 2009. I need to acquire frequency vs. magnitude for my application. Can anybody help me get the voltage magnitude for every (df) 0.2 Hz, up to a 500 Hz bandwidth?
    I can acquire the voltage magnitude for every (df) 0.5 Hz with a continuous-mode configuration of 1000 S/s sample rate and 1000 samples per channel, and the spectrum measurement Express VI with a 2000 ms delay time (df = 0.5 Hz = 1/2000 ms). With a smaller df the program terminates intermittently.
    Please give your suggestions.

    Here is a good rule of thumb:
    Frequency span = 0 Hz to the sample rate divided by 2.56.
    Lines of resolution = record (block) size divided by 2.56.
    For example, a 1 kHz sample rate and 1k data points give a 0 to 400 Hz spectrum (bins 401 to 512 are calculated but not fully trusted in the machinery world) and 400 bins, or one bin per Hz, so df = 1 Hz.
    So for this example, if you need df to be 0.2 Hz, you need five times as much data, or 5000 data points.
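    To make the arithmetic concrete, here is a quick check in Python/NumPy (just a sketch using the numbers above; df is the FFT bin spacing, i.e. the sample rate divided by the block size):

        # df = sample_rate / block_size; rfftfreq reports the same bin spacing.
        import numpy as np

        fs = 1000.0                        # 1 kHz sample rate
        for n in (1000, 5000):             # 1k points -> df = 1 Hz, 5k points -> df = 0.2 Hz
            bins = np.fft.rfftfreq(n, d=1.0 / fs)
            print(n, fs / n, bins[1] - bins[0])   # the two spacings agree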
    Preston Johnson
    Principal Sales Engineer
    Condition Monitoring Systems
    Vibration Analyst III - www.vibinst.org, www.mobiusinstitute.com
    National Instruments
    [email protected]
    www.ni.com/mcm
    www.ni.com/soundandvibration
    www.ni.com/biganalogdata
    512-683-5444

  • Multiple data acquisition using PXI-5124 digitizer

    I have a system in which I do a data acquisition of the present state of the physical system (using a PXI-5124 digitizer), give the system some change (using the LabVIEW program itself), and then do a data acquisition again corresponding to the changed state of the system.
    So I need to do two data acquisitions in one loop. How do I do this? The LabVIEW examples have some programs (attached file) that do one data acquisition per loop, but I need to do two.
    I tried to make some changes to this program, but it doesn't seem to work well.
    Thanks
    Attachments:
    PXI5124-data-acquisition.vi ‏46 KB

    From the explanation you have given about reconfiguring the digitizer, I think the example you are looking at is the one you want. The digitizer is reconfigured with new settings every iteration within that loop, so the user can change settings such as triggers and vertical range on the fly. The original state of the example showed how it was reconfiguring the digitizer; was there something else you wanted to add to that example?

  • Amount of data transferred using PXI

    Hi,
    I'm new to XI. Can anyone advise how to check the amount of data that has been transferred in the XI server?
    Regards
    CL

    If you want to monitor the time taken for data being sent or received in the PI server, use the performance monitoring tool in PI.

  • Using LabVIEW and an E-Series DAQ Card to perform relatively high speed single point acquisition in response to a changing DIO pattern.

    I am using the DIO lines on my E-Series card to drive an external multiplexer which switches 1 of 8 sets of 3 signals to channels 0, 1, and 2 on my DAQ. I need to acquire the 3 single points of data, do a little processing, then update the mux code before acquiring the next 3 points of data, and so on. I have been trying to do this using hardware-controlled loops but can only achieve a real sampling rate (time between the same set of three signals) of about 200 S/s. I am trying to achieve in excess of 800 S/s. Any ideas?

    Hi CP,
    You are doing pretty well if you are getting 200 S/s.
    I believe the only way you can get 800 S/s reliably is to go to LabVIEW Real-Time. Not for the speed, but for the determinism.
    That's my idea.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
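    For reference, the scan pattern described above (write the mux code, wait for settling, read one point from each of the three AI channels) looks roughly like the following Python/nidaqmx sketch. The device and channel names are made up for illustration, and a software-timed loop like this is exactly the kind of thing that tops out at a few hundred S/s on a desktop OS, which is why LabVIEW Real-Time helps with determinism.

        # Software-timed mux scan sketch (hypothetical names "Dev1/port0/line0:2", "Dev1/ai0:2").
        import time
        import nidaqmx

        with nidaqmx.Task() as do_task, nidaqmx.Task() as ai_task:
            do_task.do_channels.add_do_chan("Dev1/port0/line0:2")
            ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0:2")
            results = []
            for mux_code in range(8):
                do_task.write(mux_code)      # drive the external multiplexer select lines
                time.sleep(0.0005)           # allow the mux and signal to settle
                sample = ai_task.read()      # one software-timed point from each AI channel
                results.append((mux_code, sample))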

  • Best way to transfer single point data between loops on FPGA

    Hi,
        I use quite a number of loops on the FPGA and need to transfer single-point data between loops. Only the current value of the data is needed, so a buffer is not necessary. I don't want to use a target-scoped FIFO since it requires a minimum of 21 elements and I only need the current value. Is there any way other than local variables?
        Thanks for help!
        Regards,
       Tom

    Hi Godel,
    Since this discussion thread is over 3 years old, it would be better to start a new thread with your specific questions to get quicker help.
    I did a bit of research on your question and found this helpful white paper (http://www.ni.com/white-paper/7727/en/) that discusses the resources used by components in LabVIEW FPGA code. For local variables (although it depends on the hardware), it looks like they use flip-flops and LUTs.
    As for using local variables with different clock rates, I found this KnowledgeBase article that might help shed some light on your question - some issues can arise from using them with more than one clock rate (http://digital.ni.com/public.nsf/allkb/C683585460E88508862570D1006B7434)
    Hope this helps! Again, if you have follow-up questions, I would definitely recommend creating a new thread
    Xavier
    Applications Engineering Specialist
    National Instruments

  • Data Federator XI 3.0 using DB2 VARCHAR FOR BIT DATA Column?

    We have a column in a DB2 database that is defined as VARCHAR(16) FOR BIT DATA.
    We are using the suggested IBM JDBC driver, db2jcc.jar, against a DB2 OS/390 8.1.5 version database.
    The Datasource column displays a data type of NULL, indicating that DF does not understand or cannot handle this IBM data type.
    We have two issues.
    First, target tables are not able to return any columns, regardless of whether we exclude the columns defined as NULL as mentioned above. We see the 'Wait' animation for a very long time when we use the 'Target table test tool' option. Selecting to display the count only returns zero.
    We are able to fetch and view non-NULL column data when using the 'Query tool' under the Datasource pane.
    I also get the same result when using the 'My Query Tool' in Server Administrator; a selection against the sources returns data, while selecting from a target table returns no data. Also, a 'select count(*)' returns zero.
    The second issue is in mapping a relationship between two DB2 tables where the join is between two columns of the above-mentioned type (NULL).
    The error we get back when we use "Show Errors" is "The types 'NULL' (in 'S1.PLANNEDGOALID') and 'NULL' (in 'S2.PLANNDEDGOALID') are not compatible.". When reviewing the relationship, a dashed red line appears instead of a solid grey line between the two tables in the "Table relationships and pre-filters" section of our mapping pane.
    The following query returns an error via the Server Administrator Query Tool: "Types 'NULL' and 'NULL' are not compatible for operator '=' (Error code : 10248)".
    select count(*)
    from
      (select s1.CASEID, s2.PLANNEDGOALID, s2.NAME, s2.PLANNEDGRPSTTYCD
       from "/DF_CMS_ODS/sources/CMFSREPT/CMSPROD.PLANNEDGOAL" AS s1,
            "/DF_CMS_ODS/sources/CMFSREPT/CMSPROD.PLANNEDGOAL" s2
       where s1.PLANNEDGOALID = s2.PLANNEDGOALID)
    Here are the property settings in the Resource Connector Settings for jdbc.db2.zSeries that we are using.
    capabilities: isjdbc=true;orderBy=false
    driverLocation: drivers/db2jcc_license_cisuz.jar;drivers/db2jcc.jar
    jdbcClass: com.ibm.db2.jcc.DB2Driver
    sourceType: db2
    supportsCatalog: no
    urlTemplate: jdbc:db2://<hostname>[:<port>]/<databasename>
    Here are the Connection parameters as defined for the datasource in DF Designer.
    Defined resource: jdbc.db2.zSeries
    Jdbc connection URL: jdbc:db2://DB2D03:50000/CMFSREPT
    Authentication: Use a specific database logon for all Data Federator users.
    User Name: x
    Password: hidden
    Login domain: -- Choose a defined login domain --
    Supports Schema: checked
    Schema: is empty
    Prefix table names with schema name: checked
    Supports catalog: unchecked
    Prefix table names with the database name: unchecked
    Table types: TABLE and VIEW
    So, these are the two questions we need answered:
    Is this a limitation of Data Federator?
    Is there a workaround short of changing the data type in the database?

    Hi Darren,
    The VARCHAR() FOR BIT DATA type is a binary data type, and Data Federator does not support binaries. But if, in your case, it makes sense to map this column to a VARCHAR data type, you can configure the DB2 connector to view this column as a VARCHAR.
    Your column can be mapped explicitly to a data type of your choice using a property: castColumnType.
    This property can be set by updating the resource you selected when you registered your DB2 data source.
    If the resource is "jdbc.db2", then:
    1. Launch Data Federator Administrator
    2. Click on "Administration" tab
    3. Click on "Connector Settings"
    4. Select the right resource: "jdbc.db2"
    5. Click "Add a property"
    6. Select "castColumnType"
    7. Set its value to: VARCHAR() FOR BIT DATA=VARCHAR
    8. Click on Ok
    You should see this column as a VARCHAR.
    Regards,
    Mokrane
    PS: For the target table issue, we have forwarded your mail to the Data Federator Designer team.

  • When the apple review team review our app,they point out that our  app uses a background mode but does not include functionality that requires that mode to run persistently.but in fact,when the app in background ,the app need data update to make the

    When the Apple review team reviewed our app, they pointed out that our app uses a background mode but does not include functionality that requires that mode to run persistently. But in fact, when the app is in the background, it needs data updates to make the trajectory-replay feature work. We have already added functionality for when the app is in background mode, and we have pointed this out to them by email, but they still have questions about the background mode. We are confused. Can anyone help me? I still don't know why the review team can't find the data updates when the app is in the background, how I should modify the app, or what the real problem is that they are referring to. Did I misunderstand them?
    Below is the content of the review team's email:
    We found that your app uses a background mode but does not include functionality that requires that mode to run persistently. This behavior is not in compliance with the App Store Review Guidelines.
    We noticed your app declares support for location in the UIBackgroundModes key in your Info.plist but does not include features that require persistent location.
    It would be appropriate to add features that require persistent use of real-time location updates while the app is in the background or remove the "location" setting from the UIBackgroundModes key. If your application does not require persistent, real-time location updates, we recommend using the significant-change location service or the region monitoring location service.
    For more information on these options, please see the "Starting the Significant-Change Location Service" and "Monitoring Shape-Based Regions" sections in the Location Awareness Programming Guide.
    If you choose to add features that use the Location Background Mode, please include the following battery use disclaimer in your Application Description:
    "Continued use of GPS running in the background can dramatically decrease battery life."
    Additionally, at your earliest opportunity, please review the following question/s and provide as detailed information as you can in response. The more information you can provide upfront, the sooner we can complete your review.
    We are unable to access the app in use in "http://www.wayding.com/waydingweb/article/12/139". Please provide us a valid demo video to show your app in use.
    For discrete code-level questions, you may wish to consult with Apple Developer Technical Support. When the DTS engineer follows up with you, please be ready to provide:
    - complete details of your rejection issue(s)
    - screenshots
    - steps to reproduce the issue(s)
    - symbolicated crash logs - if your issue results in a crash log
    If you have difficulty reproducing a reported issue, please try testing the workflow as described in Technical Q&A QA1764: How to reproduce a crash or bug that only App Review or users are seeing (https://developer.apple.com/library/ios/qa/qa1764/).

    Unfortunately, these forums here are all user to user; you might try the developer forums or get in touch with the team that you are working with.

  • How to extract 64 bit data from imaq image using IMAQ Extract VI

    I have LV 8.5.1, Vision 8.5 and need to extract 64 bit data from a 64 bit image and I get the "invalid image" error while using the IMAQ Extract VI.  What version of Vision do I need to allow me to do this? 
    Currently, the work-around I have is...
    1) convert the image to 32-bit
    2) use the ROI tools to get the rectangle data I need
    3) then go back to the original image and convert it to a 64-bit array
    4) take the rectangle data to extract the data needed out of the 64-bit array data.
    Klunky, but it works. I would think that the IMAQ Extract tool should allow me to extract the 64-bit data, but it doesn't; it forces me to 32-bit.
    Suggestions?

    steve05ram360 wrote:
    awesome, that does work. 
    The attached DLL is slightly corrected and should now also work "in place" when Dst is not connected, like the original IMAQ function. Hopefully it works properly now. By the way, all IMAQ types are supported, not only U64.
    Andrey.
    Attachments:
    ADVExtractDLL.zip ‏9 KB

  • PL/SQL block reading table data from a single point in time

    I am trying to figure out whether several cursors within a PL/SQL block are executed as of a single point in time, and thus do not see any updates to tables made by other processes or procedures running at the same time.
    The reason I am asking is that I have a block of code doing a data extraction, with some initial sanity checks before the code executes. However, if some other procedure modified the data in between, then the sanity check would be invalid. So I am basically trying to figure out whether there is some read consistency within a PL/SQL block that prevents updates from other processes from being seen.
    Anyone have an idea?
    BR,
    Cenk

    "Transaction-Level Read Consistency
    Oracle also offers the option of enforcing transaction-level read consistency. When a transaction runs in serializable mode, all data accesses reflect the state of the database as of the time the transaction began. *This means that the data seen by all queries within the same transaction is consistent with respect to a single point in time, except that queries made by a serializable transaction do see changes made by the transaction itself*. Transaction-level read consistency produces repeatable reads and does not expose a query to phantoms."
    http://www.oracle.com/pls/db102/search?remark=quick_search&word=read+consistency&tab_id=&format=ranked
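    If it helps, the serializable mode described above is requested at the start of the transaction. A minimal sketch with the python-oracledb driver is shown below (connection details and table names are placeholders); the same SET TRANSACTION statement can also be issued from PL/SQL or SQL*Plus.

        # Run the sanity checks and the extraction inside one serializable
        # transaction so every query sees the same point in time.
        import oracledb

        conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
        cur = conn.cursor()
        cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
        cur.execute("SELECT COUNT(*) FROM my_table")                   # sanity check
        (check_count,) = cur.fetchone()
        cur.execute("SELECT * FROM my_table WHERE status = 'READY'")   # extraction
        rows = cur.fetchall()
        conn.commit()                                                  # ends the transaction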

  • How can I find an example about acquiring waveform data by software trigger using PXI 4070 DMM?

    Could anybody provide an example, or something similar, about acquiring waveform data with a software trigger using the PXI-4070 DMM?
    Thanks!

    Hi there,
    From the NI main page, go to the Developer Zone (http://www.ni.com/devzone/dev_exchange/ex_search.htm), select "LabVIEW" and "Digital Multimeter (DMM)", and search for "4070". Then you'll find some examples.
    Best regards
    chris
    CL(A)Dly bending G-Force with LabVIEW
    famous last words: "oh my god, it is full of stars!"

  • Data Acquisition - using local variables to write data to a file

    Hello,
    I am running a data acquisition VI (currently in LabVIEW 7.1, but soon to be updated to 8.2) that collects ~100 parameters of data from several sources contained in a while loop. The current configuration (which I did not write) uses very few subVIs and writes to ~100 local variables to store each parameter. It then reads all the local variables, builds an array of all the strings, converts them to a spreadsheet string, and uses the Write Characters To File function to append to a data file. I am trying to clean things up and have come up with subVIs to collect the data from the following sources:
    8 serial port sources collecting btwn 8 and 20 parameters each
    ~15 thermocouple readings
    ~10 analog inputs
    ~20 parameters read off an ARINC 429 bus.
    I have come up with a subVI to read each of the sources and have placed the subVIs in the while loop. Each subVI outputs the data that it collects in array or cluster form. I was wondering how best to write each parameter to a CSV file at between 1 and 10 Hz. Should I write each subVI output to a local variable and then read them off as was done before (the difference being that I have reduced the number of local variables to ~10 vs. >100)?
    I should add that precise timing is not that important, so if all the subVIs are not collecting simultaneously (which I understand they won't be), it does not really matter.
    Thanks.

    Hi jilla,
    jilla wrote:
    What I think that you are saying is to turn the outputs of the 4 subVIs into inputs of a 5th subVI that writes to the data file. Correct?
    Yes. It may sound like a fine point, but I believe it's better to create a VI specifically for formatting data - in your example, 4 arrays in, a single string out. Then write the string to file as a separate operation. GUI-displayed data can go through a similar transformation: the four arrays wired to a subVI which builds output structures specifically for display. It's a beginner's mistake to put lots of individual controls and indicators on the screen when groups of them are naturally related (in an object-oriented sense). Use clusters to group related controls - this will keep the diagram much cleaner.
    One more question: at what point (either # of data points or frequency of data collection) does it become necessary to use queues? Thanks.
    Well, there's not really a clearly definable "point". I'd say if your update rate climbs above 100 Hz, or you witness poor program or system performance, then it's time. The scenario you've described is a fairly simple acquire/display-and-log loop - and simple is good. Then again, people can't see or react to updates faster than about 10 Hz, so it doesn't make sense to sacrifice performance for faster display updates if performance becomes an issue.
    Re: queues: queues are sometimes used to buffer data that's "produced" in one place and "consumed" in another.
    Here, if/when logging data, you're logging with every DAQ read. I wouldn't recommend using a queue to transport data from a "DAQ loop" to a "logging loop" - those functions can be in the same loop. Should/could a queue be used to get data from a "DAQ loop" to update the GUI at a lower frequency? Sure, but a notifier might be a better choice. Further, in the (simple?) program you've described, you might use a case structure (True/False) to only update front-panel indicators every X iterations - a simple solution that doesn't require queues or notifiers.
    Cheers!
    "Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)

  • "Read Single Point" Keithley 2400 Problems

    Hello!
    I am recently having some problems with the "Read Single Point Measurements" function in LabVIEW 2012. I have already successfully used my Keithley 2400 in other programs to sweep the voltage, but now I am trying to use it also to read the current across my sample.
    I wrote a very simple program, which resembles the one you can find among the LabVIEW examples (see 1st attachment). The only differences are a sweep subVI and a "for" loop. My problem occurs at the "Read" Keithley function. On the Keithley's screen I get the following errors:
    - 113: undefined header;
    - 230: data corrupt or stale;
    - 420: query unterminated.
    In the block diagram, when the system gets to the "read" function, error -1074000000 shows up.
    I have found a lot of posts on this topic, but unfortunately I could not find a solution that works for me. I also tried the 2nd attached program, to check the communication with the instrument, but errors still occur. In Measurement & Automation Explorer (MAX), it says that the instrument is working properly.
    Do you have any ideas?
    Thanks
    Attachments:
    Keithley 24XX Sweep and Acquire Measurements.vi ‏26 KB
    Basic Serial Write and Read (1).vi ‏26 KB
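    A quick way to rule out basic communication problems and see what the instrument itself is reporting is to query its identification string and error queue. A minimal PyVISA sketch is below (the resource name is a placeholder; *IDN?, *RST, and :SYST:ERR? are standard commands the 2400 understands).

        # Quick communication check for a Keithley 2400 (resource name is a placeholder).
        import pyvisa

        rm = pyvisa.ResourceManager()
        k2400 = rm.open_resource("GPIB0::24::INSTR")
        k2400.timeout = 5000                      # ms

        print(k2400.query("*IDN?"))               # identification string
        k2400.write("*RST")                       # return to a known state
        print(k2400.query(":SYST:ERR?"))          # oldest entry in the error queue
        k2400.close()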

    The following video demonstrates how to check the firmware revision: http://www.keithley.nl/centralized_display?mn=2400&assetid=55934.
    In order to update the firmware you will need to download a flash program and the firmware file. After obtaining the flash program and the firmware file from Keithley application support staff, unzip the three files into a folder of your choice and run the "setup.exe" program. Follow the instructions to install the program. When the installation is complete, launch the program from your Windows Start menu under Programs/Keithley Instruments. The application support staff will also provide the file for the latest firmware revision. Put that file in a folder where you can find it and then run the Flash Wizard32 program. The program will autodetect the instrument and ask you to specify the firmware file.
    This link contains the flash program: http://www.keithley.com/base_download?dassetid=52609
    See attachment for the firmware revision.
    Attachments:
    2400c30.zip ‏358 KB

  • How to efficiently create a single waveform based on data from two other waveforms?

    I have a 1-D array of waveform with size = 4 that contain "raw" potentiometer voltage data.  I need to manipulate waveform data from index 0 & 1 using the formula shown below to derive a single waveform of angle data.   I need to do the same for index 2 & 3 as this is a redundant circuit.   I was hoping that the formula node can work on entire arrays and although it can take an array as input, it requires me to index the array in the formula so it becomes a scalar value.  
    Since the formula is relatively complex, I'd like to keep it in text form but have it automatically work on each point of the two input arrays.  This math is done inline with pulling data out of a DAQ and so I need it to be as efficient as possible so that I don't spend too much time on it and potentially overflow the DAQ buffer. 
    The naive solution would be to wrap the formula nodes with for loops, but I don't know if this is an efficient way to do it. I would appreciate any suggestions on how best to tackle this.
    Thanks!
    Solved!
    Go to Solution.

    Here's how I would do it, with no formula nodes or loops required:
    If you want to use the formula node, then you could run a loop inside each formula node while you index through the arrays. The performance difference between formula nodes and the graphical approach should be insignificant. Note that my approach assumes that the array sizes are the same. You could also create a subVI to contain the math so you don't have to maintain two copies of the same piece of code.
    Chris M
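    As a rough text-form illustration of the loop-free approach (the real angle formula is in the attachment, so the expression below is only a placeholder), the same idea in Python/NumPy is to write the formula once and let it apply elementwise to the two whole arrays:

        # Elementwise math on two whole waveforms, no explicit loop.
        # The formula is a placeholder; substitute the real angle equation.
        import numpy as np

        v0 = np.array([1.02, 1.10, 1.25, 1.31])   # potentiometer channel 0 (example data)
        v1 = np.array([3.98, 3.90, 3.75, 3.69])   # potentiometer channel 1 (example data)

        angle = np.degrees(np.arctan2(v1, v0))    # placeholder "angle" formula
        print(angle)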

  • Timing for Data Acquisition Using Notifiers

    I'm trying to use notifiers to transfer data from an acquisition (master) loop to the slave loop. I want data to be transferred for analysis only when the VI is in a certain state, not in all states. That's why I prefer to use notifiers instead of queues (I want all data collected during the other states to be disregarded). I have attached a simplified version of what I'm trying to do.
    The master loop generates a data point every second. The slave loop is in a "delay" state for 5 seconds and then in "acquire" state for 3 seconds. Given this architecture I would expect no data for 5 seconds and then 3 data points to be plotted during "acquire" state because only one data point is generated per second. But for some reason I get 5 or 6 data points during every acquire cycle. I haven't figured out why I get 5-6 data points instead of 3. It probably has to do with the timing functions I'm using.  
    Thanks!
    -Arnie 
    Attachments:
    Notifier Data Transfer Template.vi ‏62 KB

    Here is an example of a race condition without the use of local or global variables. What is happening is that the notifier is already queued up with a value before the Elapsed Time timer has even started, so one extra value will be in the output array. Also, depending on how the parallel loops perform (which goes first), there may be an extra value queued up before the Elapsed Time event can fire. Typically, when I ran it, I got an array size of 6 at first, and then it went down to 5. Clearly the architecture is not right for what you want to do. Instead of depending on timers and parallel-loop timing, you could receive all the values queued and just discard the ones you don't want.
    - tbob
    Inventor of the WORM Global
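    tbob's suggestion (receive every value and discard what falls outside the acquire window) can be sketched in text form like the Python snippet below. The queue stands in for the LabVIEW queue/notifier and the timings mirror the 5 s delay / 3 s acquire description, so this is only an illustration of the idea, not the LabVIEW fix itself.

        # Sketch of "receive all values, discard the ones you don't want".
        import queue
        import threading
        import time

        data = queue.Queue()
        start = time.monotonic()

        def producer():                          # one point per second, as in the example
            for i in range(10):
                data.put((time.monotonic() - start, i))
                time.sleep(1.0)

        threading.Thread(target=producer, daemon=True).start()

        kept = []
        while time.monotonic() - start < 9.0:
            t, value = data.get()
            if 5.0 <= t < 8.0:                   # keep only the 3 s acquire window
                kept.append(value)
        print(kept)                              # roughly 3 points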
