Is high-speed DAQ done on a CompactRIO comparable, in terms of performance specifications, to that done on a PCI-6071E?

I would like to perform continuous data acquisition, which I currently do with a PCI-6071E, on CompactRIO. What are the similarities and differences in terms of DAQ capabilities and performance (resolution, settling times, ground-reference issues, maximum input voltage range, channel gain, pre-/post-gain errors, system noise, input impedance, etc.)?

Any comparison will depend on which CompactRIO module you choose; currently, the NI 9215 is the only analog-input module available.
The manuals for the 6071E and the 9215 give all the detailed specs and their differences, but here is a rough overview.
6071 manual
64 single-ended / 32 differential multiplexed input channels at an aggregate sampling rate of ~1.25 MS/s, 12-bit resolution, with a variety of gain settings to increase resolution over smaller voltage ranges.
9215 (one module) manual
4 differential, simultaneously sampled input channels at up to 100 kS/s per channel, with 16-bit resolution across a ±10 V range and no gain settings. You can place up to 8 modules in one CompactRIO chassis to increase the channel count.
One big difference will be how you process the data. With the 6071E, data is transferred to the host processor using DMA. With CompactRIO you can process the data directly on the FPGA, or you can transfer it back to the RT processor on the CompactRIO controller and process it in LabVIEW RT. The bandwidth of the data transfer from the FPGA to the RT processor is lower than a DMA transfer from the 6071E, so you need to make sure that you can transfer the data fast enough for your application.
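As a rough feasibility check for that FPGA-to-RT transfer, you can compare the data rate the acquisition produces against the link bandwidth. The numbers in this sketch are illustrative assumptions, not specs for any particular controller:

```python
# Back-of-envelope check: can a given link sustain a given acquisition?
# All figures below are illustrative assumptions, not device specs.

def required_throughput_bytes(channels: int, rate_hz: int, bytes_per_sample: int) -> int:
    """Bytes per second the acquisition produces."""
    return channels * rate_hz * bytes_per_sample

def link_can_sustain(channels, rate_hz, bytes_per_sample, link_bytes_per_s) -> bool:
    return required_throughput_bytes(channels, rate_hz, bytes_per_sample) <= link_bytes_per_s

# Example: one 9215-style module, 4 channels at 100 kS/s, 2 bytes/sample,
# over a hypothetical 1 MB/s FPGA-to-RT link
print(required_throughput_bytes(4, 100_000, 2))      # 800000 B/s
print(link_can_sustain(4, 100_000, 2, 1_000_000))    # True: 0.8 MB/s fits in 1 MB/s
```

Scaling the same arithmetic to 8 modules shows where a slower link becomes the bottleneck.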
Christian L
Christian Loew, CLA
Principal Systems Engineer, National Instruments

Similar Messages

  • Pushing the limits of continuous high-speed DAQ + processing, with PXI-6115 + PXI-8360

    Hi all,
    I'm trying to do continuous high-speed data acquisition + processing. I currently have:
    Chassis: PXI-1042Q
    AI card: PXI-6115 (x2)
    Link: PXI-8360
    PC CPU: Intel Xeon W3503 (dual-core, 2.40 GHz)
    RAM: 4 GB
    The idea is to continuously grab 8 channels' worth of analog inputs, scan for "events", and if found, log the events to disk.
    My client would like to use the maximum sample rate of 10 MHz, but I found that above 5 MHz, DAQmx Read.vi can no longer keep up (e.g. at 6 MHz, it takes 110 ms to read 100 ms' worth of data).
    I'm thinking of getting beefier components, but I'm not sure where the bottleneck is. Here are some thoughts:
    1) The MXI Express link
    The PXI-8360 is rated for a sustained throughput of 100 MB/s.
    I'm not sure how big the data is. PXI-6115's ADC is 12-bit. Does that mean each datum transferred through the MXI cable is 1.5 bytes? Is the data padded? Or is it 8 bytes per datum, because Dbls are being transferred?
    2) The CPU
    In Resource Monitor, I notice that the CPU usage is 0% at 60 kHz, 8% at 61 kHz, and 50% at 70 kHz (meaning that one core is maxed out). I'm surprised that it keeps up all the way until 5 MHz though.
    What causes this high CPU usage? Is it due to the conversion of the data into 1D array of waveforms?
    3) Something else?
    Have I missed something completely? Is what I'm describing even possible?
    Thanks in advance for any advice!

    Hello,
    Each sample takes up 2 bytes (12 data bits + 4 padding bits), so the PXI-PCI 8360 is only able to transfer a maximum of 50 MS/s in this case.
    Also, because the PXI bus is shared among all of the cards, two PXI 6115 cards will be trying to transfer 80MS/s or 160MB/s.
    Finally, since the PXI-6115s are DAQ-family cards, their specifications aren't really written for continuous data acquisition at the upper end of their sample rate. I'd be interested to know whether you can reach 10 MS/s with only one card operating at a time.
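Jim's arithmetic above (12-bit samples padded to 2 bytes, shared link) can be sketched as a quick calculation:

```python
# Throughput math from the reply above: 12-bit samples are padded to 16 bits
# (2 bytes each), so a 100 MB/s MXI link caps out at 50 MS/s aggregate.

BYTES_PER_SAMPLE = 2   # 12 data bits + 4 padding bits

def max_aggregate_rate(link_bytes_per_s: float) -> float:
    """Maximum samples/second a link can carry."""
    return link_bytes_per_s / BYTES_PER_SAMPLE

def required_link_bytes(cards: int, rate_per_card: float) -> float:
    """Bytes/second needed for several cards sharing the bus."""
    return cards * rate_per_card * BYTES_PER_SAMPLE

print(max_aggregate_rate(100e6))    # 50 MS/s over a 100 MB/s link
print(required_link_bytes(2, 40e6)) # two 6115s at 40 MS/s each -> 160 MB/s
```

This is why two cards at full rate (160 MB/s) cannot fit through a 100 MB/s link.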
    -Jim B
    Applications Engineer, National Instruments
    CLD, CTD

  • I have one application that requires both low- and high-speed acquisition. I want to change the sample rate while running. BUT... I have an E Series device

    I am writing control software for a process that is usually dull and requires only a 10 Hz acquisition rate. At particular times during the sequence, however, we are interested in looking at a couple of channels at 1000 Hz. My approach so far is to configure my buffered DAQ to run at the higher rate at all times. When we are in the 'high-speed DAQ' mode, the program logs every point to disk. In the 'low-speed' mode, I am picking off every nth (in this case, 10th) point to log to disk. At all times, I update my GUI indicators on the front panel at a maximum of 4 times per second (I find that anything faster results in an uncomfortable display), so I fill up a FIFO with data in my acquisition/logging loop and read the FIFO in the display loop. The data in my GUI display can be up to 250 milliseconds off, but I find this acceptable. As a side note, I need buffered DAQ with hardware timing, as software timing results in lost data at 1000 Hz.
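The pick-every-nth-point scheme described above amounts to simple decimation; a minimal sketch (using this thread's factor of 10, and with the same caveat the poster later raises, i.e., no filtering, so short events between kept points are missed):

```python
# Keep every nth point of a hardware-timed buffer, as in the approach above.
# Plain decimation: no anti-alias filtering, and events that fall between
# the kept points are silently dropped.

def decimate(samples, n=10):
    """Return every n-th sample, starting with the first."""
    return samples[::n]

buf = list(range(100))     # stand-in for one buffered read at 1000 Hz
print(decimate(buf)[:3])   # [0, 10, 20]
```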
    This all works fine and dandy, but I am convinced that it is not the most elegant solution in the world. Has anyone developed a buffered DAQ loop where the scan rate can be adjusted during operation? I would like to change the rate of the E Series card rather than relying on down-sampling as I am now doing.
    The reason for my concern is that at the moment I am simulating my AI using MAX, and when running the down-sampling routine I consistently miss a particular event in the simulated data, because the event in question always occurs at the same 'time', and I always miss it. Granted, while it is unlikely that my measured signal and my acquisition would be perfectly synchronized in the real world, this particular situation points out the weakness in my approach.
    More than anything, I am looking for ideas from the community to see how other people have solved similar problems, and to have you guys either tear apart my approach or tell me it is 'ok'. What do you think?
    Wes Ramm, Cyth UK
    CLD, CPLI

    Adding to Alan's answer:
    One of the problems that comes with these tricks for variable-rate acquisition is being able to match up sample data with the time that it was sampled. 
    If you weren't using either of E-series board's counters, there is a nifty solution to this!  You'll be using 1 of the counters to generate the variable-rate sampling clock.  You can then use the 2nd counter to perform a buffered period measurement on the output of the 1st counter.  This gives you a hw-timed measurement of every sampling interval.  You would need to keep track of a cumulative sum of these periods to generate a hw-accurate timestamp value for each sample.
    Note:  the very first buffered period measurement is the time from starting the 2nd counter until the first active edge from the 1st.  For your app, you should ignore it.
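Kevin's cumulative-sum idea can be sketched in a few lines (the period values below are made up; the first measurement is dropped, as he notes):

```python
# Build hardware-accurate timestamps from buffered period measurements,
# as described above: drop the first (start-to-first-edge) interval,
# then take a running sum of the remaining periods.

from itertools import accumulate

def timestamps_from_periods(periods_us):
    """periods_us[0] is the bogus start-to-first-edge interval; skip it."""
    return list(accumulate(periods_us[1:]))

periods = [12300, 1000, 1100, 900]        # hypothetical intervals (microseconds)
print(timestamps_from_periods(periods))   # [1000, 2100, 3000]
```

Each output value is the hardware-timed offset of one sample from the first active edge.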
    -Kevin P.

  • Airport Express 802.11n and high speeds *impossible*

    Hi there,
    well, I have been playing around for the last 24-48 hours with an AirPort Express 802.11n with the latest firmware on it, trying to achieve high-speed Wi-Fi (~300 Mbps, as 802.11n allows). The following hardware was used during these tests:
    Airport Express Model: A1264 Firmware: 7.4.2
    1) Macbook Pro AirPort Extreme (0x14E4, 0x88)|Broadcom BCM43xx 1.0 (5.10.91.26)
    and an
    2) 24" iMac - AirPort Extreme (0x14E4, 0x8E)|Broadcom BCM43xx 1.0 (5.10.91.26)
    Goal: Achieve to establish a high speed wireless-n infrastructure network.
    Comparing to: Ad Hoc Network Wireless-N iMac to Macbook Pro
    Basic Ad Hoc Network Setup:
    Macbook Pro:
    PHY Mode: 802.11n
    Channel: 11 (2.4Ghz)
    Security: none
    RSSI: -54
    Transmit Rate: 130
    MCS Index: 15
    iMac 9.1:
    Channel: 11 (2.4Ghz)
    Security: none
    RSSI: -48
    Transmit Rate: 1117
    MCS Index: 14
    Testing: copy 1GB file via Finder Desktop to Desktop
    Result: achieved an average transfer rate of 9.7 MB/s
    Infrastructure Tests:
    1st Setup: identical setup with airport express
    Configuration: Airport Express
    Radio Mode: 802.11n only (2.4 Ghz)
    Channel: 11
    Security: none
    Macbook Pro:
    PHY Mode: 802.11n
    Channel: 11 (2.4Ghz)
    Security: none
    RSSI: -36
    Transmit Rate: 130
    MCS Index: 15
    iMac 9.1:
    Channel: 11 (2.4Ghz)
    Security: none
    RSSI: -38
    Transmit Rate: 130
    MCS Index: 15
    Testing: copy 1GB file via Finder Desktop to Desktop
    Result: achieved an average transfer rate of 3.2 MB/s
    2nd Setup: airport express
    Configuration: Airport Express
    Radio Mode: 802.11n only (2.4 Ghz)
    Channel: 11
    Security: WPA2
    Macbook Pro:
    PHY Mode: 802.11n
    Channel: 11 (2.4Ghz)
    Security: WPA2 Personal
    RSSI: -34
    Transmit Rate: 130
    MCS Index: 15
    iMac 9.1:
    Channel: 11 (2.4Ghz)
    Security: WPA2 Personal
    RSSI: -37
    Transmit Rate: 145
    MCS Index: 15
    Testing: copy 1GB file via Finder Desktop to Desktop
    Result: achieved an average transfer rate of 3.0 MB/s
    3rd Setup: airport express
    Configuration: Airport Express
    Radio Mode: 802.11n (802.11b/g compatible)
    Channel: 11
    Security: WPA2
    Macbook Pro:
    PHY Mode: 802.11n
    Channel: 11 (2.4Ghz)
    Security: WPA2 Personal
    RSSI: -33
    Transmit Rate: 130
    MCS Index: 15
    iMac 9.1:
    Channel: 11 (2.4Ghz)
    Security: WPA2 Personal
    RSSI: -39
    Transmit Rate: 145
    MCS Index: 15
    Testing: copy 1GB file via Finder Desktop to Desktop
    Result: achieved an average transfer rate of 2.9 MB/s
    4th Setup: airport express
    Configuration: Airport Express
    Radio Mode: 802.11n (802.11a compatible)
    Channel: Automatic
    Security: WPA2
    Macbook Pro:
    PHY Mode: 802.11n
    Channel: 36 (5Ghz)
    Security: WPA2 Personal
    RSSI: -48
    Transmit Rate: 270
    MCS Index: 15
    iMac 9.1:
    Channel: 36 (5Ghz)
    Security: WPA2 Personal
    RSSI: -46
    Transmit Rate: 300
    MCS Index: 15
    Testing: copy 1GB file via Finder Desktop to Desktop
    Result: achieved an average transfer rate of 5.5 MB/s
    Enabling/disabling wide channels doesn't change the average transfer rate significantly.
    These are just a few examples of all the tests I have done; I have tried every possible configuration and took screenshots of almost 80% of the tests (if someone is interested in reviewing this).
    I couldn't find a single configuration in infrastructure mode that is as fast as the ad-hoc connection, so I assume either Apple's setup for ad hoc and its use of channels is different from infrastructure, or the AirPort Express simply is unable to perform the same way as the Wi-Fi cards inside my Macs (which is my conclusion unless someone proves me wrong).
    Keep in mind, my findings show a 100% faster network in ad-hoc mode on a 2.4 GHz connection.
    This is quite unsatisfying, as I bought the AirPort Express on Feb 9th, 2010, and it was the latest available model according to the local Genius Bar.
    So, to quote Apple: "The AirPort Express Base Station is based on an IEEE 802.11n draft specification and is compatible with IEEE 802.11a, IEEE 802.11b, and IEEE 802.11g." Yes, it might be compatible, but it doesn't let you fully operate at those specifications. Or has anyone been able to achieve this somehow?

    why didn't I find this any earlier ...
    http://www.applesource.com.au/mac-accessories/soa/Apple-AirPort-Express-Base-Station-802-11n-/0,2000451112,339287629,00.htm

  • Macbook running slow.  Fan runs at high speed at times.

    I have a late-2008 white MacBook, and after installing an upgrade to OS X 10.6.6 it has been running slow in many applications, and the fan seems to run at high speed at times. I'm not sure whether the change in performance is related to the upgrade or just coincidental. I currently have 2 GB of memory installed.

    Do this reset perhaps twice:
    SMC RESET
    http://support.apple.com/kb/HT3964
    Shut down the computer.
    Unplug the computer's power cord and ALL peripherals.
    Wait 15 seconds.
    Attach the computer's power cable.
    Wait another 5 seconds and press the power button to turn on the computer.
    The 5 second timing is important to a successful reset.

  • DAQ Assistant Continuous High Speed Display

    I am using the DAQ Assistant to read in voltage data and display it as a scrolling chart. When I set it to take "N Samples" it works well, but it won't display the data until all N samples have been collected. When I switch it to continuous mode and set it to take a 1-sample buffer at the same rate (20 kHz), it actually samples much slower. Any suggestions on how to get a continuous chart that still shows the data at full speed?
    Thanks

    The DAQ assistant, while it is great for getting a quick and dirty DAQ task up and running, isn't really designed for high speed data acquisition with precise control.  You will likely need to use the lower level DAQmx functions.  Start a task, acquire chunks of data in a loop, end the task.  You may need to pass the data off to a parallel loop for data processing, display, or logging using a producer/consumer design architecture.  Take a look at the DAQmx examples in the example finder.
    If you right click on your DAQ assistant and select Open Front Panel, it will turn the Express VI into a subVI that you can open and see how the underlying DAQmx code works.
    It may even be a matter of selecting continuous samples in your DAQ Assistant, with a small number of samples to read, inside a loop that iterates enough times until you have collected your larger number of samples.
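The producer/consumer split suggested above looks roughly like this, with Python threads standing in for the two LabVIEW loops and a simulated acquisition (in LabVIEW, the producer would be a DAQmx Read in a While loop feeding a queue):

```python
# Producer/consumer pattern from the advice above: one loop acquires
# fixed-size chunks, a parallel loop processes/displays them.
# The "acquisition" here is simulated stand-in data.

import queue
import threading

q = queue.Queue()

def producer(n_chunks=5, chunk_size=4):
    for i in range(n_chunks):
        chunk = [i] * chunk_size      # stand-in for one DAQmx Read of `chunk_size` samples
        q.put(chunk)
    q.put(None)                       # sentinel: acquisition finished

results = []
def consumer():
    while True:
        chunk = q.get()
        if chunk is None:
            break
        results.append(sum(chunk))    # stand-in for processing/logging/display

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(results)   # [0, 4, 8, 12, 16]
```

The point of the split is that a slow display or disk write in the consumer never stalls the acquisition loop; the queue absorbs the jitter.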

  • Why does error 200279 occur at high speeds only?

    I am using a VI very much like the one attached here, and as my motor speeds up and the period value decreases, the VI fails and error 200279 shows up, as displayed in the two attached JPG images. This VI is reading the period value of an encoder by rising edges. The error does not show up at low speeds, only high speeds. The hardware is wired through USB ports on the PC. Using LabVIEW 2012 and Windows 7.
    Do I need to specify the samples per channel for the READ in the case structure to eliminate this error? The error only occurs when the period gets quite short, e.g., 9 ms or so. At higher period values (slower motor speeds) the error does not appear. I am using the counters built into the cDAQ-9174 chassis and the NI 9401 module to read the period values of my encoder. What is happening at high speeds to cause this error? I thought the setting on the DAQ Timing VI required that 16 periods are read every iteration, so why is it saying that it is trying to read samples that are no longer available?
    Also, is the "Append Array" building up a large array that is being carried in the SHIFT REGISTER and causing things to slow down? There are a huge number of periods occurring with an encoder at 120 ticks/revolution. Should I try to keep this array truncated to reduce the size of the data being handled each iteration? Can this large array be causing the 200279 error?
    Thanks,
    Dave
    Attachments:
    forum JUly 18.vi ‏26 KB
    error July 18.jpg ‏45 KB
    error July 18 part 2.jpg ‏49 KB

    I think there's a combination of things that could be contributing. I don't have time for a full explanation right now; here are some quick mods I did to the code you posted. The essential changes are:
    - made separate loop for collecting data into a big array.  (Maybe you can consider dumping to file instead of growing an array in memory?)
    - used a queue to transfer data between loops
    - increased the buffer size dramatically while still calculating average of only the most recent periods
    - reduced the acquisition loop rate -- expect to retrieve more data points per iteration
    There are a couple other things I'd probably add or change with more time, but this minimal set of mods should help some.
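The "big array in a shift register" problem can be avoided with a bounded buffer that keeps only the most recent periods for averaging; a Python sketch of the idea (the window size of 16 matches the read size mentioned in the thread):

```python
# Keep only the most recent periods for averaging instead of growing an
# unbounded array, mirroring the mods described above.

from collections import deque

window = deque(maxlen=16)   # holds at most the 16 most recent periods

def add_and_average(new_periods):
    """Append a fresh batch of periods and return the windowed average."""
    window.extend(new_periods)            # old values fall off the front
    return sum(window) / len(window)

print(add_and_average([9.0] * 16))   # 9.0
print(add_and_average([5.0] * 8))    # 8 old + 8 new periods -> 7.0
```

Memory use stays constant no matter how long the acquisition runs, which also keeps per-iteration processing time flat.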
    -Kevin P

  • Help! data manipulation for high speed streaming to disk from multiple boards and multiple channels

    I am using LabVIEW 7.1 and have been trying to capture data from 12 channels, simultaneously sampled at 2 MS/s each, streaming to disk for up to a minute or more. The hardware I am using is 2 x PXI-6133 S Series boards with a MXI-4 link to a Pentium D 2.8 GHz machine with 2 GB RAM. I have 2 SATA drives set up in a RAID 0 configuration, which should give me a hard-disk write speed faster than or equal to the MXI-4 transfer speed.
    I have first started off by using the example code "multi device sync - analog input- cont acquisition" which has enabled me to sync the two boards and sample at the required speed. 
    To stream the data to disk, I first merged the data from each board together to save it to one file. I tried using the storage VIs, but I end up with a DAQmx read error (trying to read data that is no longer available). I have played around with the read data size to the point that I either get an insufficient-memory error or the "trying to read data that is no longer available" error. I have also tried using the file I/O blocks with some success, and found that I can stream to disk only if I configure the DAQmx Read block to output the data in "raw 1D I16" format and wire it into the file-write block. In doing this, I noticed that with multiple channels in one DAQmx read task, I get all the channels in one 1D array rather than a 2D array organized by channel. This makes it messy to read afterwards, and I don't want to write another VI to separate the channels, because of the high chance of mixing the data up if I change the number of channels on a board.
    Is there a cleaner way of streaming this data to disk and keeping the channel data separated from each other?, and/or is there a better way to capture and handle the data I need? 
    I have attached the VI that I have gotten to consistently work streaming to disk using the raw 1D I16 format.
    Thanks in advance to anyone who can help.
    Attachments:
    multidevicesync_analoginput_streamtodisk.vi ‏197 KB

    Hi,
    I can suggest the following:
    Refer to the example VI called "High Speed Data Logger.vi", in conjunction with "High Speed Data Logger Reader.vi", in the LabVIEW examples. Although the logger may be in Traditional DAQ format, it can quite easily be converted to DAQmx to store data in binary (I32) format. I have used this for many of my applications and have found that the retrieved data does not have any "mess-ups".
    Why not keep a separate file for each card? That way, you do not load your application with extra processing; you only have to acquire and save. After saving in binary format, you can retrieve the data offline, convert it to ASCII format, and merge the data files of the various cards into one consolidated ASCII file.
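For the asker's interleaved raw 1D I16 issue, de-interleaving after the fact is straightforward; a minimal sketch, assuming channel-interleaved ordering (ch0, ch1, ..., chN-1, ch0, ...; check your device's raw data layout) and an assumed channel count:

```python
# Split a 1D channel-interleaved buffer into one list per channel.
# The 6-channel count is an assumption for illustration.

def deinterleave(raw, n_channels):
    """raw: flat interleaved samples -> list of per-channel sample lists."""
    return [raw[i::n_channels] for i in range(n_channels)]

raw = list(range(12))       # 2 scans x 6 channels, interleaved
chans = deinterleave(raw, 6)
print(chans[0])             # channel 0: [0, 6]
```

Because the channel count is a single parameter, changing the number of channels on a board cannot silently scramble the split.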
    hope this helps
    Regards
    Dev

  • Urgent problem! please help. high speed digitizer, channel switch time too long!

    Dear all NI high speed digitizer experts:
     I posted a question concerning the two-channel configuration using the NI 5154 digitizer (see "Need help to configure a two-channel acquisition using NI5154").
     As we need to make measurements with the NI 5154 very soon, purchasing a DAQ board as Efrain suggested is not an option for our coming experiment, so I am trying to configure the NI 5154 for a two-channel acquisition. I configure the NI 5154 to count pulses on two channels. Our experimental setup will send pulses to channel 0 for 400 ms and then stop; 100 ms later, pulses from another source will be sent to channel 1 for 2 s. I thought the 100 ms dead time in our setup would be long enough for the digitizer to switch from channel 0 to channel 1, but after some tests I found the digitizer takes more time to switch between channels.
    I made a test VI (NISCOPE-Timing.vi) just to count how many ms it takes the digitizer to switch between channels. In the attached VI, if you run only one channel, one loop takes about 20 ms on my PC. If you run two channels, one loop takes about 130 ms. If you just run one channel twice (i.e., stop a channel and then restart that channel), the loop time is about 40 ms.
    I don't understand why it takes so long to switch from channel 0 to channel 1. From my tests, niScope Commit.vi consumes a lot of time for the second channel. Is there any way to avoid this? We cannot extend the 100 ms dead time of our setup, so I must get rid of this problem.
    Attachments:
    NiSCOPE-Timing.vi ‏34 KB

    Hi Lixin,
    There are a couple of different options that you may try. The first, which it sounds like you may not prefer, is to use the TRIG line on the 5154 and somehow find a way to route both sets of pulses to that line. You can either somehow connect both lines to the one input or use some sort of external switch since the signals will not come in at the same time.
    Unfortunately, what you're seeing in terms of the time it takes the board to reconfigure itself for a different trigger channel and re-initiate is due to the settling time necessary for the board to fully reach its specifications. The majority of settling usually occurs quite quickly, but the board waits longer to get the best possible performance in terms of specs. If you are okay with reducing this settling time (and very slightly diminishing the specified performance), you can use an internal scope property to set the maximum settling time.
    I have attached a .rc file which must be placed in the LabVIEW directory for niScope to enable use of this property node. Please place the file in your ...\Program Files\National Instruments\<LabVIEW 2009>\instr.lib\niScope directory. Once the file is in that directory, restart LabVIEW, and you should be able to see a new category in the niScope Property Node tree titled "Internal". Under that category, you will have the Max Settling Time property, which gives the driver a maximum amount of time (in seconds) to wait for settling before beginning a new acquisition. Add this new property to your first property node at the beginning of your program. I tested this out with a value of 50 ms and found that my initiate went from ~125 ms to ~53 ms or so after reconfiguring the trigger channel and re-initiating.
    Hope this helps!
    Daniel S.
    National Instruments
    Attachments:
    niScopeMaxSettling.zip ‏1 KB

  • High speed acquisition

    I am going to develop a device driver under Windows 2000 for a LabVIEW system, to control a PCI multifunction board (AD, DA, DIO, counter/timer). We have already developed a driver with the Windows DDK, and it works well.
    It seems there are many roads to this goal, but which should I choose? So far I have the following thoughts:
    1. Use VISA.
    2. Use C++ to call the APIs in the Windows driver, and then use VIs to call the APIs generated in C++. Maybe I should call the driver APIs directly, but then how can I use a pointer and receive the data posted by the driver? That way the acquired data gets copied many times, which is obviously not good for performance.
    The driver should work under high-speed conditions (e.g., 16 bits × 1 MS/s), especially in data delivery, and we also have to think about development time (time to market).
    Can anyone give me advice on these options, or suggest a better way?
    Thanks a lot.

    The waveform carries a DT (delta Time) component.
    Use the GET WAVEFORM COMPONENTS function to extract the DT. That should be the actual dT used, not necessarily the rate you asked for.
    If you have a DT number, you create the array with a FOR loop:
    For I = 0 to NSamples-1
    Time[i] = i * DT
    As far as the inter-channel time, the minimum value is a property of the card, but it's also controlled by software: LV can extend it.
    I don't know about the express VIs: I never use them. But the original AI CLOCK CONFIG has an input where you can specify it.
    If that doesn't work, and you have access to some test equipment, you can find it out for yourself:
    Set up a ramp generator for 0.0 - 1.0 V at 1000 Hz.
    This means the voltage is changing 1 volt per millisecond, i.e., 1 mV per microsecond.
    Wire the ramp generator to channels 0, and 10.
    Perform a DAQ operation, with all channels 0-10 active.
    Pick out a portion of the waveform, where the voltage is increasing on all channels (exclude the drop at cycle's end). Ignore channels 1-9.
    Subtract the channel 10 array from the channel 0 array.
    Average the result array and divide the average by 10.
    The average value, in mV, is the channel delay time, in uSec.
    By comparing channels 0 and 10, you're measuring 10 intervals, not one, so you're more accurate.
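Steve's ramp trick can be sanity-checked numerically; a sketch with a simulated 1 V/ms ramp and a hypothetical 5 µs per-channel delay (both values are made up for illustration):

```python
# Simulate the ramp-generator measurement described above: a 0-1 V ramp at
# 1000 Hz changes 1 mV/us, so the mV offset between channel 0 and channel 10
# equals the total skew in us; dividing by 10 gives the per-channel delay.

CHANNEL_DELAY_US = 5.0    # hypothetical true inter-channel delay
SLOPE_MV_PER_US = 1.0     # 1 V/ms ramp = 1 mV/us

# Channel 10 is sampled 10 delays after channel 0, so it sits further up the ramp.
samples_ch0  = [t * 100.0 for t in range(8)]   # mV, rising portion of the ramp
samples_ch10 = [v + 10 * CHANNEL_DELAY_US * SLOPE_MV_PER_US for v in samples_ch0]

diffs = [b - a for a, b in zip(samples_ch0, samples_ch10)]
delay_us = (sum(diffs) / len(diffs)) / 10
print(delay_us)   # 5.0 -> recovered per-channel delay in microseconds
```

Averaging across the whole rising portion, and spanning 10 intervals instead of one, is exactly what makes the real measurement robust to noise.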
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

  • How to use the PCI-6534 high-speed DIO to count the number of pulses acquired

    Hi all,
    I have a PCI-6534 high-speed DIO card. My requirement is to count the number of incoming pulses. I have an energy meter that generates pulses at a frequency of around 8 MHz, and I need to count the number of pulses coming in; I am attaching the VI I am using. I could not really count all the incoming pulses. Right now I am using a single line, but the requirement is to develop for 7 lines. I do not know where I am going wrong. Can any of you help me in this regard?
    Thanks
    Anil Punnam
    Attachments:
    Read Dig Chan-Change Detection_stop.vi ‏120 KB

    Sorry, not near a LV PC so can't look at your vi now.  Are you limited to using only the 6534?  If all you need to know is the count of pulses from each of the 7 ~8MHz sources, it seems like the amount of data storage required with a 6534 is terribly inefficient.  Since the 7 sources are unlikely to be synchronized in any way and they are each at ~8MHz, you're looking at about 100+ million transitions per second with change detection.  I don't think the hw can keep up with that.  Even using a constant sampling rate of 20 MHz (which just barely satisfies the Nyquist minimum of 2x 8MHz), it's questionable whether you can keep up with that rate for several minutes.  Even supposing the hw and your PCI bus and software can keep up, there's still a TON of processing to do.  20 MB/sec for 20 minutes = 24 GB! 
    On the other hand, consider the 6602 counter timer board.  Here you would simply set up 7 edge counting tasks, probably without any buffering at all.  At any leisurely pace you want, you can software query the counts of the # of pulses on each of the 7 channels and have an instant answer.  The only issue to deal with is that the counts will rollover when you reach 2**32.  At 8 MHz, this will happen about every 9 minutes.  However, DAQmx provides a nice way to handle this.  There's a property you can query that will tell you if a rollover has occurred.  It automatically resets itself to False after you read it so it's ready to detect the next rollover 9 minutes later.  See my first post in this thread for example.  (Last I knew, only DAQmx does the automatic reset, not traditional NI-DAQ).
    If you can possibly buy a 6602, I'd highly recommend it.
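Kevin's rollover math checks out (2^32 counts at 8 MHz is about 537 s, roughly 9 minutes), and extending a 32-bit count to a software 64-bit total is simple; a sketch of the wrap-detection idea:

```python
# Extend a 32-bit rolling counter to an unbounded total by detecting wraps,
# mirroring the DAQmx rollover-property approach described above.

ROLLOVER = 2**32

class ExtendedCounter:
    def __init__(self):
        self.wraps = 0
        self.last = 0

    def update(self, raw_count):
        """Feed the latest 32-bit hardware count; return the extended total."""
        if raw_count < self.last:      # counter wrapped since the last read
            self.wraps += 1
        self.last = raw_count
        return self.wraps * ROLLOVER + raw_count

c = ExtendedCounter()
print(c.update(4_000_000_000))   # 4000000000
print(c.update(100))             # wrapped: 4294967296 + 100 = 4294967396

# Sanity check on the ~9-minute figure at 8 MHz:
print(round(ROLLOVER / 8e6))     # 537 seconds
```

This only works if the counter is polled at least once per rollover period (here, more often than every ~9 minutes), since two wraps between reads look like one.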
    -Kevin P.

  • High speed

    Have been a customer for 20 yrs. Please give me high speed. Every neighbor has it but 3 of us. I ask you, what is the problem? I would love to bundle DirecTV, Verizon & Wireless. It is always impossible. My children need it for high school; I need it for my business (will go elsewhere to complete work if possible). This is a total injustice for tax-paying customers. You really need to keep upgrading your systems, or we will go elsewhere. Last chance.
    Dawn Smith

    > I have a LabView program that takes data from a laser doppler
    > velocimeter counter. The program waits for a rising edge, then sets an
    > output low, then reads sixteen bits of data, then sets the same output pin
    > high, and then waits for more data. According to the oscilloscope, this
    > is taking about twenty milliseconds. Does LabView automatically go DMA,
    > or will it try to temporarily store it on the hard drive? The data isn't
    > processed until all of it has been taken, so all the pertinent part of the
    > program is doing is reading sixteen bits of data and hanging onto them
    > until all the data has been taken.
    > Frankly, 20 ms read time (50 Hz!) is just WAAAAAAAY too slow for
    > the flows being analyzed. I am still new to LabVIEW, and I don't know if
    > my program is storing things in RAM or writing to the HD. Whatever it's
    > doing, it's too slow. The computer is an older P120, but it seems like it
    > should be faster than this. Any input?
    >
    LabVIEW does what the diagram tells it to. If you are calling a single-point analog read function in a loop, it is software timed, and the overhead for this type of acquisition is pretty high. If you configure the card to trigger when it sees the edge and tell it how many post-trigger points to return, then the DAQ will be hardware timed and is limited by the clock on the board and the configurability of its various counters and ADCs. Look at some examples that do triggering and HW timing.
    LV doesn't write anything to disk unless you tell it to with a write icon. The values you collect can be collected in an array and written to disk when there is computer time to do so. If you don't have much memory in your computer, the OS may be using virtual memory without LV even knowing about it.
    Greg McKaskle

  • How can a live recording session in Logic be set up between two MacBook Pros at different locations over high-speed Internet connections?

    My partner and I live on opposite sides of the valley here in Phoenix, about 50 miles apart one way. We are musicians working on a commercial for television, and we would like to know if there is a way, between FaceTime and Logic, to configure our MacBook Pros to record sessions together without having to be at the same house. When we FaceTime, he can hear my keyboard through his headphones, but we can't seem to get the signal routed into Logic and registering on a track when it's record-enabled. Is there a way for us to save gas and record at separate locations, so he can engineer me and I can engineer him, while we build this project in Logic and Final Cut? Here is what we are using; we have nearly identical setups, which was planned so we might be able to do something like this. Both of us are using mid-2012 15" MacBook Pros with 2.6 GHz Intel i7 processors and 16 GB RAM. The only difference between the two MacBooks is that his has an optical drive, but I removed mine and replaced it with a second SSD, so I have two Samsung 840 Pros, each with 512 GB, in my MacBook Pro. His has one Samsung 840 Pro with 512 GB. We both use Focusrite Saffire Liquid 56 FireWire audio interfaces, and we both use Logic Studio 9 Academic. I don't know if you need to know what we are using in order to help us figure out how to configure the MacBooks for a live session, so I figured I would give you those details just in case. If you need any more information, just let me know. Oh, and we both have high-speed cable Internet connections; mine is 150 Mbps, which is the absolute fastest I can get in my area, and his is a 50 Mbps line.
    Again, what we are looking to do is have a LIVE recording session, in Logic, with him at his house and me at my house. When we FaceTime each other, we can pretty much hear each other's keyboards as if they were part of the conversation, and as crystal clear as we can hear our own when recording, but we can't seem to record-enable an audio track in Logic, route that keyboard signal into Logic, and record it so we can engineer each other without one of us having to drive to the other's house to do it. If any of this even makes sense; I'm not quite sure how to better explain it. So hopefully you get what I mean; otherwise just say so, and I will try to do a better job explaining it.  :-)
    Anyone who can help us with this little problem would be greatly appreciated, and quite a hero in these two humble musicians' worlds!! So from both of us, THANK YOU in advance for any input you might be able to give us. I look forward to hearing any and all suggestions.
    Scott

    Hey Scott
    You and a LOT of people are attempting to do this, myself included.
    Unfortunately, at this point (to my knowledge) any setup involved is pretty CRAP and will likely leave you frustrated.
    HOWEVER:
    A few things CAN be done:
    IF you are getting live audio from your partner over the Internet, you can redirect that audio directly into Logic using a program called Soundflower.
    There are a few alternatives to Soundflower, like JackTrip, and.....(the name eludes me)
    (Soundflower is from a company called Cycling '74, who also make Max/MSP.)
    Once in Logic, you set the track's audio input to Soundflower (either 16 inputs, or in this case 2).
    That all being said, regardless of speed, you are still going to experience latency in the connection, and THAT is the part that will likely drive you to the madhouse.
    THIS all being said, I believe there are websites out there set up to actually do this kind of thing (at least there WERE), but likely with membership costs and other "hidden" problems that you only learn about after you have signed on the dotted line. (Hopefully I am wrong?)
    If you don't need to do a "live jam", there are always solutions like DropBox that work very well for exchanging large files quickly.
    If you want to record live MIDI input from his keyboard, I am unaware of any solutions for this.
    Due to the popularity of the iPad, you might want to look into the App store, and see if there is something available.
    Good luck!
    Treatment

  • How do I use the High Speed Data Logger with multiple I/O devices?

    I am using the High Speed Data Logger vi to read from a 16 channel A/D card (NI PCI-MIO-16E). The project may require more than 16 channels. How can I use High Speed Data Logger to read from two A/D cards? Will it be able to write the data to one file?

    The High Speed Data Logger VI will not acquire from and write to multiple DAQ boards at the same time without modification. LabVIEW is more than capable of doing what you are trying to do, but you will have to modify the code.
    Regards,
    Anuj D.
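
    As a rough illustration of the kind of modification Anuj describes, the core change is merging each board's scan into one record before writing, so both boards' channels land in a single file. Here is a minimal Python sketch; the function name and the fake per-board reader callables are illustrative only (in a real setup each reader would be an acquisition call against the board's driver, e.g. NI's API, not a lambda):

    ```python
    import io
    import struct

    def log_two_devices(read_scan_a, read_scan_b, out, n_scans):
        """Interleave one scan from each of two boards into a single binary file.

        read_scan_a / read_scan_b: callables returning one scan as a tuple of
        float samples (stand-ins for per-board DAQ reads).
        out: a binary file-like object.
        """
        for _ in range(n_scans):
            # Concatenate board A's channels with board B's channels...
            scan = tuple(read_scan_a()) + tuple(read_scan_b())
            # ...and write the combined scan as little-endian doubles.
            out.write(struct.pack("<%dd" % len(scan), *scan))

    # Example: two fake 16-channel boards, 10 scans, written to an in-memory file
    buf = io.BytesIO()
    log_two_devices(lambda: (0.0,) * 16, lambda: (1.0,) * 16, buf, 10)
    ```

    In the real VI the analogous change is running one acquisition loop per board and wiring both data arrays into a single file-write, rather than duplicating the whole logger per board.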

  • I just set up an Optus Cisco DPQ3925 wireless router to access the higher-speed Internet I signed up for. I have a 4th gen AirPort Extreme I want to put in another room and use to extend the wifi, but I get an error message each time I try. Help?

    Hello all.
    I have just set up a new Cisco DPQ3925 wireless router that Optus sent me to be able to access the higher-speed Internet I have signed up for.
    I have a 4th gen Apple AirPort Extreme that I want to use to extend the wifi, but when I try to update the settings via the AirPort Utility I get a message that says it cannot do so, and to check that it is in range and the wifi is set up correctly. I'm not experienced with these things, but I can't think what I have done wrong.
    Is anybody able to help me, please?

    You cannot use the AirPort Extreme to extend wireless from a non-Apple router such as your Cisco modem/router; they are not compatible.
    You need to tie the two devices together, either with Ethernet or something like EoP (Ethernet over Power) adapters. They are about $120, and you can price match at Officeworks.
