AI-16XE-50 replaced by DAQCard-6036E

Hello,
I have an application that was programmed long ago by an outside vendor.  It used AI-16XE-50, which has now been rendered obsolete.  The replacement provided by NI was DAQCard-6036E.  We purchased this card and installed it.  It fired up without even a hiccup and seemed to run just fine.  After looking more closely at it, it seems that the sample rate is not being set correctly.  Are the sample rates on these cards set the same way?  If not, what are the differences?  BTW, I'm using LabVIEW 7.0.
Thanks.

Hi rickford66 -
I'm sorry to say that the Traditional DAQ driver is no longer fully supported, as it was replaced by the DAQmx driver years ago.  I'll try to help out as much as possible, but my experience with the older driver is pretty limited.  Here's a shot in the dark, in case it helps:
It sounds like your application is starting the card at some specified sampling rate and then running a loop to read from the buffer, based on the system timer.  When the timer says that time is up, it stops the loop.  What should actually be done is to set the card up for a finite acquisition of the specified duration, then to read from the buffer periodically inside the loop (while monitoring the available samples per channel).  When the available samples drop to zero, it means the clock on the HW has stopped and you have all the samples. 
You might be running into performance issues in getting data across the PCMCIA bus (via interrupts) and just not reading the last batch of data, since your feedback on when to stop the loop is completely independent of the DAQ card's operation.  If you don't want to change anything else, you might just break the loop on the timer and call AI Read once more with the number of samples to read set to "all available" (or the equivalent).  This should flush the end of the buffer.
David Staab, CLA
Staff Systems Engineer
National Instruments
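
For anyone who later ports this to the NI-DAQmx C API instead of Traditional DAQ, a minimal sketch of the finite-acquisition pattern described above might look like the following (the device name Dev1, channel ai0, the 1 kHz rate and the 10 s duration are assumptions for illustration, and error handling is omitted):

/* Sketch: finite acquisition that is read out in chunks, so the loop ends
 * when the hardware has delivered every sample -- not when a software timer
 * expires. Dev1/ai0, 1 kHz and 10 s are illustrative assumptions only. */
#include <stdio.h>
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    const double rate = 1000.0;                 /* samples per second */
    const double duration = 10.0;               /* seconds            */
    const uInt64 totalSamps = (uInt64)(rate * duration);
    float64 data[1000];
    int32  read = 0;
    uInt64 collected = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Diff,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Finite acquisition: the hardware clock stops itself after totalSamps. */
    DAQmxCfgSampClkTiming(task, "", rate, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, totalSamps);
    DAQmxStartTask(task);

    /* Pull the data across the bus in 1000-sample chunks until it is all in. */
    while (collected < totalSamps) {
        DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                           data, 1000, &read, NULL);
        collected += (uInt64)read;
    }
    printf("Collected %lu samples\n", (unsigned long)collected);

    DAQmxClearTask(task);
    return 0;
}

The loop condition, not a timer, decides when the acquisition is finished, which is the point of the advice above.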

Similar Messages

  • Wrong results using quadrature encoders with NI DAQCard-6036E

         Hello,
    I'm experiencing some trouble using two quadrature encoders with an NI multifunction I/O device.
    The encoders are Micro-Epsilon WDS-7500-P115-CR-TTL. They are incremental encoders with TTL logic. They are connected to an HP laptop running Windows XP Professional. The connection is via the multifunction I/O NI DAQCard-6036E. Each encoder is connected to the DAQ board with four wires: +5V, DGND, track A, track B. I used the system in my office for a while and everything was fine. Then I moved it to another place and now it shows fuzzy behaviour.
    I made the following tests:
    Test 1) I connect track A&B to analog inputs on the DAQ card. Then I use SignalExpress v2.5 to perform a DAQmx analog input acquisition. The waveforms I get are exactly as expected.
    Test 2) I connect track A to the counter source and I leave track B disconnected. I use SignalExpress v2.5 to set up a DAQmx edge counter with the "Count up" option enabled. This test is also fine. When I pull the encoder cable I get +N counts, and when I release the cable it goes back to the zero position, giving another +N counts.
    Test 3) I connect track A to the counter source and track B to P0.6 (or P0.7 for the second encoder), which is the pin used to control the count direction. I use SignalExpress v2.5 to set up a DAQmx edge counter with "Count up". In this way the DAQ should ignore track B and always count up. It actually does, but the count rate in one direction is double the count rate in the other direction. This means that when I pull the encoder cable I get +N counts, and when I put it back to the initial position I get another +2N counts, so the counter indicates +3N at the end, while it should be +2N.
    Test 4) I connect track A to the counter source and track B to P0.6 (or P0.7 for the second encoder). I use SignalExpress v2.5 to set up an "Externally controlled" DAQmx edge counter. Now I get +N counts when I pull the encoder and -2N counts when I put it back to the zero position, so the counter indicates -N at the end, while it should be zero.
    Test 5) I repeat test 4 using LabWindows/CVI v8.1 and I get the same result.
    Test 6) I swap lines A & B. Now track B is connected to the counter source and track A goes to P0.6 (or P0.7 for the second encoder). Using SignalExpress to perform an "Externally controlled" count, I get +2N counts when I pull the encoder and -N counts when I put it back to zero. So at the end the counter indicates +N, but it should be zero.
    Do you have any idea on how to solve the problem? Thank you very much in advance.

    A few things:
    1. I'm not from NI and won't try to speak for them.  But I don't believe these forums are meant as a primary means of support, probably not an *official* means of support at all.  Most of the folks here (like me) are NI's more-or-less satisfied customers, not employees.   If you buy a service contract, you can get instant phone support.  If you rely on free support from the forums, I think you'll get good help most of the time, but there's just no guarantee. 
    2. "I just got a [email] reply from MicroEpsilon.  The encoders work fine."   Um.  Based on what, exactly?  Of *course* they will expect their own stuff to be just fine, and in fact I very much suspect they're right.  But NI will expect their board to be just fine, and I expect they're right too.  Or at least it was fine *before* you hooked things up.  Leading us to #3.
    3.  Part of the app note on Quad Encoders on E-series boards warns against connecting differential encoder outputs directly to your board.  I think it mentions that a 24V differential (for example) can damage the board.  But even a low-voltage differential signal isn't electrically *compatible* with your counter inputs.  Your first posting claimed that the encoders produced TTL.  Your June 30 post referred to inverted A and B signals for rejecting common mode noise over long transmission lines.  These are classic code words that scream "differential output", *not* TTL.  So now we can start addressing some specific tech issues.
    4. Your E-series board is not inherently capable of handling true quadrature, as the app note says.  (The newer M-series multifunction boards *do* have the capability.)  You can get kinda sorta close, but you'll be at risk of count errors due to direction changes or during vibrations when otherwise stationary.
    5. You will also need some type of differential to TTL conversion on your (A, /A) and (B, /B) pairs.
    6. You will need a common "ground" reference for all your digital signals (probably not a true earth ground).  So the ground for your conversion circuit and its TTL outputs must be tied to your DAQ board digital ground.  Also the return terminal from any related external power supply.  Sounds like failure to do this had been an issue with a past implementation of yours so perhaps it's an additional factor at play this time too?
    7.  What are you trying to measure?  For what purpose?  What decision is made from the data?  How much do you care about its accuracy?  These are leading questions, but I'm suggesting that meeting schedule with an unreliable app that produces untrustworthy data just might not be the best goal to strive for right now.  If you care to maintain accurate position count despite direction changes or vibrations, you *need* something more than your E-series board.  If you want reliable edge counting operation with *any* DAQ board, you *need* electrically compatible signals.
    -Kevin P.
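
    To make the "externally controlled" counting configuration from tests 3-6 concrete, a rough NI-DAQmx C sketch is below (Dev1/ctr0 is an assumed device/counter name; track A drives the counter source and track B drives the up/down pin, P0.6 for ctr0, as discussed above; error handling omitted). Per points 3-5 above, this only behaves correctly once those pins see genuine TTL levels referenced to the board's digital ground, and even then it is edge counting with direction control, not true quadrature decoding:

    /* Sketch: count rising edges of track A, with the count direction taken
     * from the counter's up/down pin (track B). NI-DAQmx C API; "Dev1/ctr0"
     * is an assumed name and error handling is omitted. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        uInt32 count = 0;

        DAQmxCreateTask("", &task);
        /* Direction is externally controlled by the counter's up/down line. */
        DAQmxCreateCICountEdgesChan(task, "Dev1/ctr0", "", DAQmx_Val_Rising,
                                    0, DAQmx_Val_ExtControlled);
        DAQmxStartTask(task);

        getchar();                     /* move the encoder, then press Enter */
        DAQmxReadCounterScalarU32(task, 10.0, &count, NULL);
        printf("Count: %lu\n", (unsigned long)count);

        DAQmxClearTask(task);
        return 0;
    }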

  • Bad readings from DAQCard-6036E after DAQmxSelfCal() call

    Hi,
    I have a very urgent problem that I would like investigated.
    I have a DAQCard-6036E and have noticed that, occasionally, after a call to DAQmxSelfCal(), the AI readings are incorrect.
    All channels in my application are configured in differential mode, and the readings returned are occasionally wrong by ~9 mV on channels with a gain of 1, and ~0.9 mV on channels with a gain of 10.
    If I repeatedly call DAQmxSelfCal(), I seem to get the bad readings after about every 3rd call to DAQmxSelfCal() - then the next call will usually correct the calibration coefficients, and so on. I have not been able to establish an exact pattern to the problem, but I have tried with the laptop on mains power and on internal batteries, and the problem is the same.
    Could there be a problem with the self calibration function in DAQmx, or could there be an external influence? When I call DAQmxSelfCal() my signals are connected to the card, but I assume the inputs get isolated from connected signals during the calibration routine.
    Thanks

    Hi Ed,
    I got a DAQCard-6036E on loan from NI and tried that, and had the same problems. I suspect it is something to do with my laptop, maybe the battery-monitoring circuitry or something. Assuming that there is nothing wrong with the DAQCard-6036E's internal design, my theory is that if the laptop generates noise while the self-cal is in progress, this can affect the calibration coefficients that are generated. This might be a rubbish theory, but believe me, the effect is genuine.
    I would say proceed with caution: put the self-cal in, but test it thoroughly by calling it repeatedly with a high-precision voltage reference attached and checking the returned values. Note that I think this is only an issue with DAQCards; I have not seen it with PCI cards, but then I have not tested those as much.
    By the way, doing a self cal once per day is more than enough. Once per hour is too much. There is an article somewhere on NI.com that talks about the recommended interval. Make sure the laptop has been on for at least 15 mins first.
    Let me know how you get on.
    Regards
    Jamie
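
    As an illustration of the verification Jamie describes (self-calibrate repeatedly with a precision reference attached and check the readings), a rough NI-DAQmx C sketch is below. The device name Dev1, channel ai0 and the 5.000 V reference value are assumptions, and error handling is omitted:

    /* Sketch: run self-calibration in a loop and log the error of a reading
     * taken from a known precision reference wired to Dev1/ai0. Names and the
     * 5.000 V value are assumptions for illustration. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        const float64 refVolts = 5.000;      /* value of the external reference */
        int i;

        for (i = 0; i < 20; i++) {
            TaskHandle task = 0;
            float64 reading = 0.0;

            DAQmxSelfCal("Dev1");            /* run self-calibration */

            DAQmxCreateTask("", &task);
            DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Diff,
                                     -10.0, 10.0, DAQmx_Val_Volts, NULL);
            DAQmxReadAnalogScalarF64(task, 10.0, &reading, NULL);
            DAQmxClearTask(task);

            printf("Self-cal %2d: read %.6f V, error %+.6f V\n",
                   i + 1, reading, reading - refVolts);
        }
        return 0;
    }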

  • NI-DAQ GPCTR_Set_Application fails on ND_SIMPLE_TIME_MSR on DAQCard 6036E

    I try to use the 24 bit counter/timer on my DAQCard 6036E to do simple time measurements. I used the example file TIOGPStimeNsoftwareEvents.c and adapted it to do simple time measurements without synchronization.
    At the line
    GPCTR_Set_Application (1, ND_COUNTER_0, ND_SIMPLE_TIME_MSR);
    the application yields a -10120 (gpctrBadApplicationError) error.
    I also get errors on the subsequent GPCTR_Change_Parameter function calls.
    I really need help on this one!
    I use NI-DAQ 6.9.3 with patch.

    I've found the answer to my questions at:
    "Counting the number of quadrature encoded digital pulses using a general purpose counter."
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=50650000000800000052600000&UCATEGORY_0=_32_%24_12_&UCATEGORY_S=0&USEARCHCONTEXT_TIER_0=0&USEARCHCONTEXT_TIER_S=0&USEARCHCONTEXT_QUESTION_0=-10120&USEARCHCONTEXT_QUESTION_S=0
    "C Program Measuring Time that Requires a DAQ-STC."
    http://sine.ni.com/apps/we/niepd_web_display.display_epd4?p_guid=B45EACE3E8F556A4E034080020E74861&
    pvdm
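
    For context, error -10120 (gpctrBadApplicationError) generally means the requested counter application is not supported by that counter hardware. The DAQ-STC counters on the 6036E do support single period measurement, which is one supported way to time the interval between edges. A rough sketch following the pattern of NI's STC counter examples is below; device number 1, the internal 100 kHz timebase and the counter's default gate pin are assumptions, and the constant names should be checked against the NI-DAQ Function Reference before relying on this:

    /* Rough sketch (Traditional NI-DAQ 6.9.x C API): single period measurement
     * on counter 0, counting the internal 100 kHz timebase between edges of the
     * signal applied to the counter's default gate pin. Device 1 and the
     * timebase/gate choices are assumptions. */
    #include <stdio.h>
    #include "nidaq.h"
    #include "nidaqcns.h"

    int main(void)
    {
        const i16 dev = 1;
        u32 armed = ND_YES;
        u32 count = 0;

        GPCTR_Control(dev, ND_COUNTER_0, ND_RESET);
        GPCTR_Set_Application(dev, ND_COUNTER_0, ND_SINGLE_PERIOD_MSR);
        /* Count 100 kHz timebase ticks between active edges on the gate. */
        GPCTR_Change_Parameter(dev, ND_COUNTER_0, ND_SOURCE, ND_INTERNAL_100_KHZ);
        GPCTR_Control(dev, ND_COUNTER_0, ND_PROGRAM);

        /* Poll until the counter disarms, then read the result. */
        while (armed == ND_YES)
            GPCTR_Watch(dev, ND_COUNTER_0, ND_ARMED, &armed);
        GPCTR_Watch(dev, ND_COUNTER_0, ND_COUNT, &count);

        printf("Period: %lu ticks of 10 us = %f s\n",
               (unsigned long)count, count * 10.0e-6);
        return 0;
    }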

  • Daqcard 6036e affected by laptop power supply?

    Hello NI types;
    I have a problem that seems pretty bizarre to me, so I thought I would share it with you to see if anyone has any suggestions.
    I have a brand new Daqcard-6036E (pcmcia card), with shielded cable and the CB-68LPR terminal block, and I'm trying to use it with a Sony Vaio laptop (PCG-955A) with Windows XP professional. Labview 7, full installation with all the recommended drivers from the Labview 7 cdroms (didn't install anything from the 6036E cdrom, since I figured the stuff on the LV7 disc was probably more current anyway).
    The card is recognized by MAX, Labview, etc. I can bring up the test panels no problem. I measure in differential mode, -10v to 10v range, I'm using the proper terminal pairs for the differential mode (aich0 + aich8 for diff1, etc). Problem is, regardless of what I put in for an input (I've used a little dc voltage supply, and bridging the terminals with a wire to enforce 0V) I get garbage not even resembling the proper input. It does random fluctuations all the way up to the input limits (-10/10 V). I measure with a meter and everything looks great at the terminals. I don't have any other terminal block connections (no connections from AIGND's, AO, timers, etc).
    Here's the weirdness: the setup works just perfect... as long as I'm running the laptop off the battery and not using the power supply plugged into the wall. This was really, really annoying to try and find (pulling out the power supply to the laptop was the last thing I tried after fussing with it for a couple of hours). Battery only: reads correct voltages, nice and stable. Laptop power supply: reads completely incorrect fluctuating voltages.
    I checked that I was trying to read differential mode (in MAX, and my labview stuff I've written), and I have no idea what to try next. I've used NI stuff for years (LV2,4,5, Nubus cards on macs, pci cards on pcs and macs) but I've never had an opportunity to use a pcmcia card before. I've never seen this behaviour on any other setup I've worked on.
    I have a few suspicions: laptop power brick (which plugs in on the same side of the laptop as the pcmcia cage) isn't really nice 19.6 V dc @ 3A like the back of the brick says (have not verified this), maybe bizarre ground loop problems (I get this problem even with no external sources hooked to the CB-68LPR, just the differential inputs connected together on the channel). Maybe the power supply brick makes rf interference that annoys the card (I tried moving the brick farther away with no difference). Maybe Sony just doesn't like me.
    As a dumb test, I ran the laptop's power brick plugged into a 120 V inverter running off a 12 V car battery (i.e. 12 Vdc -> 120 Vac -> 19.6 Vdc -> laptop) and that made no difference (it still didn't work). My other thought is to try and run the laptop directly off a regulated DC supply fed by a 24 V battery setup, but that'll take me a little while to assemble to test.
    Any thoughts on this? Sadly I do not have a second laptop to try this all in to see if that rectifies things. Has anyone else encountered this?
    I hope I haven't left out a salient detail. If I forgot to mention something relevant please ask.
    Best regards,
    Steven Cogswell

    Hello Ben and others;
    Indeed, you hit the nail on the head, except it was the other way around. I hadn't taken notice of it before, but this laptop (which isn't mine) doesn't even have a 3-prong plug on the adapter in the first place, only 2-prong (the cord/brick have no facility for a 3rd either, and it's the original sony adapter). I ran a connection from the AIGND (AIGND/AOGND/DGND on this card are all the same point according to the manual) to the 120v ground (3rd prong) and sure enough, everything cleans up and voltages are correct again.
    And a small note for those taking notes, although I said the laptop is a "PCG-955A" (that's what's on the bottom), it's actually a Vaio FX120 (written in tiny print on the screen bezel).
    Thanks for all your help,
    Steve

  • 6036E DAQCard AI voltage rails until device reset.

    Hi,
    I'm running a DAQ program (attached) which has some DIO, AI and AO in a loop, and the readings on 2 of the analog channels (#1 & 5), which are connected to load cells, rail at one end of the configured range.
    An attached DMM reads the correct voltage of 2 V. The strange thing is that MAX reads the right voltage only once the device has been reset after running the VI.
    AI channels #0 & 4 connect to a force sensor that requires excitation turned on by digital writes. AI channels #1 & 5 are connected to the load cells in question.
    I have tried 2 laptops and 2 DAQCard-6036Es on XP with drivers 7.2 & 7.3, with the same results. The VI always rails. MAX rails only after the VI is run and otherwise works fine.
    I have decreased the sample rate to 1 sample per second, hardware-timed, with no effect. This program is similar to the DAQmx example of multi-channel PID control, with some additional digital outputs & analog inputs added.
    I looked at the KBs on AI rails but they refer to the range and gain. I have tried a range of ±10 V and still see railing.
    http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/6bcac575d4f3c98386256e8e0025d0c5?OpenDocument
    http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/d797b39b1088df2886256dd30058d9bf?OpenDocument
    I would appreciate any help. As in the NI example, you can take the PID VIs out or ignore them and fake a PID output if you don't have the PID toolset installed.
    Attachments:
    tuned_pid.vi ‏336 KB

    I think we can limit this to just your analog inputs.
    First, do you know what mode your DAQ card is in for your analog inputs (differential, referenced single ended, nonreferenced single ended)?
    Second, exactly how do you have your analog inputs wired (to what pins of your DAQ board)?
    Here's why I ask. If you're using your DAQ board in differential mode, then for your first analog input, your positive goes to AI0 and negative goes to AI8. For the second analog input, it's AI1 (+) and AI9 (-), and so on.
    If you have your (+) inputs going to AI0, AI1, etc but your (-)s going to AIGND, then you're set up for single ended mode, not differential, and your signals will probably rail.
    It just might be that NI-DAQ is setting up your card for one type of input, while your program is resetting it for a different type of input.
    Mark
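
    If it does turn out that the program is overriding the terminal configuration, a minimal NI-DAQmx C sketch that pins a channel to differential mode looks roughly like this (Dev1/ai0 is an assumed name; in differential mode its negative input is the AI8 pin, as described above; error handling omitted):

    /* Sketch: read one channel explicitly in differential mode (NI-DAQmx C
     * API). "Dev1/ai0" is an assumed name; its (-) input is AI8 when the
     * terminal configuration is differential. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 volts = 0.0;

        DAQmxCreateTask("", &task);
        /* DAQmx_Val_Diff overrides whatever default MAX would otherwise use. */
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Diff,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxReadAnalogScalarF64(task, 10.0, &volts, NULL);
        printf("ai0 (differential): %f V\n", volts);

        DAQmxClearTask(task);
        return 0;
    }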

  • Does MAX 8.0 support the 6036E?

    I recently received LabVIEW 8.0 with the device driver CDs, but when I tried to install it on my mobile PC, which has the 6036E PCMCIA card, the card is recognized by Windows (Device Manager) BUT does not appear in MAX 8.0?
    Yannis

    The DAQCard-6036E is supported by NI-DAQmx 8.0, as per this page.  Measurement & Automation Explorer (MAX) version 4.0, which is installed with NI-DAQmx 8.0, is the most recent.  Does your device show up in Device Manager as an NI data acquisition device?  In MAX, does your device show up in the NI-DAQmx Devices folder, the Traditional (Legacy) NI-DAQ folder, or neither?  Did you ever go through the driver installation wizard for the device so that Windows would associate NI-DAQmx as the driver for that piece of hardware?  You could try going to "Add Hardware" and letting Windows look for hardware that doesn't have a driver associated with it.  Just a few suggestions....
    -Alan A.

  • XP does not find the driver for the DAQCard-6036E

    We have DIAdem 8.1, a DAQCard-6036E, and NI-DAQ driver version 6.9.3.
    This setup already ran on a laptop before.
    After a colleague installed a USB-6009 with its DAQmx 8.0.1 driver, DIAdem no longer ran.
    DIAdem and MAX were uninstalled (Win XP: Settings => Control Panel => Software => Change or Remove Programs), and in c: => windows => regedit.exe I searched for National Instruments and USB-6009 entries and deleted them.
    After reinstalling, DIAdem runs, but the driver for the DAQCard-6036E is not found.
    During installation the following error is displayed: "Error 1605 occurred during the installation of the NI-PAL 1.5.6f0 engine. Continue with the installation of the other products?"
    I continued the installation => it doesn't work!! I uninstalled everything, installed again and cancelled at this message => it doesn't work!!
    I uninstalled everything, installed MAX separately (driver CD => max => max.msi) and NI-DAQ separately (driver CD => nidaq => nidaq.msi) => that doesn't work either!!
    On your homepage I find no HELP under -Hardware Installation/Configuration Troubleshooter- and -Data Acquisition (DAQ) Installation/Configuration Troubleshooter-. There is something about drivers at -http://www.ni.com/support/daq/versions_portable.htm#pcmcia-.
    During these attempts I noticed that on booting, Win XP finds the card but not the driver.
    Where is the problem? Everything did run before.
    Guth Michael
    AL-KO Geräte GmbH
    Mail: [email protected]

    Hello Mr. Guth,
    The following KB describes error 1605 and how to remedy it!
    If you still have problems after that, I would suggest installing DAQmx 8.3, which supports both the 6036 and the 6008!
    Regards, Christian M

  • CWBufferedAO.zip uses 100% of CPU

    This sample program is modified as follows,
    CWAO1.UpdateClock.Frequency = 1024
    CWAO1.NUpdates = 1024
    CWAO1.ProgressInterval = 256
    When a PCI-MIO-16XE-10 is used and NUpdates is the same value as Frequency, this program uses 100% of the CPU and the "STOP" button does not work.
    Our program was developed for the DAQCard-6036E and was intended to also run on a PCI-MIO-16XE-10. Because of a programming requirement, the stopping process should complete within 1 second after the STOP button is clicked.
    Is there any reason why this sample program uses 100% of the CPU when a small value of NUpdates is applied?
    Attachments:
    CWBufferedAO.zip ‏5 KB

    My PC is a DELL Precision 620 (2 CPUs) with Win2000.
    The sampling rate is 1024 Hz, so 256 samples take 250 ms. It is a very slow operation.
    The STOP button does not work, and some user logic does not work in the Progress routine.
    Attached is a slightly modified program to show you the strange operation in Nidak32.sys. Are the Update count and Time edit boxes updated on your PC?
    Attachments:
    Buffered_Analog_Output.Vbp ‏1 KB
    BufferedAnalogOutput.Frm ‏26 KB

  • How to read frequency channel & voltage channels at the same time with different rates?

    I am using a DAQCard-6036E with an SCXI-1000 chassis.  I have SCXI-1100 & SCXI-1126 modules, along with an SCXI-1124 AO module for voltage output.  I am acquiring several voltage input channels with the SCXI-1100 & one frequency channel (frequency signal up to 12000 Hz) with the SCXI-1126.  I want to configure all the channels in Measurement & Automation Explorer so that I can use the Scale function to set up all the channels.  I am using a higher acquisition rate for the frequency channel than for the voltage channels.  My block diagram is attached.  When I run the VI, I get a resource problem.  The error message is: "The specified resource is reserved."  Any suggestions as to how to acquire a frequency signal & several voltage signals with this system setup using DAQmx?
    Attachments:
    Document.rtf ‏4727 KB

    Since a common ADC is shared by all your SCXI analog input channels, you will have to sample all channels in a scan list at the same sampling rate.
    However, you can set up frequency and voltage virtual channels using different instances of the "Create DAQmx Virtual Channel" function in the same task, as shown in the attached picture; set the scan rate and read them using a single DAQmx Read.
    Attachments:
    vi.PNG ‏6 KB
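
    A rough NI-DAQmx C sketch of that single-task idea is below. The SCXI channel names, the 1 kHz rate, and the frequency channel's threshold and hysteresis values are assumptions for illustration, and error handling is omitted:

    /* Sketch: voltage channels (SCXI-1100) and a frequency channel (SCXI-1126)
     * in ONE task, scanned at one rate and read with a single DAQmx Read.
     * Channel names, rate, threshold and hysteresis are assumptions. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 data[3 * 100];              /* 3 channels x 100 samples */
        int32 read = 0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "SC1Mod1/ai0:1", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        /* Frequency channel on the SCXI-1126, added to the same task. */
        DAQmxCreateAIFreqVoltageChan(task, "SC1Mod2/ai0", "", 0.0, 12000.0,
                                     DAQmx_Val_Hz, 1.0 /* threshold, V */,
                                     0.1 /* hysteresis, V */, NULL);
        /* One sample clock drives the whole scan list. */
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(task);

        DAQmxReadAnalogF64(task, 100, 10.0, DAQmx_Val_GroupByChannel,
                           data, 3 * 100, &read, NULL);
        printf("Read %ld samples per channel\n", (long)read);

        DAQmxClearTask(task);
        return 0;
    }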

  • Blue screen of death with LV7.0 Eval version

    I had been programming for a couple of weeks with no problems, trying to see whether or not LabVIEW could handle the data acquisition requirements that we need for our software. Specifically, I was a little concerned that LV may be too "high level" to run fast enough for us. We're acquiring 2 analog voltage signals and an edge counter for X, Y and Z position values and streaming them into an intensity graph to get a picture of our scanned object. I had been using the DAQ Assistant VI to do the acquisition, which understandably contains quite a bit of overhead, but the acquisition rate seemed to be okay (around 400 Hz while streaming points into a 200x200 array on a DAQCard-6036E). I wasn't quite done with the interpolation routine though, and was afraid that would slow it down some more.
    On Monday I got my first BSOD just a couple seconds after starting my program after using the software for hours. I brushed it off as a fluke and rebooted. Yesterday late in the afternoon I got another one after working on it and testing all day, but this time it never dumped the physical memory (it just hung). Much more concerning, I got a third one today upon just loading up Labview and trying to run my program for the first time. I took down the error message as:
    IRQL_NOT_LESS_THAN_OR_EQUAL
    STOP:0X000000A(0X6D665346,0X0000002,0X0000001,0X80519E8C)
    I rebooted, and upon trying to load windows, I got a message that SYSTEM was corrupt. Our IT guy came by and rebooted again, which seemed to work alright. But then when he rebooted again we got -another- one with no "IRQL" statement. I did manage to get it going again by rebooting for a 3rd time.
    What worries me is that this is getting progressively worse. For weeks I seemed to be able to tinker at will with the software and do all sorts of neat stuff and it didn't give me any errors. I haven't installed anything new this week at all, and suddenly it's crapping out faster and faster.
    I'm using (or was, until I uninstalled everything for safety) LV Eval version installed on NI-DAQ 7.0 with a Toshiba TE2100 laptop (P4, 2GHz, 256meg). Are there any known issues with version 7.0? I had tried installing the Eval version (I guess it's really called "Express") on NI-DAQ 7.2, but the DAQ Assistant never showed up in the menu. After a few calls to NI tech support, I tried version 7.3 and it still wasn't there (and if you're new at LV, you NEED the DAQ Assistant), so I tried 7.0 and it worked fine.
    I've attached my program. It's disorganized, yes, but it does the job. Any ideas?
    Attachments:
    jan19-5pm (interpolates up & down).vi ‏585 KB

    Hi Tyler,
    I'm using WinXP Pro, and I haven't tried the program on another computer yet. The tricky thing is that it's one of those intermittent problems. So I could install it on someone else's computer and have it run fine for weeks, then crash on the last day. The BSOD did not identify any particular problem file, only the IRQ error and that it was dumping physical memory. I know they sometimes give you the exact file that the problem occurred in, but not in my case.
    I did a fair amount of searching through this forum and in general regarding LabVIEW and conflicts with other software. I found one message from a couple of years ago on another forum regarding BSODs, and one reply asked "You're not using Sophos virus checker, are you?". Coincidentally, this is the only thing that had been installed on my computer since installing LabVIEW. However, it was installed about 2 weeks prior to the first BSOD. Something I forgot to mention was that my laptop was taking suspiciously long to boot up. I could type in my name and password and come back 5 minutes later and it would still be loading my desktop. It wasn't until I physically removed the DAQ card and then rebooted that I saw it return to normal. I then uninstalled all NI software and reinstalled everything, this time going with the 7.3.1 drivers. On Friday I also disabled the Sophos virus checker for one day. So far it's run error-free today (even with Sophos re-enabled), and my computer booted up at normal speed even with the DAQ card left in it over the weekend. However, for the first time, LabVIEW crashed on me today (performed an illegal operation... do you want to send an error report to Microsoft, etc.).
    I sure hope the problems I'm having aren't indicative of the normal operation of the software. It's been amazing to program with so far, and it's unbelievably powerful, but if BSODs and mysterious crashes are the norm, it's going to be worse than our current software (which in itself isn't very good).

  • Is there something wrong with an Oscar Gomez Fuentes 2-channel oscilloscope

    Is there something wrong with an Oscar Gomez Fuentes 2-channel oscilloscope, or are my devices configured wrong? I have a DAQCard-6036E and an SCC-2345 with SCC-FT01 and SCC-A10 modules. The problem is that when I am measuring with the A10 it affects the FT01's RMS and DC values. But when I am measuring with the FT01 alone, everything is OK. So what's wrong?? I attached a picture of the front panel.
    Attachments:
    front_panel.JPG ‏132 KB

    Neuvos,
    Do any of the signals you are measuring have high output impedances (> 1 kOhm)? If so, you may be seeing cross-talk (or ghosting). Below, I have included links to a number of documents that discuss this issue and methods for eliminating it:
    Data Acquisition: Troubleshooting Unexpected Voltages or Cross-talk in Analog Input Channels
    Using a Unity Gain Buffer (Voltage Follower) with a DAQ Device
    Is Your Data Inaccurate Because of Instrumentation Amplifier Settling Time?
    Good luck with your application.
    Spencer S.

  • Waveform data not displaying correctly

    Hello,
    I am having some trouble displaying some waveform data. 
    I have a DAQCard-6036E card, and I am bringing in 16 analog signals.  The task is set up through MAX, and right now is generic.  All I am reading is random noise in an attempt to see if my VI even works.  The acquisition mode is set to continuous, with a rate of 1k, and number of samples set to 100.
    The problem is that I get very intermittent data on the front panel indicators.  Only some of the indicators work, and it's choppy at best.  What am I doing wrong here?  Is it a problem with my acquisition mode?
    Thanks,
    Alex
    Attachments:
    DAQ (CompactDAQ).vi ‏39 KB
    DAQ.vi ‏137 KB
    DAQ (Air Data Computer).vi ‏35 KB

    Hi Alex,
    I just tried your VI with the while loops deleted out of the subVIs, and as far as I could see the indicators in the first two columns of the front panel were updated as expected. The other indicators are not wired, so they will not update.  It might look like some of the wired indicators are not updating each time if the value doesn't change - i.e. the potential on the line is the same as the last read.  You might try putting a known signal on each of your test lines instead of using noise to test your system, to verify that the voltage is changing each time.
    If you are still having trouble, post again.  I've attached a copy of the VI I used for test.
    Regards,
    Micaela N
    National Instruments
    Attachments:
    DAQ.zip ‏95 KB
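
    For completeness, the continuous configuration described above (1 kHz, 100 samples per read, 16 channels) looks roughly like the following in the NI-DAQmx C API; the DAQmx VIs in LabVIEW follow the same create/configure/start/read sequence. The channel range Dev1/ai0:15 and the loop count are assumptions, and error handling is omitted:

    /* Sketch: continuous acquisition at 1 kHz, reading 100 samples per channel
     * on each loop iteration. "Dev1/ai0:15" is an assumed name matching the 16
     * signals described above. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 data[16 * 100];
        int32 read = 0;
        int i;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0:15", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(task);

        for (i = 0; i < 50; i++) {          /* roughly 5 seconds of data */
            DAQmxReadAnalogF64(task, 100, 10.0, DAQmx_Val_GroupByChannel,
                               data, 16 * 100, &read, NULL);
            printf("iteration %d: %ld samples per channel\n", i, (long)read);
        }

        DAQmxClearTask(task);
        return 0;
    }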

  • Acquiring two input signals simultaneously

    Hi,
    I have an LVDT and a force sensor (both attached to a motor) plugged into my SC board and I am trying to read out & save the output data from both sensors simultaneously while the motor is in motion. The SC-2350 board is connected to the laptop via DAQCard 6036E . Two questions:
              1) in my block diagram, the DAQ Assistant blocks for both sensors are wired to their respective waveform graphs (one for the LVDT signal and one for the force signal), the whole being included in a while loop (which remains true until the motor stops moving). When running the program, only one signal is read out at a time and I have no control over which signal is transmitted (this seems to occur in a random fashion). Do I need to "link"/synchronize the two DAQ Assistants in some way so that both signals will appear simultaneously? If so, how would that be possible?
              2) ideally, I'd like to read out both input signals on the same graph (with 2 different y-scales but the same time scale) to observe the phase shift. I tried to "bundle" the two data outputs coming from the two DAQ Assistants but it resulted in an error saying that the two outputs were of different types. Is there a way to plot these two physical quantities on the same graph even though they're different and come from different sensors?
    Thanks for your help!
    Thibault.

    Hello Dani,
    I assume, based on your position sensors and the image you posted, that you are using counters to acquire your data. Is this correct? If so, it is important to note that you cannot add multiple counter operations to a single task because they have different timing requirements and all channels in a single task must share the same timing and triggering. In fact, if you attempt to add a second counter channel to a DAQ Assistant for a counter operation, you will receive the following error when you try to close the configuration window:
    So, you will have to use two different tasks to acquire your counter data. You can do this either by using two separate DAQ Assistants or by using two different tasks in the NI-DAQmx sub-VIs in LabVIEW. Since you are trying to compare the phase difference between the two measurements, you will also need to synchronize the operations so that you are reading from both counters simultaneously. There are several resources online which discuss synchronizing counters, and I have linked some that I thought might be useful below:
    KnowledgeBase 1JPES6LL: How Do I Synchronize a Buffered Quadrature Encoder Measurement to a Signal
    KnowledgeBase 3GGATSCC: Hardware Counter Start Trigger for Counter Synchronization
    It would probably be easier to use the NI-DAQmx Sub-VIs to implement this synchronization because they give you more control over specific parameters of your task. You can use the NI Example Finder to browse some examples which use the NI-DAQmx sub-VIs for counters. You can open the NI Example Finder by going to Help>>Find Examples... in LabVIEW. Once in the NI Example Finder, you can find the counter examples by selecting the Browse tab and navigating to Hardware Input and Output>>DAQmx>>Counter Measurements; then you can select the Position category (shown below) to find the Position measurement examples.
    Let us know if you have any other questions about taking position measurements.
    Message Edited by Matt A on 04-09-2008 03:55 PM
    Matt Anderson
    Hardware Services Marketing Manager
    National Instruments
    Attachments:
    Error -200147.jpg ‏32 KB
    Example Finder Position.jpg ‏124 KB
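
    As a rough illustration of the two-task approach, the sketch below (NI-DAQmx C API) clocks both counter tasks from an AI task's sample clock so their buffered counts line up in time; this is an alternative to the arm-start-trigger methods in the KnowledgeBase articles above, and all device, counter and terminal names are assumptions:

    /* Sketch: two separate counter tasks, both sampled by the same AI sample
     * clock so the buffered counts from the two counters are aligned in time.
     * Names are assumptions; error handling omitted. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle ai = 0, ctr0 = 0, ctr1 = 0;
        float64 aiData[100];
        uInt32 c0[100], c1[100];
        int32 read = 0;

        /* A small AI task exists mainly to generate the shared sample clock. */
        DAQmxCreateTask("", &ai);
        DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(ai, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);

        /* Each counter gets its own task, clocked by the AI sample clock. */
        DAQmxCreateTask("", &ctr0);
        DAQmxCreateCICountEdgesChan(ctr0, "Dev1/ctr0", "", DAQmx_Val_Rising,
                                    0, DAQmx_Val_ExtControlled);
        DAQmxCfgSampClkTiming(ctr0, "/Dev1/ai/SampleClock", 1000.0,
                              DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

        DAQmxCreateTask("", &ctr1);
        DAQmxCreateCICountEdgesChan(ctr1, "Dev1/ctr1", "", DAQmx_Val_Rising,
                                    0, DAQmx_Val_ExtControlled);
        DAQmxCfgSampClkTiming(ctr1, "/Dev1/ai/SampleClock", 1000.0,
                              DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

        /* Start the counters first so they are waiting when the clock starts. */
        DAQmxStartTask(ctr0);
        DAQmxStartTask(ctr1);
        DAQmxStartTask(ai);

        DAQmxReadCounterU32(ctr0, 100, 10.0, c0, 100, &read, NULL);
        DAQmxReadCounterU32(ctr1, 100, 10.0, c1, 100, &read, NULL);
        DAQmxReadAnalogF64(ai, 100, 10.0, DAQmx_Val_GroupByChannel,
                           aiData, 100, &read, NULL);
        printf("Read %ld aligned samples from each counter\n", (long)read);

        DAQmxClearTask(ctr0);
        DAQmxClearTask(ctr1);
        DAQmxClearTask(ai);
        return 0;
    }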

  • Servo motor encoder pulses/counter data erroneous

    First off, I am very new to using LabVIEW.  I am trying to complete a project a former employee was working on.
    For a quick background on what I am working with, I am using an NI DAQCard-6036E connected to an SC-2345.  The SC-2345 is then connected to a load sensor, an Omron R88D servo driver, and an Omron servo motor.  The servo motor has an incremental encoder with a resolution of around 2048 pulses per revolution.  My LabVIEW program includes a counter that records the data from the encoder on the servo motor.  I have been able to get accurate data when testing through Measurement & Automation Explorer by manually turning the motor.  Also, when running just the DAQ Assistant I am using for my counter, I get correct readings when manually turning the motor.  Once I run my complete program, instead of getting 2048 pulses per revolution, I am getting between 34000-36000 pulses per revolution.  The most logical assumption is that I am getting vibration in the motor itself or some sort of noise is interfering with my signal.  I first attempted to change any possible settings through the Omron servo driver that might reduce vibration in the motor.  I tried changing the rigidity settings, turning the auto-tuning function on and off, and a few other settings specified by the user manual that might cause vibration.  If I turn the rigidity settings as low as possible, I am able to get around 2000 pulses per revolution, but the data is very sporadic. Also, my equipment needs to be very rigid, and with the lowest rigidity setting for the servo driver, I am able to almost stop the motor with minimal force.  My equipment needs to be able to travel at a near-constant speed with fluctuations of up to 200 N of force.  Any suggestions on which direction I should go in finding a countermeasure?
    Thanks

    The model number of the servo motor is R88M-W10030L.  The servo motor rotates at a constant speed.  The program is designed to drive the servo motor, connected to a ball screw, in one direction.  Once the load sensor reaches a desired load, it will reverse at a constant speed until no load is on the sensor. Throughout, it records load vs. displacement.   I have found a few things that will alter the pulse counts.  If you apply resistive pressure on the servo motor while it is rotating, the pulse output will vary.  Also, when you apply pressure to the casing of the servo motor itself, the pulses will often jump around. I was almost certain my false pulses were caused by vibration.  After having no success adjusting settings to reduce vibration (according to the user manual), I ran the program while moving around several wires to see if any were loose, etc... After applying force to the power lines and encoder cable, the program ran several times with an average of 2000 pulses per revolution and would actually subtract pulses while going in reverse (what I want it to do); although the average was around 2000 pulses per revolution, I saw positive and negative jumps in pulse counts while traveling forward at constant speed.  Today I re-wired the equipment, separating as many wires as possible.  After the re-wire, the equipment/program is back to sending 34000+ pulses per revolution, and does not subtract pulses while reversing.  I have read the 'Using Quadrature Encoders with E Series DAQ Boards' article.  Referring to the article, I am running similar to "method 1".  I am already using a signal conditioning box, but have the counter signals run directly through. Do you believe running the signals through an SCC-CTR01 might solve the problems?
