PXI-6515

I have a failure on Port 6 of my PXI-6515 caused, I believe, by pins 49 & 50 on the cable lead from my SCB-100 being out of line. They therefore did not engage with the PXI card properly, shorting themselves together and possibly contacting one of the other pins in the PXI card. From my tests within MAX the other output ports (4, 5 & 7) are okay, but port 6 now looks like it is floating and not operating as an output. Does anyone know if the card can be repaired, or is it now a write-off? There appear to be two fuses, which are okay.
From a hardware point of view, can anyone explain what has happened?
Any feedback appreciated.
Steve

Hi Steve,
Thanks for posting.
We offer a number of repair services depending on your needs; see the link below for more information.
NI Repair Services - http://www.ni.com/services/repair.htm
The first step in getting the product repaired is to contact your local NI branch where we will run through the RMA (Return Merchandise Authorization) process with you. After this you will receive a quote for repair from our customer services department.
Without having the card in front of me it's hard to say what's gone wrong with it. I can't imagine a short between pins 49 and 50 causing a problem unless there was some very high voltage running through them. Looking at the pin assignments for the card (manual - http://www.ni.com/pdf/manuals/372172b.pdf), these pins are a +5V supply and ground. Have you tried using the device with another cable? If a particular cable causes a short between these two lines, I could imagine seeing the behaviour you are experiencing.
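If it helps to sanity-check port 6 from software as well as from the MAX test panels, below is a minimal NI-DAQmx C sketch that drives all eight lines of port 6 so you can probe them at the connector block. This is only a sketch: "Dev1" is a placeholder for whatever device name MAX has assigned to your 6515, and error checking is omitted.

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    uInt8 high[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    int32 written = 0;

    DAQmxCreateTask("", &task);
    /* "Dev1" is a placeholder; use your 6515's name from MAX */
    DAQmxCreateDOChan(task, "Dev1/port6/line0:7", "",
                      DAQmx_Val_ChanForAllLines);
    DAQmxStartTask(task);
    /* Drive all eight lines of port 6, then probe at the connector block */
    DAQmxWriteDigitalLines(task, 1, 0, 10.0, DAQmx_Val_GroupByChannel,
                           high, &written, NULL);
    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}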
Best regards,
Paul

Similar Messages

  • Collector Emitter Saturation Voltage on Output of NI 6515

    The outputs on the NI PXI-6515 are open-collector Darlington pairs. In my application the emitter is tied to ground and a 10k resistor is tied between the collector and +5 VDC. A measurement is taken at the collector to determine high or low (TTL values).
    NI does not publish the collector-emitter saturation voltage for the Darlington outputs on the NI 6515. What is the full saturation current, and what is the collector-emitter voltage at full saturation?
    Thanks
    Mike

    Hi mdricken,
    Thanks for posting to the forums. I think this document has the specs you're looking for:
    Expected Voltage Drop on the Digital Output Lines of NI-651x Devices:
    http://digital.ni.com/public.nsf/allkb/8D46388A20C5A23E862572B8007E8C82
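    As a rough sanity check in the meantime (assuming a generic Darlington V_CE(sat) of about 1 V, a typical transistor figure rather than an NI spec): with the emitter grounded and a 10k pull-up to +5 V, the on-state sink current is only about (5 V - 1 V) / 10 kOhm ≈ 0.4 mA, and the logic-low level you measure at the collector is V_CE(sat) itself. Because a Darlington pair saturates around a volt rather than the couple of hundred millivolts of a single transistor, that low level can sit close to the 0.8 V TTL V_IL limit, which is exactly the effect the document above quantifies.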
    AC - LV Solver

  • Connecting the SCB-100 to a Solid State Relay

    Hey everyone,
    I am trying to hook up a solid state relay to the SCB-100, which is connected to a PXI 6515 device. I was just wondering where I can purchase connecting wire that is ideal for this purpose. A quick look online has yielded few helpful results. A link to Grainger/Zoro Tools or others would be appreciated. The specifications: the wire must be able to withstand 25 V and 25 A. It should also be easy to wire into the SCB-100. Any AWG will do.
    Thanks!

    Hi tompkins92,
    Here is some information about AWG cables and their maximum current:
    http://diyaudioprojects.com/Technical/American-Wire-Gauge/
    Here is a place you might be able to purchase the appropriate wire:
    http://www.mouser.com/Wire-Cable/Hook-up-Wire/_/N-5ggs
    You can select the voltage and current specifications you want.
    I have one question: did you mean 25 mA current? Neither the SCB-100 nor the PXI 6515 can safely handle 25 amps.
    http://www.ni.com/pdf/manuals/371224b.pdf
    http://www.ni.com/pdf/manuals/372199c.pdf
    Regards,
    Kelsey Johnson
    Applications Engineer
    National Instruments
    http://www.ni.com/support
    (866) 275-6964

  • How to delete the whole array programmatically

    Hello,
    Is it possible to destroy/kill an array programmatically (not delete a subset)? If yes, please tell me how. Thanks in advance.
    Sugento

    Hi Devchander,
    Thank you for your reply. I'm really surprised. Are you always working? If I empty an array, LabVIEW still keeps that particular array in memory, doesn't it? What if I want to completely destroy the array to prevent memory issues?
    I have a PXI-6515, an industrial 24 V DIO card, that I use to monitor optical sensors. I just found out after I hooked it up that the output from DAQmx for DIO is a 1-D boolean array. I'd like to be able to plot the digital signals on the Digital Waveform Graph, and to do that I have to use a shift register and concatenate the previous array with the newly read digital signal. I only want to keep the array while a certain criterion is met. When it's no longer met, I want to "destroy" the array. Attached is a sample VI. Is this an efficient way to plot a digital signal? Also, since I'm using 24 V, I can't use the TTL counter. Is there a VI readily available to count signal highs and lows? I have made a VI that specifically does this, but I'm just wondering if there is a LabVIEW function that does it. Thanks.
    Peter
    Attachments:
    testing Digital signal array.vi ‏33 KB

  • Detect DIO line change on PCI-6250

    Using Visual C++, how can I detect when a digital line goes high on an NI PCI-6250 M-series board? I want to make a loop that will cycle each time a digital input goes high (with millisecond or better accuracy).
    Thanks,
    Mark

    Actually, there are two ways to do this, if your card supports Digital Change Detection timing. Unfortunately the PCI-6250 does not, but the following cards do: PCI-6509, PXI-6509, PCI-6510, PCI-6511, PXI-6511, PCI-6514, PXI-6514, PCI-6515, PXI-6515, PCI-6518, PCI-6519, PCI-6527, PXI-6527, PCI-6528, PXI-6528, PXI-6533, PCI-6533, PXI-6534, PCI-6534, and perhaps others.
    In any case, if you do have one of these cards with hardware support for Digital Change Detection timing, then you can do one of the following:
    a) Create a DI channel on your task, and then configure digital change detection timing using CNiDAQmxTiming::ConfigureChangeDetection(). Then perform either synchronous or asynchronous single-sample reads; when one of the digital lines of interest changes, then the blocking synchronous read will finish or the asynchronous read will call your callback. This is demonstrated with the Digital \ Read Values \ ReadDigChan_ChangeDetection example.
    b) Create a DI channel on your task, configure digital change detection timing, and then set up a Digital Change Detection event on the task. This is demonstrated with the Digital \ Read Values \ ReadDigChan_ChangeDetection_Events example, although (a) is the preferred method.
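    If you prefer the plain NI-DAQmx C API to the Measurement Studio classes, the change-detection setup looks roughly like the sketch below. This is illustrative only: "Dev1" is a placeholder for the device name MAX assigns, and error checking is omitted for brevity.

    #include <NIDAQmx.h>
    #include <stdio.h>

    int main(void)
    {
        TaskHandle task = 0;
        uInt8 lines[8];
        int32 sampsRead = 0, bytesPerSamp = 0;

        DAQmxCreateTask("", &task);
        DAQmxCreateDIChan(task, "Dev1/port0/line0:7", "",
                          DAQmx_Val_ChanForAllLines);
        /* Interrupt on a rising edge of any of the eight lines */
        DAQmxCfgChangeDetectionTiming(task, "Dev1/port0/line0:7", "",
                                      DAQmx_Val_ContSamps, 1);
        DAQmxStartTask(task);

        /* This read blocks (up to 10 s) until a change is detected */
        DAQmxReadDigitalLines(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                              lines, sizeof(lines), &sampsRead,
                              &bytesPerSamp, NULL);
        printf("Change detected; line 0 is now %d\n", lines[0]);

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }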
    Since the PCI-6250 doesn't support hardware digital change detection, what you have to do instead is to continuously read the status of the digital lines, and check to see if the lines went high programmatically. If you do not explicitly configure the timing for the acquisition, then the timing mode will be on-demand and the data will be read as fast as possible. You also have the option of calling CNiDAQmxTiming::ConfigureSampleClock() to configure the clock rate based off of another signal, such as PFI7. Here's a sample:
    try
    {
        CNiDAQmxTask t("");
        t.DIChannels.CreateChannel("6250/port0/line7:0", "",
            DAQmxOneChannelForAllLines);
        // Sample on each rising edge of the external clock on PFI7
        t.Timing.ConfigureSampleClock("/6250/PFI7", 0,
            DAQmxSampleClockActiveEdgeRising,
            DAQmxSampleQuantityModeContinuousSamples, 1);
        CNiDAQmxDigitalSingleChannelReader reader(t.Stream);
        bool stopLoop = false;
        while (!stopLoop)
        {
            // Poll the current state of all eight lines
            CNiBoolVector data;
            reader.ReadSingleSampleMultiLine(data);
            for (unsigned int i = 0; i < data.GetSize(); i++)
            {
                if (data[i])
                {
                    MessageBox("Went high.");
                    stopLoop = true;
                    break;
                }
            }
        }
    }
    catch (CNiDAQmxException* ex)
    {
        ex->ReportError();
        ex->Delete();
    }
    Hope this helps,
    Hexar Anderson
    Measurement Studio Software Engineer
    National Instruments

  • Poor PXI IO performance on Latitude E6410 with ExpressCard 8360

    Hello,
    I have a Dell Latitude E6410 with a Core i5 M520 which is giving me very poor I/O performance when using an ExpressCard 8360 card to connect to a PXI rack.
    The sustained I/O rate that I can get appears to be about one third of what I get using the same ExpressCard on a Dell Latitude E6400 (with a Core2Duo processor).
    I am using the A05 BIOS (the latest at the time of writing) on the E6410.
    Wade.

    I am running Windows XP (32 bit) sp3 in both cases.
    The E6410 has 4GByte of memory fitted.
    The E6400 has 2GByte of memory fitted.
    I have also used the same ExpressCard 8360 via a PXIe-to-ExpressCard adapter in a desktop machine, with performance figures similar to the E6400 - i.e. much better than the E6410.
    The desktop machine is an HP Compaq D7900 with 4 GByte of memory and a Core2Duo E8500, also running Windows XP SP3 (32-bit).
    Also, on the desktop, I am running NI PXI Platform Services 2.3.2 and NI-VISA runtime version 4.3.
    On the E6410, I am running NI PXI Platform Services 2.5.2 and NI-VISA runtime version 4.6.
    I no longer have access to the E6400 so I am not sure what software versions were installed. However, they are unlikely to be newer than the versions installed on the E6410.
    Wade.

  • Choosing a PXIe controller for streaming 200 MBps

    Warning: This is a long post with several questions. My apologies in advance.
    I am a physics professor at a small liberal-arts college, and will be replacing a very old multi-channel analyzer for doing basic gamma-ray spectroscopy. I would like to get a complete PXI system for maximum flexibility. Hopefully this configuration could be used for a lot of other experiments such as pulsed NMR. But the most demanding role of the equipment would be gamma-ray spectroscopy, so I'll focus on that.
    For this, I will need to measure either the maximum height of an electrical pulse, or (more often) the integrated voltage of the pulse. Pulses are typically 500 ns wide (at half maximum), and between roughly 2-200 mV without a preamp and up to 10 V after the preamp. With the PXI-5122 I don't think I'll need a preamp (better timing information and simpler pedagogy). A 100 MHz sampling rate would give me at least 50 samples over the main portion of the peak, and about 300 samples over the entire range of integration. This should be plenty, if not a bit of overkill.
    My main questions are related to finding a long-term solution, and keeping up with the high data rate. I'm mostly convinced that I want the NI PXIe-5122 digitizer board, and the cheapest (8-slot) PXIe chassis. But I don't know which controller to use, or which software environment (LabVIEW / LabWindows / homebrew C++). This system will likely run about $15,000, which is more than my department's yearly budget. I have special funds to accomplish this now, but I want to minimize any future expenses in maintenance and updates.
    The pulses to be measured arrive at random intervals, so performance will be best when I can still measure the heights or areas of pulses arriving in short succession. Obviously if two pulses overlap, I have to get clever and probably ignore them both. But I want to minimize dead time - the time after one pulse arrives before I become receptive to the next one. Dead times of less than 2 or 3 microseconds would be nice.
    I can imagine two general approaches.  One is to trigger on a pulse and have about a 3 us (or longer) readout window.  There could be a little bit of pileup inspection to tell if I happen to be seeing the beginning of a second pulse after the one responsible for the trigger.  Then I probably have to wait for some kind of re-arming time of the digitizer before it's ready to trigger on another pulse.  Hopefully this time is short, 1 or 2 us.  Is it?  I don't see this in the spec sheet unless it's equivalent to minimum holdoff (2 us).  For experiments with low rates of pulses, this seems like the easiest approach.
    The other possibility is to stream data to the host computer, and somehow process the data as it rolls in.  For high rate experiments, this would be a better mode of operation if the computer can keep up.  For several minutes of continuous data collection, I cannot rely on buffering the entire sample in memory.  I could stream to a RAID, but it's too expensive and I want to get feedback in real time as pulses are collected.
    With this in mind, what would you recommend for a controller? The three choices that seem most reasonable to me are an embedded controller running Windows (or Linux?), an embedded controller running the LabVIEW Real-Time OS, or a fast interface card like the PCIe-8371 and a powerful desktop PC. If all options are workable, which one would give me the lowest cost of upgrades over the next decade or so? I like the idea of a real-time embedded controller because I believe any run-of-the-mill desktop PC (whatever IT gives us) could connect and run the user interface, including data display and higher-level analysis. Is that correct? But I am unsure of the life-span of an embedded controller, and am a little wary of the increased cost and need for periodic updates. How are real-time OS upgrades handled? Are they necessary? Real-time sounds nice and all that, but in reality I do not need to process the data stream in a real-time environment. It's just the computer and the digitizer board (not a control system), and both should buffer data very nicely. Is there a raw performance difference between the two OSes available for embedded controllers?
    As for live processing of the streaming data, is this even possible? I'm not thinking very precisely about this (I would really have to just try and find out), but it seems like it could possibly work on a 2 GHz dual-core system. It would have to handle 200 MBps, but the data processing is extremely simple. For example, one thread could mark the beginnings and ends of pulses, and do simple pile-up inspection. Another thread could integrate the pulses (no curve fitting or interpolation necessary, just simple addition) and store results in a table or list. Naively, I'd have not quite 20 clock cycles per sample. It would be tight. Maybe just getting the data into the CPU cache is prohibitively slow. I'm not really even knowledgeable enough to make a reasonable guess. If it were possible, I would imagine that I would need to code it in LabWindows/CVI and not LabVIEW. That's not a big problem, but does anyone else have a good read on this? I have experience with C/C++, and some with LabVIEW, but not LabWindows (yet).
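    To make the "simple addition" concrete, here is a sketch in plain C of the kind of per-block processing described above. It is illustrative only: the threshold and baseline are made-up placeholders, there is no pile-up inspection, and a real implementation would split the work across threads as discussed.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define THRESHOLD 500  /* ADC counts; hypothetical trigger level */
    #define BASELINE    0  /* assume a zero-centered baseline */

    /* Scan one block of samples; print the area of each pulse found. */
    static void process_block(const int16_t *buf, size_t n)
    {
        size_t i = 0;
        while (i < n) {
            if (buf[i] > THRESHOLD) {            /* pulse start */
                long area = 0;
                while (i < n && buf[i] > THRESHOLD)
                    area += buf[i++] - BASELINE; /* simple summation */
                printf("pulse area: %ld\n", area);
            } else {
                i++;
            }
        }
    }

    int main(void)
    {
        int16_t demo[] = {0, 12, 620, 900, 710, 40, 0, 0};
        process_block(demo, sizeof(demo) / sizeof(demo[0]));
        return 0;
    }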
    What are my options if this system doesn't work out? The return policy is somewhat unfriendly, as 30 days may pass quickly as I struggle with the system while teaching full time. I'll have some student help and eventually a few long days over the summer. An alternative system could be built around XIA's Pixie-4 digitizer, which should mostly just work out of the box. I somewhat prefer the NI PXI-5122 solution because it's cheaper, performs better, is much more flexible, and suffers less from vendor lock-in. XIA's software is proprietary and very costly. If support ends or XIA gets bought out, I could be left with yet another legacy system. Bad.
    The Pixie-4 does the peak detection and integration in hardware (FPGAs I think) so computing requirements are minimal.  But again I prefer the flexibility of the NI digitizers.  I would, however, be very interested if data from something as fast as the 5122 could be streamed into an FPGA-based DSP module.  I haven't been able to find such a module yet.  Any suggestions?
    Otherwise, am I on the right track in general on this kind of system, or badly mistaken about some issue?  Just want some reassurance before taking the plunge.

    drnikitin,
    The reason you did not find the spec for the rearm time of the 5133 is that the USB-5133 is not capable of multi-record acquisition. The rearm time is a spec for the reference trigger, and that trigger is used when fetching the next record. So every time you want to do another fetch you will have to stop and restart your task.
    To grab a lot of data, increase your minimum record size. Keep in mind that you have 4 MB of onboard memory per channel. Since you will only be able to fetch one record at a time, there really isn't a way to use streaming. When you call fetch, it will transfer the amount of data you specify to PC memory through the USB port (up to 12 MB/s for USB 2.0, ideally).
    Topher C,
    We do have a digitizer that has onboard signal processing (OSP), which would be quicker than performing post-processing. It is the NI 5142, which is essentially a 5122 but with built-in OSP. It may be a little out of your price range, but it may be worth a look.
    For more information on streaming, take a look at these two links (if you haven't already):
    High-Speed Data Streaming: Programming and Benchmarks
    Streaming Options for PXI Express
    When dealing with different LabVIEW versions, it is important to note that previous versions will be compatible with new versions, such as going from 8.0 to 8.5. Keep in mind that if you go too far back then LabVIEW may complain, but you still may be able to run your VI. If you have a newer version going to an older version, then we do have options in LabVIEW to save your VI for older versions. It's usually just one version back, but in LabVIEW 8.5 you can save for LabVIEW 8.2 and 8.0.
    ESD,
    Here is the link I was referring to earlier about DMA transfers. DMA is actually used every time you call a fetch or read function in LabVIEW or CVI (through NI-SCOPE).
    Topher C and ESD,
    LabVIEW is a combination of a compiled language and an interpreted language. Whenever you make a change to the block diagram, LabVIEW compiles it, so that when you hit run it is ready to execute. During execution LabVIEW uses the run-time engine to reference shared libraries (such as DLLs). Take a look at this DevZone article about how LabVIEW compiles its block diagram (user code).
    I hope all of this information helps!
    Ryan N
    National Instruments
    Application Engineer
    ni.com/support

  • Convert PXIe-8135 controller to dual-boot Windows 7 and LabVIEW RT

    Hello. I have a PXIe-8135 controller that originally was just running Windows 7. We are trying to convert it to a dual-boot system to also run LabVIEW Real-Time. (There is a host computer that will run LabVIEW 2014 with the RT module, and the controller will become a target.)
    I have created a FAT32 partition on the hard drive of the controller. Now, I’m trying to install the real-time OS with a USB flash drive made using the MAX utility, but I cannot boot using the USB drive for some reason. I keep getting the message “waiting for USB device to initialize”.  
    In BIOS, legacy USB support is [ENABLED] and boot configuration is set to [Windows/other OS]. I’ve tried removing the drive, waiting, and reinserting. I’ve tried two different USB drives (both 8 GB, different brands).
    I’m not sure what to do next. Apart from the USB boot issue, is converting the PXIe-8135 even possible?  I read about SATA/PATA hard drive issues with older controllers, but I don't know about this one.
    Thanks, in advance, for your help!
    -Jeff

    Per Siana's licensing comment, more information on purchasing a deployment license if you do not have one for this target can be found here.
    The RT Utility USB key is used to set up non-NI hardware with LabVIEW Real-Time, but you should not need it in this situation to convert to dual-boot (*). Try this:
    1. Since you already have a FAT32 partition created, go into BIOS setup and change the boot configuration to 'LabVIEW RT'.
    2. The system will attempt to boot LabVIEW RT, see that the partition is empty, and switch over into LabVIEW RT Safe Mode. (This safe mode is built into the firmware, which is why you don't really need the USB key.)
    3. The system should come up correctly and be detectable from MAX, and you can proceed with installing software.
    4. To switch back to Windows, go back to BIOS setup and choose 'Windows/Other OS'.
    (*) One area where the USB key is helpful on a dual-boot system is with formatting the disk. If you want to convert from FAT32 to Reliance on the partition designated for LabVIEW RT, the USB key lets you attempt to format a single partition and leave the rest of the disk untouched. If you format from MAX, the standard behavior is to format only one RT partition if found; if none is found, it will format the entire disk. Formatting from MAX on a dual-boot system is consequently riskier, and you could lose your Windows partition.

  • Start and Stop Trigger using PXI-6120 and DigitalStartAndStopTrigger.vi not working :-(

    Hello,
    I've been trying for a while now to get my PXI unit to capture a waveform between a start and a stop (reference) trigger using the NI example DigitalStartAndStopTrigger.vi downloaded from the NI website. However, whilst the start and stop triggers seem to be working (i.e. the VI runs and stops at the correct times), there is never any data read from my DAQmx-compatible PXI-6120 card. So I can see the VI is running around the acquisition loop, but the property node AvailSampPerChan always returns zero... this has me slightly puzzled. I thought this might just be a driver issue, so I've updated my box to the following software versions (see below) and installed the latest drivers, e.g. DCDNov07.exe (also from the NI site), but nothing has changed.
    my software as of now.
    LabVIEW 7.1 (with the 7.1.1 upgrade applied)
    MAX 4.3.0.49152
    DAQmx 8.6.0f12
    Trad DAQ 7.4.4f7
    before I updated I had the same problem but with the following versions:
    LabVIEW 7.1 (with the 7.1.1 upgrade applied)
    MAX 4.2.1.3001
    DAQmx 8.5.0f5
    Trad DAQ 6.9.3f4
    So to cut a long story short I still have the same problem with the triggers... does anybody have any ideas what is going wrong?
    To add insult to injury, the traditional DAQ example ai_start-stop_d-trig.vi was almost working correctly before I did the upgrade. It had the strange behaviour of capturing the AI0 channel but on the wrong edges (e.g. if I set Start on Rise and Stop on Fall it would do the opposite: Start on Fall and Stop on Rise).
    I'm going to leave my box doing a mass compile overnight, but I'd really like it if someone could suggest a solution or point me in the right direction.
    Many thanks,
    Mike

    Hi Graham
    I'm out of the lab today but I'll try and answer your questions as best I can...
    1) What are the values you have set for Buffer size, Rate, samples per read and post trigger Samples?
    At the moment I have all the values (e.g. sample rate, buffer size etc) unchanged apart from the ones I mentioned in my previous post (see above). I have in the past played around with changing the buffer sizes and rates in the example VI but as this appeared to have no effect on the behaviour I now have them setup as in the download.
    2) Does the program end after the stop trigger is implemented?
    Yep, if I toggle the trigger line high then low I see the program exits the read loop and the VI stops running as expected.
    3) Lastly, can you give me the details of your triggering method? Are you using a digital train of user-set digital pulses? How long is the program running?
    I'm using WriteDigChan.vi to manually toggle the first digital line of the PXI-6733 card, which is wired directly to PFI0 of the PXI-6120 card. Generally, I just start the VI running and then toggle the line high, wait a couple of seconds and then toggle it low.
    To me it all looks like it should be acquiring samples but as I said yesterday it just refuses to fill the buffer with any data (and hence no samples are read).
    Any ideas? And thanks for your help,
    Mike

  • Trouble capturing waveform from PXI-4472

    I'm really a very green newbie at this stuff, so bear with me...
    I've got a PXI-4472 data acquisition board and a PXI-5411 waveform generator. I've connected the arbitrary output of the 5411 to the channel 0 input on the 4472. An external oscilloscope shows a 1 V amplitude sine wave being generated.
    I created a very simple VI to show what the 4472 is capturing. It connects a NI-DAQ channel I generated to the standard "AI Acquire Waveform.vi", then out to a Waveform Chart, all within a while loop with a Stop button. Problem is, all the waveform chart seems to be showing is the running average of the waveform instead of the form itself (solid line, a tad above zero).
    I can hook the 4472 input channel up to a DC-out power supply, and when I vary the voltage, the waveform chart changes as well.
    So my question (whew!): What's wrong here that's not allowing me to capture a waveform from the 4472 (in turn from the 5411) and display it on my waveform chart?
    Thanks in advance for the help.

    Never mind.... it was a sample rate problem. I upped the sample rate and it came out ok.
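    (The likely mechanism, for anyone finding this later: sampling well below the sine frequency aliases the waveform down to a slowly varying, near-DC trace, which is why the chart looked like a running average. Sampling at several times the generated frequency, comfortably above the Nyquist rate, recovers the true shape.)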

  • Triggering PXI-4110 to measure 1 current value while HSDIO PXI-6552 generates a waveform

    Hi,
    Some questions about using the PXI-4110 to measure current while the PXI-6552 is generating a waveform.
    1. Let's say I need to measure 3 points of current values, i.e. while the PXI-6552 is generating samples 1000, 2000 and 3500. On the edge of samples 1000, 2000 and 3500, the PXI-6552 will send a pulse via a PFI line or via a PXI backplane trigger line. My question is: is it possible to trigger the PXI-4110 (by hardware or software trigger) to measure current values at these points?
    2. Let's say I need to measure the current at 0 ms (the start of waveform generation by the PXI-6552), 1 ms, 2 ms, 3 ms, 4 ms... and so on, for 1000 points of measurement, with the code diagram as shown in the figure below. Is it possible for the VI "niDCPower Measure Multiple" to measure exactly at 1 ms, 2 ms, 3 ms...? How much time does it take to acquire 1 point of measurement with "niDCPower Measure Multiple"?
    Thanks for viewing this post. Any advice on hardware or software methods is much appreciated. Thanks in advance.
    Message Edited by engwei on 02-02-2009 04:24 AM
    Attachments:
    [email protected] ‏46 KB

    Hi engwei,
    1. Unfortunately, the 4110 does not support hardware triggering. Therefore you cannot implement direct triggering through the backplane or anything like that. However, there are a couple of possible workarounds you can try:
    a) Use software triggering: Say your 6552 is generating in one while loop, and your 4110 is to measure in another while loop. You can use a software synchronization method like notifiers to send a notification to your 4110 loop when your 6552 has generated the desired sample. This method, however, will not be very deterministic, because the delay between the trigger and the response depends on your processor speed and load. Therefore, if you have other applications running in the background (like antivirus) it will increase the delay.
    b) Use hardware triggering on another device: If you have another device that supports hardware triggering (maybe an M-series multifunction DAQ module), you can configure this device to be triggered by a signal from the 6552, perform a very quick task (like a very short finite acquisition), then immediately execute the DCPower VI to perform the measurement. The trigger can be configured to be retriggerable for multiple usage. This will most likely have a smaller time delay than the first option, but there will still be a delay (the time it takes to perform the short finite acquisition on the M-series). Please refer to the attached screenshot for an idea of how to implement this.
    2. To make your 4110 measure at specific time intervals, you can use one of the methods discussed above. As for how long it will take to acquire 1 measurement point, you may find this link helpful: http://zone.ni.com/devzone/cda/tut/p/id/7034
    This article is meant for the PXI-4130 but the 4110 has the same maximum sampling rate (3 kHz) and so the section discussing the speed should apply for both devices.
    Under the Software Measurement Rate section, it is stated that the default behavior of the VI is to take an average of 10 samples. This corresponds to a maximum sampling rate of 300 samples/second. However, if you configure it to not do averaging (take only 1 sample) then the maximum rate of 3000 samples/second can be achieved.
    It is also important to note that your program can only achieve this maximum sampling rate if your software loop takes less time to execute than the actual physical sampling. For example, if you want to sample at 3000 samples/second, taking one sample takes 1/3000 seconds, or 333 microseconds. If your software execution time is less than 333 microseconds, then you can achieve this maximum rate (because the speed is limited by the hardware, not the software). However, if your software takes more than 333 microseconds to execute, then the software loop time will define the maximum sampling rate you can get, which will be lower than 3000 samples/second.
    I hope this answers your question.
    Best regards,
    Vern Yew
    Applications Engineer, NI ASEAN
    Attachments:
    untitled.JPG ‏18 KB

  • Problems performing offset null and shunt calibration in NI PXI-4220

    I am using a 350 ohm strain gauge for the measurements. I have already created a task in MAX. When I try to perform an offset null on the task, the program shows a progress bar and the LEDs on the 4220 board start blinking, but when the progress bar stops, MAX hangs. It has been impossible for me to perform the offset null; what can I try?
    Apart from the gauge parameters, what are the correct values for the other parameters for strain measurements?

    Hello,
    Thank you for contacting National Instruments.
    Usually when this problem occurs, it is due to an incorrect task configuration or an incorrectly matched quarter-bridge completion resistor. Ensure that you have the correct strain configuration chosen. The default is Full Bridge I. If you only have a single strain gauge in your configuration, you will need to change your configuration. Also, if you are using a quarter-bridge completion resistor, make sure that it is 350 Ohm, not 120 Ohm. If the resistor is 120 Ohm, you will more than likely not be able to null your bridge.
    Please see the PXI-4220 User Manual for more information about your configuration and signal connections: http://digital.ni.com/manuals.nsf/websearch/F93CCA9A0B4BA19B86256D600066CD03?OpenDocument&node=132100_US
    Also, you can download and install the latest NI-DAQ 7.2 driver: http://digital.ni.com/softlib.nsf/websearch/50F76C287F531AA786256E7500634BE3?opendocument&node=132070_US
    This 7.2 driver has a signal connections tab, displayed when configuring your DAQmx task, which shows you how to correctly connect your signals.
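    To put a rough number on why the mismatch prevents nulling (illustrative only; the 2.5 V excitation is an assumed figure, not necessarily your setting): with a 350 Ohm gauge completed by a 120 Ohm resistor, the half-bridge midpoint sits at 350/(350+120) ≈ 0.74 of the excitation instead of 0.50, an offset on the order of 0.24 x 2.5 V ≈ 0.6 V. The offset-null circuitry is designed to trim out millivolt-level imbalances from gauge tolerance, not hundreds of millivolts, so the null operation cannot converge.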
    Regards,
    Bill B
    Applications Engineer
    National Instruments

  • Memory upgrade on PXI-8105 and PXI-8106 controllers

    Hi,
    I've recently upgraded the memory of three PXIs; one with a PXI-8105 controller and two with PXI-8106 controllers. Both the 8105 and 8106 can take a maximum of 4 GB (2x2 GB) of DDR2-667 (PC2-5300) memory (see links below). However, on all three systems, both the BIOS and the OS only see 3.3 GB. Any idea why this might be the case?
    I've tried flashing the BIOS (v1.4 on both PXIs), but with no success.
    We're using COTS memory (i.e. not bought from NI) but I'd be hard pushed to believe that that is the cause of the problem.
    Thanks.
    Links;
    Max memory capacity of PXI-8105: http://sine.ni.com/nips/cds/view/p/lang/en/nid/202630
    Max memory capacity of PXI-8106: http://sine.ni.com/nips/cds/view/p/lang/en/nid/203442
    BIOS upgrade page: http://digital.ni.com/public.nsf/allkb/9C9362590B05CD6E86256B270082164A

    Are these controllers running Windows? If so, this is normal expected behavior. I think it might have something to do with the fact that Windows reserves the rest of the memory for driver addressing (could be totally wrong there). The same thing happens on my Dell desktop PC with 4GB of memory in it.
    Jarrod S.
    National Instruments
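    For reference, the usual explanation: a 32-bit OS has a 4 GB physical address space, and the chipset maps PCI/MMIO regions (device BARs, the graphics aperture, the BIOS) into the top of that space. RAM behind those addresses is hidden, so typically only about 3.2-3.5 GB remains visible, which matches the 3.3 GB reported here. A 64-bit OS, or hardware and OS support for remapping memory above 4 GB, recovers the full amount.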

  • How can I get the conditioned output from the SCXI-1520 in the PXI-1011 combined chassis?

    Respected Sir,
    I am using the PXI-SCXI combined chassis PXI-1011 for my application. I have placed three SCXI-1520 modules, a PXI-7352 motion card and a PXI-6052E DAQ card in the combined chassis. As you know, the SCXI-1520 and PXI-6052E are connected internally through the SCXI backplane, which is not user-accessible. Now I need the conditioned output of the SCXI-1520 to be used as an analog input for the PXI-7352 motion control card. How can I do that? Would the SCXI-1180 solve my problem? If so, how do I connect it to the PXI-1011?
    Kindly clarify as soon as possible.
    Thanking you,
    Ramkumar. D

    Dear Sir,
    I have already placed my DAQ card at the correct place and configured it. I need some more clarification from you. I have attached my Query in .txt format.
    Kindly reply as soon as possible.
    Thanks,
    Ramkumar. D
    Attachments:
    Clarification.txt ‏2 KB

  • How to connect a Compact RIO to a PXI System

    Hello,
    I want to connect my cRIO-9074 to my PXI system.
    First of all, my hardware configuration:
    1. "PXI-1036" chassis with "PXI-8101" embedded controller and "PXI-8231" Ethernet card
    2. "cRIO-9074" integrated 400 MHz real-time controller
    3. "NI 9144" EtherCAT slave chassis
    The PXI system is connected to my network through the Ethernet port on the "PXI-8101" controller. The "NI 9144" is connected to the "cRIO-9074" through their EtherCAT ports. That works without a problem.
    Now I want to connect my "cRIO-9074" to my PXI system through the Ethernet port on the "PXI-8231", but I can't find a way to get this to work.
    In MAX I configured the second Ethernet port on the PXI as "TCP/IP" with an IP and subnet. Then I connected the cRIO and turned it on.
    But I can't find the cRIO anywhere in MAX or in a LabVIEW Real-Time project.
    When I configure the "PXI-8231" as an EtherCAT port, I can connect my "NI 9144" to it and use it in a LabVIEW project. But configured as Ethernet and connected to the "cRIO-9074" it doesn't work.
    Is there any way to get this working? Or is this not possible with my hardware?
    I know I could connect the cRIO and PXI through a switch to my network and then use both in a LabVIEW Real-Time project. But I want to build a mobile measurement station with as few devices as possible. If there is no other way I will use a switch, but without one would be better.
    Thanks,
    Daniel Löffler

    Hi Daniel,
    it is not possible to connect the cRIO-9074 controller to the PXI system through the Ethernet port, because the RT cRIO controller cannot behave as a slave. The cRIO controller is always configured as a master, never as a slave. The NI 9144 extension chassis is configured as a slave, so you can use it with your cRIO controller or with a PXI controller. The chain PXI controller -> cRIO controller -> NI 9144 is not possible.
    Best regards,
    ENIA
    NI Germany
