PXI MTBF

I plan to deploy 10 PXI-4472s in a PXI chassis with a chassis controller, where maintenance access will be difficult and expensive. I would like to identify the most reliable chassis and controller, and gain some insight into the predicted failure rate of such a system.
What about the presence of the hard disk in the controller? Are there NI solid state replacements for this disk?

It sounds like you're going to need a chassis with more than 8 slots. In that case, I'd recommend the 14-slot or 18-slot chassis (PXI-1044 or PXI-1045, respectively), since adding MXI extensions to span multiple chassis adds expense and component count. For the controller, I'd recommend the PXI-8196 as an embedded option, or MXI-Express if you're hoping to stream back to a PC with fast hard drives. The PXI-819x series of controllers uses the latest manufacturing and design improvements for increased performance and reliability.
For the hard drive, you can order a solid-state drive, but they are traditionally slower and much more expensive than the standard embedded-controller hard drives. I would only recommend one if the system will be deployed where it will experience vibration. If you do choose a solid-state drive, you can have it installed through NI's Factory Installation Services (FIS).
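To put a rough number on the system-level failure rate: in a simple series model (the system is down if the chassis, the controller, or any module fails), the component failure rates add. Here is a minimal C sketch; every MTBF figure in it is a hypothetical placeholder, not an NI specification:

    /* Series-system MTBF: failure rates (1/MTBF) add, so
     * MTBF_system = 1 / sum(1/MTBF_i). All figures are placeholders. */
    #include <stdio.h>

    int main(void) {
        double chassis_mtbf    = 300000.0;  /* hours, assumed */
        double controller_mtbf = 150000.0;  /* hours, assumed */
        double module_mtbf     = 250000.0;  /* hours per PXI-4472, assumed */
        int    module_count    = 10;

        double failure_rate = 1.0 / chassis_mtbf
                            + 1.0 / controller_mtbf
                            + module_count / module_mtbf;

        printf("System MTBF: %.0f hours\n", 1.0 / failure_rate);  /* 20000 */
        return 0;
    }

Note how the ten modules dominate the combined rate: with figures like these, the module MTBF matters more than the chassis or controller figures.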

Similar Messages

  • I need estimated MTBF for a list of products

    Our system customer has requested estimated MTBF.  I therefore need estimated MTBF for the following PXI products: PXI-6509, PXI-6542, PXI-7813R, PXI-8431/8, PXI-8432/4, PXI-7953R.
    It is understood that these are estimates only and will be accompanied by disclaimers. A full 217 (MIL-HDBK-217) calculation is not expected or desired. I have obtained estimates from the other system component suppliers. How do I obtain these estimates from NI?

    I did some more research on how our MTBF figures are calculated. Generally, including for this card, our MTBFs are calculated based on design analysis, not field failure rates. The method used to determine this particular card's MTBF was Bellcore Issue Level II.
    Dominik E.
    Applications Engineer
    National Instruments

  • Poor PXI IO performance on Latitude E6410 with ExpressCard 8360

    Hello,
    I have a Dell Latitude E6410 with a Core i5 M520 which is giving me very poor IO performance when using an ExpressCard 8360 to connect to a PXI rack.
    The sustained IO rate I can get appears to be about 1/3 of what I can get using the same ExpressCard on a Dell Latitude E6400 (with a Core2Duo processor).
    I am using the A05 BIOS (latest at the time of writing) on the E6410.
    Wade.

    I am running Windows XP (32 bit) sp3 in both cases.
    The E6410 has 4GByte of memory fitted.
    The E6400 has 2GByte of memory fitted.
    I have also used the same ExpressCard 8360 via a PXIe-to-ExpressCard adaptor in a desktop machine, with performance figures similar to the E6400 - i.e. much better than the E6410.
    The Desktop Machine is an HP Compaq D7900 with 4GByte of memory, Core2Duo E8500 also running Windows XP sp3 (32 bit).
    Also, on the Desktop, I am running NI PXI Platform Services 2.3.2 and NI-Visa runtime version 4.3.
    On the E6410, I am running NI PXI Platform Services 2.5.2 and NI-Visa runtime version 4.6.
    I no longer have access to the E6400, so I am not sure what software versions were installed. However, they are unlikely to be newer than the versions installed on the E6410.
    Wade.

  • Choosing a PXIe controller for streaming 200 MBps

    Warning: this is a long post with several questions. My apologies in advance.
    I am a physics professor at a small liberal-arts college, and will be replacing a very old multi-channel analyzer used for basic gamma-ray spectroscopy. I would like to get a complete PXI system for maximum flexibility; hopefully this configuration could also be used for other experiments, such as pulsed NMR. But the most demanding role of the equipment would be gamma-ray spectroscopy, so I'll focus on that.
    For this, I will need to be measuring either the maximum height of an electrical pulse, or (more often) the integrated voltage of the pulse.  Pulses are typically 500 ns wide (at half maximum), and between roughly 2-200 mV without a preamp and up to 10V after the preamp.  With the PXI-5122 I don't think I'll need a preamp (better timing information and simpler pedagogy).  A 100 MHz sampling rate would give me at least 50 samples over the main portion of the peak, and about 300 samples over the entire range of integration.  This should be plenty if not a bit of overkill.
    My main questions are related to finding a long-term solution and keeping up with the high data rate. I'm mostly convinced that I want the NI PXIe-5122 digitizer board and the cheapest (8-slot) PXIe chassis. But I don't know which controller to use, or which software environment (LabVIEW / LabWindows / homebrew C++). This system will likely run about $15,000, which is more than my department's yearly budget. I have special funds to accomplish this now, but I want to minimize any future expenses for maintenance and updates.
    The pulses to be measured arrive at random intervals, so performance will be best when I can still measure the heights or areas of pulses arriving in short succession. Obviously if two pulses overlap, I have to get clever and probably ignore them both. But I want to minimize dead time - the time after one pulse arrives before I become receptive to the next one. Dead times of less than 2 or 3 microseconds would be nice.
    I can imagine two general approaches.  One is to trigger on a pulse and have about a 3 us (or longer) readout window.  There could be a little bit of pileup inspection to tell if I happen to be seeing the beginning of a second pulse after the one responsible for the trigger.  Then I probably have to wait for some kind of re-arming time of the digitizer before it's ready to trigger on another pulse.  Hopefully this time is short, 1 or 2 us.  Is it?  I don't see this in the spec sheet unless it's equivalent to minimum holdoff (2 us).  For experiments with low rates of pulses, this seems like the easiest approach.
    The other possibility is to stream data to the host computer, and somehow process the data as it rolls in.  For high rate experiments, this would be a better mode of operation if the computer can keep up.  For several minutes of continuous data collection, I cannot rely on buffering the entire sample in memory.  I could stream to a RAID, but it's too expensive and I want to get feedback in real time as pulses are collected.
    With this in mind, what would you recommend for a controller? The three choices that seem most reasonable to me are an embedded controller running Windows (or Linux?), an embedded controller running the LabVIEW Real-Time OS, or a fast interface card like the PCIe-8371 and a powerful desktop PC. If all options are workable, which one would give me the lowest cost of upgrades over the next decade or so? I like the idea of a real-time embedded controller because I believe any run-of-the-mill desktop PC (whatever IT gives us) could connect and run the user interface, including data display and higher-level analysis. Is that correct? But I am unsure of the lifespan of an embedded controller, and am a little wary of the increased cost and the need for periodic updates. How are real-time OS upgrades handled? Are they necessary? Real-time sounds nice and all that, but in reality I do not need to process the data stream in a real-time environment. It's just the computer and the digitizer board (not a control system), and both should buffer data very nicely. Is there a raw performance difference between the two OSes available for embedded controllers?
    As for live processing of the streaming data, is this even possible? I'm not thinking very precisely about this (I would really have to just try and find out), but it seems like it could possibly work on a 2 GHz dual-core system. It would have to handle 200 MBps, but the data processing is extremely simple. For example, one thread could mark the beginnings and ends of pulses and do simple pile-up inspection. Another thread could integrate the pulses (no curve fitting or interpolation necessary, just simple addition) and store the results in a table or list. Naively, I'd have not quite 20 clock cycles per sample. It would be tight. Maybe just getting the data into the CPU cache is prohibitively slow. I'm not really even knowledgeable enough to make a reasonable guess. If it were possible, I would imagine that I would need to code it in LabWindows/CVI and not LabVIEW. That's not a big problem, but does anyone else have a good read on this? I have experience with C/C++, and some with LabVIEW, but not LabWindows (yet).
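    For a concrete picture of how simple the per-sample work could be, here is a minimal single-threaded C sketch of the processing described above: a threshold crossing marks a pulse, and plain summation integrates it. The int16 sample format and the threshold value are assumptions, and real code would also need baseline subtraction and proper pile-up handling:

        /* Per-chunk pulse finder/integrator. Printing per pulse is only
         * for illustration; at 200 MB/s the results would go into a
         * preallocated list instead. */
        #include <stdint.h>
        #include <stdio.h>

        #define THRESHOLD 100  /* ADC counts; assumed */

        void process_chunk(const int16_t *buf, size_t n) {
            static int in_pulse = 0;       /* carries over chunk boundaries */
            static long long area = 0;
            for (size_t i = 0; i < n; i++) {
                if (!in_pulse && buf[i] > THRESHOLD) {   /* pulse start */
                    in_pulse = 1;
                    area = 0;
                }
                if (in_pulse) {
                    area += buf[i];                      /* simple addition */
                    if (buf[i] <= THRESHOLD) {           /* pulse end */
                        in_pulse = 0;
                        printf("pulse area: %lld\n", area);
                    }
                }
            }
        }

        int main(void) {
            const int16_t demo[] = {0, 50, 150, 400, 900, 400, 150, 50, 0};
            process_chunk(demo, sizeof demo / sizeof demo[0]);
            return 0;
        }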
    What are my options if this system doesn't work out? The return policy is somewhat unfriendly, as 30 days may pass quickly while I struggle with the system and teach full time. I'll have some student help, and eventually a few long days over the summer. An alternative system could be built around XIA's Pixie-4 digitizer, which should mostly just work out of the box. I somewhat prefer the NI PXI-5122 solution because it's cheaper, performs better, is much more flexible, and suffers less from vendor lock-in. XIA's software is proprietary and very costly; if support ends or XIA gets bought out, I could be left with yet another legacy system. Bad.
    The Pixie-4 does the peak detection and integration in hardware (FPGAs I think) so computing requirements are minimal.  But again I prefer the flexibility of the NI digitizers.  I would, however, be very interested if data from something as fast as the 5122 could be streamed into an FPGA-based DSP module.  I haven't been able to find such a module yet.  Any suggestions?
    Otherwise, am I on the right track in general on this kind of system, or badly mistaken about some issue?  Just want some reassurance before taking the plunge.

    drnikitin,
    The reason you did not find the rearm-time spec for the 5133 is that the USB-5133 is not capable of multi-record acquisition. The rearm time is a spec for the reference trigger, and that trigger is used when fetching the next record, so every time you want to do another fetch you will have to stop and restart your task.
    To grab a lot of data, increase your minimum record size. Keep in mind that you have 4 MB of onboard memory per channel. Since you will only be able to fetch one record at a time, there really isn't a way to use streaming. When you call fetch, it will transfer the amount of data you specify to PC memory through the USB port (up to 12 MB/s for USB 2.0, ideally).
    Topher C,
    We do have a digitizer with onboard signal processing (OSP), which would be quicker than performing post-processing: the NI 5142. It is essentially a 5122 but with built-in OSP. It may be a little out of your price range, but it may be worth a look. For more information on streaming, take a look at these two links (if you haven't already):
    High-Speed Data Streaming: Programming and Benchmarks
    Streaming Options for PXI Express
    When dealing with different LabVIEW versions, it is important to note that VIs from previous versions will be compatible with newer versions, such as going from 8.0 to 8.5. Keep in mind that if you go too far back then LabVIEW may complain, but you still may be able to run your VI. If you need to take a VI from a newer version to an older one, LabVIEW has options to save your VI for older versions. It's usually just one version back, but in LabVIEW 8.5 you can save for LabVIEW 8.2 and 8.0.
    ESD,
    Here is the link I was referring to earlier about DMA transfers. DMA is actually used every time you call a fetch or read function in LabVIEW or CVI (through NI-SCOPE).
    Topher C and ESD,
    LabVIEW is a combination of a compiled language and an interpreted language. Whenever you make a change to the block diagram, LabVIEW recompiles it, so that when you hit run it is ready to execute. During execution, LabVIEW uses the run-time engine to reference shared libraries (such as DLLs). Take a look at this DevZone article about how LabVIEW compiles its block diagram (user code).
    I hope all of this information helps!
    Ryan N
    National Instruments
    Application Engineer
    ni.com/support

  • Convert PXIe-8135 controller to dual-boot Windows 7 and LabVIEW RT

    Hello. I have a PXIe-8135 controller that originally was just running Windows 7. We are trying to convert it to a dual boot system to also run LabView Real Time. (There is host computer that will run LabVIEW 2014 with the RT module, and the controller will become a target).
    I have created a FAT32 partition on the hard drive of the controller. Now, I’m trying to install the real-time OS with a USB flash drive made using the MAX utility, but I cannot boot using the USB drive for some reason. I keep getting the message “waiting for USB device to initialize”.  
    In BIOS, legacy USB support is [ENABLED] and boot configuration is set to [Windows/other OS]. I’ve tried removing the drive, waiting, and reinserting. I’ve tried two different USB drives (both 8 GB, different brands).
    I’m not sure what to do next. Apart from the USB boot issue, is converting the PXIe-8135 even possible?  I read about SATA/PATA hard drive issues with older controllers, but I don't know about this one.
    Thanks, in advance, for your help!
    -Jeff

    Per Siana's licensing comment, more information on purchasing a deployment license if you do not have one for this target can be found here.
    The RT Utility USB key is used to set up non-NI hardware with LabVIEW Real-Time, but you should not need it in this situation to convert to dual-boot (*). Try this:
    1. Since you already have a FAT32 partition created, go into BIOS setup and change the boot configuration to 'LabVIEW RT'.
    2. The system will attempt to boot LabVIEW RT, see that the partition is empty, and switch over into LabVIEW RT Safe Mode (this safe mode is built into the firmware, which is why you don't really need the USB key).
    3. The system should come up correctly and be detectable from MAX, and you can proceed with installing software.
    4. To switch back to Windows, go back to BIOS setup and choose 'Windows/Other OS'.
    (*) One area where the USB key is helpful on a dual-boot system is formatting the disk. If you want to convert the partition designated for LabVIEW RT from FAT32 to Reliance, the USB key lets you format that single partition and leave the rest of the disk untouched. If you format from MAX, the standard behavior is to format only the RT partition if one is found; if none is found, it will format the entire disk. Formatting from MAX on a dual-boot system is consequently riskier, and you could lose your Windows partition.

  • Start and Stop Trigger using PXI-6120 and DigitalStartAndStopTrigger.vi not working :-(

    Hello,
    I've been trying for a while now to get my PXI unit to capture a waveform between a start and a stop (reference) trigger, using the NI example DigitalStartAndStopTrigger.vi downloaded from the NI website. The start and stop triggers seem to be working, i.e. the VI runs and stops at the correct times, but no data is ever read from my DAQmx-compatible PXI-6120 card. So I can see the VI running around the acquisition loop, but the property node AvailSampPerChan always returns zero... this has me slightly puzzled. I thought this might just be a driver issue, so I've updated my box to the software versions below and installed the latest drivers, e.g. DCDNov07.exe (also from the NI site), but nothing has changed.
    My software as of now:
    Labview 7.1 (with the 7.1.1 upgrade applied)
    Max 4.3.0.49152
    DAQmx 8.6.0f12
    Trad DAQ 7.4.4f7
    before I updated I had the same problem but with the following versions:
    Labview 7.1 (with the 7.1.1 upgrade applied)
    Max 4.2.1.3001
    DAQmx 8.5.0f5
    Trad DAQ 6.9.3f4
    So to cut a long story short I still have the same problem with the triggers... does anybody have any ideas what is going wrong?
    To add insult to injury, the traditional DAQ example ai_start-stop_d-trig.vi was almost working correctly before I did the upgrade. It had the strange behaviour of capturing the AI0 channel but on the wrong edges (e.g. if I set Start on Rise and Stop on Fall it would do the opposite: Start on Fall and Stop on Rise).
    I'm going to leave my box doing a mass compile overnight, but I'd really like it if someone could suggest a solution or point me in the right direction.
    Many thanks,
    Mike

    Hi Graham
    I'm out of the lab today but I'll try and answer your questions as best I can...
    1) What are the values you have set for Buffer size, Rate, samples per read and post trigger Samples?
    At the moment I have all the values (e.g. sample rate, buffer size etc) unchanged apart from the ones I mentioned in my previous post (see above). I have in the past played around with changing the buffer sizes and rates in the example VI but as this appeared to have no effect on the behaviour I now have them setup as in the download.
    2) Does the program end after the stop trigger is implemented?
    Yep, if I toggle the trigger line high then low I see the program exits the read loop and the VI stops running as expected.
    3) Lastly, can you give me the details of your triggering method? Are you using a digital train of user-set digital pulses? How long is the program running?
    I'm using WriteDigChan.vi to manually toggle the first digital line of the PXI-6733 card, which is wired directly to PFI0 of the PXI-6120 card. Generally, I just start the VI running, then toggle the line high, wait a couple of seconds, and toggle it low.
    To me it all looks like it should be acquiring samples but as I said yesterday it just refuses to fill the buffer with any data (and hence no samples are read).
    Any ideas? And thanks for your help,
    Mike
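    For reference, the start/stop (reference) trigger pattern that the LabVIEW example implements can also be sketched against the NI-DAQmx C API. The device and terminal names, rate, and buffer size below are illustrative assumptions, and error checking is omitted:

        /* Finite acquisition bounded by a digital start trigger (rising
         * edge on PFI0) and a digital reference ("stop") trigger (falling
         * edge on PFI0). Names and sizes are assumptions. */
        #include <NIDAQmx.h>
        #include <stdio.h>

        int main(void) {
            TaskHandle task = 0;
            float64 data[10000];
            int32 read = 0;

            DAQmxCreateTask("", &task);
            DAQmxCreateAIVoltageChan(task, "PXI6120/ai0", "",
                                     DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                     DAQmx_Val_Volts, NULL);
            /* Finite timing; the reference trigger ends the acquisition. */
            DAQmxCfgSampClkTiming(task, "", 100000.0, DAQmx_Val_Rising,
                                  DAQmx_Val_FiniteSamps, 10000);
            DAQmxCfgDigEdgeStartTrig(task, "/PXI6120/PFI0", DAQmx_Val_Rising);
            /* DAQmx requires at least 2 pretrigger samples here. */
            DAQmxCfgDigEdgeRefTrig(task, "/PXI6120/PFI0", DAQmx_Val_Falling, 2);

            DAQmxStartTask(task);
            DAQmxReadAnalogF64(task, -1, 30.0, DAQmx_Val_GroupByChannel,
                               data, 10000, &read, NULL);
            printf("read %d samples per channel\n", (int)read);
            DAQmxClearTask(task);
            return 0;
        }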

  • Trouble capturing waveform from PXI-4472

    I'm really a very green newbie at this stuff, so bear with me...
    I've got a PXI-4472 data acquisition board and a PXI-5411 waveform generator. I've connected the arbitrary output of the 5411 to the channel 0 input of the 4472. An external oscilloscope shows a 1 V amplitude sine wave being generated.
    I created a very simple VI to show what the 4472 is capturing. It connects an NI-DAQ channel I generated to the standard "AI Acquire Waveform.vi", then out to a waveform chart, all within a while loop with a stop button. Problem is, all the waveform chart seems to show is the running average of the waveform instead of the waveform itself (a solid line, a tad above zero).
    I can hook the 4472 input channel up to a DC-out power supply, and when I vary the voltage, the waveform chart changes as well.
    So my question (whew!): What's wrong here that's not allowing me to capture a waveform from the 4472 (in turn from the 5411) and display it on my waveform chart?
    Thanks in advance for the help.

    Never mind.... it was a sample rate problem. I upped the sample rate and it came out ok.
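    One plausible explanation for the "running average" look: sampling far below the Nyquist rate folds the generated tone down to a very low alias frequency, which on a chart appears as a slowly wandering, nearly flat line. A quick C check of the folding arithmetic, with both frequencies assumed for illustration:

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double f_sig = 1000.0;  /* generated tone, Hz (assumed) */
            double f_s   = 1050.0;  /* too-low sample rate, Hz (assumed) */
            double alias = fabs(f_sig - round(f_sig / f_s) * f_s);
            printf("apparent frequency: %.0f Hz\n", alias);  /* 50 Hz */
            return 0;
        }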

  • Triggering PXI-4110 to measure 1 current value while HSDIO PXI-6552 generates a waveform

    Hi,
    Some questions about using the PXI-4110 to measure current while the PXI-6552 is generating a waveform.
    1. Let's say I need to measure current at 3 points, i.e. while the PXI-6552 is generating samples 1000, 2000, and 3500. On the edge of samples 1000, 2000, and 3500, the PXI-6552 will send a pulse via a PFI line or via a PXI backplane trigger line. My question is: is it possible to trigger the PXI-4110 (by hardware trigger or software trigger) to measure the current values at these points?
    2. Let's say I need to measure the current at 0 ms (the start of waveform generation by the PXI-6552), 1 ms, 2 ms, 3 ms, 4 ms... and so on, for 1000 points of measurement, with the code diagram as shown in the figure below. Is it possible for the VI "niDCPower Measure Multiple" to measure exactly at 1 ms, 2 ms, 3 ms...? How much time does it take to acquire one point of measurement with "niDCPower Measure Multiple"?
    Thanks for viewing this post. Your advice on hardware or software methods is much appreciated. Thanks in advance.
    Message Edited by engwei on 02-02-2009 04:24 AM
    Attachments:
    [email protected] 46 KB

    Hi engwei,
    1. Unfortunately, the 4110 does not support hardware triggering. Therefore you cannot implement direct triggering through the backplane or anything like that. However, there are a couple of possible workarounds you can try:
    a) Use software triggering: say your 6552 is generating in one while loop, and your 4110 is measuring in another while loop. You can use a software synchronization method like notifiers to send a notification to your 4110 loop when your 6552 has generated the desired sample. This method, however, will not be very deterministic, because the delay between the trigger and the response depends on your processor speed and load. If you have other applications running in the background (like antivirus software), the delay will increase.
    b) Use hardware triggering on another device: if you have another device that supports hardware triggering (such as an M-series multifunction DAQ module), you can configure that device to be triggered by a signal from the 6552, perform a very quick task (like a very short finite acquisition), and then immediately execute the DCPower VI to perform the measurement. The trigger can be configured to be retriggerable for repeated use. This will most likely have a smaller time delay than the first option, but there will still be a delay (the time it takes to perform the short finite acquisition on the M-series). Please refer to the attached screenshot for an idea of how to implement this; a rough sketch of the notifier idea from (a) follows below.
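    In text-based terms, the software-triggering idea in (a) is a producer/consumer handshake. Here is a minimal C sketch using a pthread condition variable where LabVIEW would use a notifier; everything here is illustrative, and the trigger-to-measure latency still depends on CPU load:

        /* Two-loop software sync: the generation loop signals, the
         * measurement loop waits, then takes its reading. */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
        static int sample_reached = 0;

        /* Call from the 6552 generation loop at the sample of interest. */
        static void notify_sample_reached(void) {
            pthread_mutex_lock(&lock);
            sample_reached = 1;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }

        /* The 4110 measurement loop blocks until notified. */
        static void *measure_thread(void *arg) {
            (void)arg;
            pthread_mutex_lock(&lock);
            while (!sample_reached)
                pthread_cond_wait(&cond, &lock);
            sample_reached = 0;
            pthread_mutex_unlock(&lock);
            printf("niDCPower measurement would happen here\n");
            return NULL;
        }

        int main(void) {
            pthread_t t;
            pthread_create(&t, NULL, measure_thread, NULL);
            notify_sample_reached();  /* stand-in for "sample 1000 generated" */
            pthread_join(t, NULL);
            return 0;
        }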
    2. To make your 4110 measure at specific time intervals, you can use one of the methods discussed above. As for how long it takes to acquire one measurement point, you may find this link helpful: http://zone.ni.com/devzone/cda/tut/p/id/7034
    The article is written for the PXI-4130, but the 4110 has the same maximum sampling rate (3 kHz), so the section discussing speed should apply to both devices.
    Under the Software Measurement Rate section, it states that the default behavior of the VI is to average 10 samples, which corresponds to a maximum measurement rate of 300 samples/second. However, if you configure it to do no averaging (take only 1 sample), the maximum rate of 3000 samples/second can be achieved.
    It is also important to note that your program can only achieve this maximum sampling rate if your software loop takes less time to execute than the physical sampling itself. For example, sampling at 3000 samples/second means one sample takes 1/3000 s, or about 333 microseconds. If your software execution time is less than 333 microseconds, you can achieve the maximum rate (the speed is limited by the hardware, not the software). If your software takes longer than 333 microseconds to execute, then the software loop time defines the maximum sampling rate, which will be lower than 3000 samples/second.
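    That rule of thumb reduces to a one-line calculation; in this C snippet the 200-microsecond software loop time is an assumed example, not a measured value:

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double hw_period = 1.0 / 3000.0;  /* 333 us per sample at 3 kS/s */
            double sw_loop   = 200e-6;        /* software loop time, assumed */
            /* the slower of the two limits the achievable rate */
            printf("max rate: %.0f S/s\n", 1.0 / fmax(hw_period, sw_loop));
            return 0;
        }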
    I hope this answers your question.
    Best regards,
    Vern Yew
    Applications Engineer, NI ASEAN
    Attachments:
    untitled.JPG 18 KB

  • Problems performing offset null and shunt calibration in NI PXI-4220

    I am using a 350-ohm strain gauge for the measurements, and I have already created a task in MAX. When I try to perform an offset null in the task, the program shows a progress bar and the LEDs on the 4220 board start flashing, but when the progress bar stops, MAX hangs. It has been impossible for me to perform the offset null. What can I try?
    Also, aside from the gauge parameters, what would be the correct values for the other parameters for strain measurements?

    Hello,
    Thank you for contacting National Instruments.
    Usually when this problem occurs, it is due to an incorrect task configuration or an incorrectly matched quarter-bridge completion resistor. Ensure that you have chosen the correct strain configuration; the default is Full Bridge I, so if you only have a single strain gauge, you will need to change your configuration. Also, if you are using a quarter-bridge completion resistor, make sure that it is 350 Ohm, not 120 Ohm. If the resistor is 120 Ohm, you will more than likely not be able to null your bridge.
    Please see the PXI-4220 User Manual for more information about your configuration and signal connections: http://digital.ni.com/manuals.nsf/websearch/F93CCA9A0B4BA19B86256D600066CD03?OpenDocument&node=132100_US
    Also, you can download and install the latest NI-DAQ 7.2 driver: http://digital.ni.com/softlib.nsf/websearch/50F76C287F531AA786256E7500634BE3?opendocument&node=132070_US
    This 7.2 driver displays a Signal Connections tab when you configure your DAQmx task, which shows you how to correctly connect your signals.
    Regards,
    Bill B
    Applications Engineer
    National Instruments
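    A quick way to see why a 120 Ohm completion resistor cannot be nulled against a 350 Ohm gauge is to compute the quarter-bridge imbalance directly. A minimal C sketch of the standard bridge equation, using the resistor values from the reply above:

        /* Quarter-bridge offset from a mismatched completion resistor:
         * V_out/V_ex = Rg/(Rg + Rc) - 1/2 for an otherwise balanced bridge. */
        #include <stdio.h>

        int main(void) {
            double rg = 350.0;  /* strain gauge, ohms */
            double rc = 120.0;  /* mismatched completion resistor, ohms */
            double ratio = rg / (rg + rc) - 0.5;
            /* ~244.7 mV/V of offset - far beyond any offset-null range,
             * while a matched 350 ohm resistor gives 0 mV/V. */
            printf("bridge offset: %.1f mV/V\n", ratio * 1000.0);
            return 0;
        }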

  • Memory upgrade on PXI-8105 and PXI-8106 controllers

    Hi,
    I've recently upgraded the memory of three PXIs; one with a PXI-8105 controller and two with PXI-8106 controllers. Both the 8105 and 8106 can take a maximum of 4GB (2x2GB) of DDR2-667 (PC2-5300) memory (see links below). However, on all three systems, both the BIOS and the O/S only see 3.3GB. Any idea why this might be the case?
    I've tried flashing the BIOS (v1.4 on both PXIs), but with no success.
    We're using COTS memory (i.e. not bought from NI) but I'd be hard pushed to believe that that is the cause of the problem.
    Thanks.
    Links:
    Max memory capacity of PXI-8105: http://sine.ni.com/nips/cds/view/p/lang/en/nid/202630
    Max memory capacity of PXI-8106: http://sine.ni.com/nips/cds/view/p/lang/en/nid/203442
    BIOS upgrade page: http://digital.ni.com/public.nsf/allkb/9C9362590B05CD6E86256B270082164A

    Are these controllers running 32-bit Windows? If so, this is normal, expected behavior. A 32-bit OS has only 4GB of address space, and the region just below 4GB is reserved for PCI/MMIO device addressing (BIOS, graphics, and other device apertures), so only about 3.3GB of the installed RAM remains addressable. The same thing happens on my Dell desktop PC with 4GB of memory in it.
    Jarrod S.
    National Instruments

  • How can I get the conditioned output from the SCXI-1520 in the PXI-1011 combined chassis?

    Respected Sir,
    I am using the PXI-1011 combined PXI-SCXI chassis for my application. I have placed three SCXI-1520 modules, a PXI-7352 motion card, and a PXI-6052E DAQ card in the combined chassis. As you know, the SCXI-1520 and the PXI-6052E are connected internally through the SCXI backplane, which is not user-accessible. Now I need the conditioned output of the SCXI-1520 to be used as an analog input for the PXI-7352 motion control card. How can I do that? Would an SCXI-1180 feedthrough panel solve my problem? If so, how do I connect the SCXI-1180 to the PXI-1011?
    Kindly clarify as soon as possible.
    Thanking you,
    Ramkumar. D

    Dear Sir,
    I have already placed my DAQ card at the correct place and configured it. I need some more clarification from you. I have attached my Query in .txt format.
    Kindly reply as soon as possible.
    Thanks,
    Ramkumar. D
    Attachments:
    Clarification.txt 2 KB

  • How to connect a Compact RIO to a PXI System

    Hello,
    I want to connect my cRIO-9074 to my PXI system.
    First of all my Hardware configuration:
    1. "PXI-1036" Chassis with "PXI-8101" Embedded Controller and "PXI-8231" Ethernet
    2. "cRIO 9074" Integrated 400 MHz Realt-Time Controller
    3. "NI 9144" EtherCAT slave chassis
    The PXI system is connected to my network through the Ethernet port on the "PXI-8101" controller. The "NI 9144" is connected to the "cRIO 9074" through their EtherCAT ports. That works without a problem.
    Now I want to connect my "cRIO-9074" to my PXI system through the Ethernet port on the "PXI-8231", but I can't find a way to get this to work.
    In MAX I configured the second Ethernet port on the PXI as "TCP/IP" with an IP address and subnet mask. Then I connected the cRIO and turned it on.
    But I can't find the cRIO anywhere in MAX or in a LabVIEW Real-Time project.
    When I configure the "PXI-8231" as an EtherCAT port, I can connect my "NI 9144" to it and use it in a LabVIEW project. But configured as Ethernet and connected to the "cRIO 9074", it doesn't work.
    Is there any way to get this working? Or is this not possible with my Hardware?
    I know I could connect the cRIO and the PXI to my network through a switch and then use both in a LabVIEW Real-Time project. But I want to build a mobile measurement station with as few devices as possible. If there is no other way I will use a switch, but without one would be better.
    Thanks,
    Daniel Löffler

    Hi Daniel,
    it is not possible to connect the cRIO-9074 controller to the PXI system through that port, because the cRIO RT controller cannot behave as an EtherCAT slave. A cRIO controller is always configured as an EtherCAT master, never as a slave. The NI 9144 expansion chassis is configured as a slave, so you can use it with your cRIO controller or with a PXI controller. A chain of PXI controller -> cRIO controller -> NI 9144 is therefore not possible.
    Best regards,
    ENIA
    NI Germany

  • PXI crate crashing regularly

    We have a PXI crate with an MXI link to a PC running LabVIEW 6.1 (we can't run 7.0, as we are using DCOM to talk to it from a VMS system). The PXI crate is full: the MXI card, four PXI-7344s to drive 15 motors, and three PXI-6508 digital I/O cards to read 10 absolute encoders. The motors fall into two categories: 5 run translation stages (closed loop), and 10 run rotation stages with absolute encoders (the VI then adjusts the motor stepping to get as close as possible to the desired absolute position). So the PXI cards are not all initialized the same way (fast and closed-loop for one card, slow and open-loop for two others, and one card is a mix of the two).
    The VI runs fine, but every now and then one of the PXI-7344 cards crashes. Although very inconvenient, I used to be able to recover the system from home or from my office by using MAX and a power-up reset (by the way, it would be good to have the power-up reset available as a VI that could simply be run, instead of having to go through MAX); then I restarted my VI, which includes some initialization of the cards. Over the last few days, when I do that, the PXI-6508 cards disappear from Devices and Interfaces in MAX. The only way out is to go to the crate itself, turn everything off, physically pull the cards out, and restart the system. This happens randomly, and I am working in a 24-hour facility, which means I have to fix this at any time of day or night, so I would really appreciate a solution to this.
     Thanks,


  • Write To Measurement File VI not saving data on an NI PXI-1042Q in real-time mode

    Hi, I am trying to save data with a Write To Measurement File VI, using an NI PXI-1042Q and an NI PXI-6229 DAQ card in real-time mode, but it is not working. When I run the VI without deploying it to the PXI, it does save to the file. Please find my VI attached.
    Attachments:
    PWMs.vi 130 KB

    The other problem is that the DAQmx channel only works in real-time mode, not in a stand-alone VI, using LabVIEW 8.2 and Real-Time 8.2.

  • Problems with permissions on PXI after 8.6.1 update

    My host program communicates (via TCP) with my RT program on a PXI box. It can update the RT program by a) telling it to terminate, b) FTP-sending a new version (using the Internet Toolkit), and c) rebooting the RT target (using the RTSystemReplication library). This worked fine on LabVIEW 8.2.
    Today I updated everything to 8.6.1 and NI-DAQmx 8.9. The update would not work: the host got Error 15550 - could not open file (550). Using other FTP tools, I found that I could not delete the RTEXE file on the PXI from there, either. Even another computer wouldn't do it - error 550. Permissions on the containing folder are 755 - group and others may NOT write. I can rename the folder it's in, but I may not delete that folder or the file itself. Again, I see the same situation from other FTP tools. I see no method in MAX to change the settings for FTP purposes... What do I do?
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

    FWIW, NI Tech Support has responded with a reference to this doc:
    http://digital.ni.com/public.nsf/allkb/6EDCBF0F0498814F86256B570080DD20?OpenDocument
    The doc basically says to rename the file before you delete it.  That works, for some reason. 
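    For anyone scripting that workaround, the rename-then-delete sequence can be driven from C with libcurl's FTP quote commands. The target address and the startup.rtexe path below are illustrative assumptions:

        /* Rename the RT startup executable, then delete it, via FTP
         * QUOTE commands. Host and paths are assumptions. */
        #include <curl/curl.h>
        #include <stdio.h>

        int main(void) {
            struct curl_slist *cmds = NULL;
            cmds = curl_slist_append(cmds, "RNFR /ni-rt/startup/startup.rtexe");
            cmds = curl_slist_append(cmds, "RNTO /ni-rt/startup/old.rtexe");
            cmds = curl_slist_append(cmds, "DELE /ni-rt/startup/old.rtexe");

            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL *curl = curl_easy_init();
            curl_easy_setopt(curl, CURLOPT_URL, "ftp://10.0.0.2/"); /* assumed PXI address */
            curl_easy_setopt(curl, CURLOPT_QUOTE, cmds);  /* run after login */
            curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* commands only, no transfer */
            CURLcode res = curl_easy_perform(curl);
            if (res != CURLE_OK)
                fprintf(stderr, "ftp failed: %s\n", curl_easy_strerror(res));
            curl_easy_cleanup(curl);
            curl_slist_free_all(cmds);
            curl_global_cleanup();
            return 0;
        }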
    Message Edited by CoastalMaineBird on 04-01-2009 11:25 AM
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks
