FPGA CRIO LabView - Why No Signal or Low signal?
I'm programming LabVIEW 8.2/Real-Time 8.2 on a cRIO-9002 equipped with a 9102 chassis. I have some 4-20 mA modules, ±10 V modules, and, last but MOST IMPORTANTLY, a 9233 ±5 V module.
I've programmed the FPGA and real-time VIs and have been able to acquire and scale appropriately for all modules except the 9233. The 9233 is hooked up to a VibraMetrics 7002 accelerometer. (http://www.vibrametrics.com/downloads/specsheets/Model%207002.pdf)
When it runs and an acceleration is applied, I only see a very low voltage on my display. Why is my 9233 unresponsive?
Test run at 3.5 Grms of acceleration: (please excuse the mess)
Once again, the power spectrum should not be showing levels so low.
Thanks for your help in advance,
Craig
Have you checked the signal level from the accelerometer with any other measurement devices to verify that the 9233 is measuring incorrectly? Is the voltage/current excitation for the accelerometer correct? I am assuming you are using the binary to nominal VI for converting the binary data to the real voltage. Make sure you are using the correct module type for the conversion.
I am attempting to use the Binary to Nominal VI on the values I see on your front panel, but things are not adding up. I used the 4th LSB Weight and Offset (656368 and -6548405) and the 4th 9233 unscaled binary value (-7261), converted it with the Binary to Nominal VI for the 9233, and got 0.00178252 for a nominal value, while your VI appears to display 0.0067847. I'm not sure what you are doing differently than I am.
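For anyone following along, the arithmetic I am describing is roughly the sketch below. The formula, the nanovolt units for the LSB Weight and Offset, and the sign convention are all assumptions on my part, not the actual implementation of the Binary to Nominal VI:

```python
def binary_to_nominal(binary, lsb_weight_nv, offset_nv):
    """Assumed conversion: scale the raw ADC code by the LSB weight,
    add the offset, and convert from nanovolts to volts.

    The real Binary to Nominal VI for the 9233 may differ in sign
    conventions or units; this is only illustrative.
    """
    return (binary * lsb_weight_nv + offset_nv) * 1e-9
```

Plugging in the fourth channel's values from above (binary -7261, LSB weight 656368, offset -6548405) gives roughly -4.77 V under these assumptions, which matches neither 0.00178252 nor 0.0067847, so at least one of my assumptions about the formula is off; the point is only the shape of the calculation.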
The low valued power spectrum makes sense because the analog input values are low. If we fix the analog input issue, the spectrum should take care of itself.
Without seeing your code or setup, these suggestions are obviously hypothetical. Your screenshots show the FPGA code is OK, but we have no idea how you are handling data on the host side. In the future I would avoid embedding giant screenshots in your post; attachments are just fine. Embedded screenshots are more effective if they only contain a small snippet of code.
Cheers,
Spex
National Instruments
To the pessimist, the glass is half empty; to the optimist, the glass is half full; to the engineer, the glass is twice as big as it needs to be...
Similar Messages
-
Why does signal strength vary on my Westell 327W?
My wife uses her laptop about 40 feet away from our router, in our house (conventional wood construction, no wireless devices or wireless phones in use in the house, in the suburbs).
Her signal strength varies during the day from low to very good, sometimes going from one to the other in a minute. I'm in the 'burbs, and there are not a lot of houses near mine. Can someone tell me why her signal strength should vary so much, and what I can do about it?
No, hmm...
#1
MrCurious2 wrote:
with nothing else going on in the vicinity.
Not even other radios (like police officers, fire fighters, or medical personnel) in the area in use?
#2 What Firmware is it running?
If you have no idea of what I mean, take a screen shot of least the first screen that you see in the router - and post it.
For Windows, this means:
Press the "Print Screen" key on your keyboard, go to Paint (Start ---> All Programs ---> Accessories ---> Paint), press Ctrl and V at the same time to paste, save the file as a JPEG (use "Save as type" as needed), upload it somewhere (I use tinypic.com, unless a message board allows uploading the image directly), and give out the URL of the image.
If you are the original poster (OP) and your issue is solved, please remember to click the "Solution?" button so that others can more easily find it. If anyone has been helpful to you, please show your appreciation by clicking the "Kudos" button. -
I am an Electronics Engineer, but I don't have any exposure to any computer language like C, and I do not have any exposure to LabVIEW either. I've been involved in designing and developing digital circuits like interlock units, position monitoring and display systems, etc. Now I want to work on FPGA-based circuits. Can anyone please suggest how to get started?
Hi BNarendra;
Here are some resources that can be useful, the first is an introduction to LabVIEW and the other ones are for LabVIEW FPGA.
LabVIEW
http://zone.ni.com/devzone/cda/tut/p/id/5247
LabVIEW FPGA
http://www.ni.com/swf/presentation/us/labview/lvfpga/default.htm
http://digital.ni.com/public.nsf/allkb/0697A6F4BFC6E152862570FA0072153A?OpenDocument
http://zone.ni.com/devzone/cda/tut/p/id/3261
I hope the information is useful.
Good Luck!
Francisco Arellano
National Instruments Mexico
Field Systems Engineer - Energy Segment
www.ni.com/soporte -
LabVIEW caught fatal signal - Segmentation fault
Hi,
I've just obtained the Linux version of LabVIEW 7.0 from my University for academic use on my home PC, and I am having some difficulty getting the program to run. I've installed all of the "labview70-*.rpm" files into "/opt" using the "bin/INSTALL.norpm" from the CD (I am running Gentoo and don't have RPM support).
When I try to run the program (cd /opt/lv70, ./labview), I get the splash screen, which says "Init temp resources file" in the bottom right corner. This stays on screen for about a minute, during which time my RAM usage gradually goes very high, followed by my SWAP usage when the available RAM runs out (256 MB RAM, 500 MB SWAP). This happens regardless of whether I'm logged in as a normal user or as root.
The splash screen eventually disappears and the following message is printed at the terminal:
LabVIEW caught fatal signal
7.0 - Received SIGSEGV
Reason: address not mapped to object
Attempt to reference address: 0x70126d74
Segmentation fault
I have checked the FAQs and knowledge base but can't seem to find anything that sounds similar to what is happening on my machine.
My system is as follows:
Athlon 1800+ XP
256MB RAM
Gentoo Linux 2.4.22-r2
500MB SWAP partition
24GB free space on hard disk
Cheers,
Martin
Hey Martin,
This is a new one for me. Try replacing your INSTALL and bin/INSTALL.norpm files with the ones I've attached. They work well on my Debian machine (no rpm). That way you can run "./INSTALL" instead of INSTALL.norpm (which was only meant to be a helper app, not a primary method of installation).
As you probably know, reproducing a gentoo issue may be difficult because of the extreme customization. Is there anything uncommon that you might be doing? (using a locale other than C or POSIX, using a different threading library, etc.)
Duffey
Attachments:
INSTALL 22 KB
INSTALL.norpm 2 KB -
LabVIEW blocking QT signals?
I have a LabVIEW 8.6 program that is using a DLL written in QT; the DLL listens to a TCP port for incoming messages and updates some internal data. My LabVIEW program calls into the DLL occasionally to read the internal data. The DLL works perfectly (i.e., receives data from the TCP port) with another QT program. However, it does not work at all with my LabVIEW program.
I've attached a debugger to the DLL and can see calls from LabVIEW going into it -- my function for getting the internal data is being called and I can step through it. The code that gets the data from the TCP is never called though; it looks like the signal for incoming data on the TCP port is never triggered.
I know this sounds like a QT issue but the DLL works perfectly with another QT program. Unfortunately, it fails miserably with LabVIEW.
One theory:
- The event loop is not running when LabVIEW calls the DLL
- In the QT DLL's run() function, I call socket->waitForDisconnected(). Perhaps the DLL is not processing incoming events because the event loop is not running? If I call exec() to start the event loop, LabVIEW crashes (LabVIEW 8.6 Development System has encountered a problem and needs to close."):
AppName: labview.exe AppVer: 8.6.0.4001 ModName: qtcored4.dll
ModVer: 4.5.1.0 Offset: 001af21a
- Perhaps when I call the DLL from another QT program, that program's event loop is allowing for the TCP signal to be seen by the DLL. Unfortunately, kicking off the event loop in the DLL takes down LabVIEW.
Any thoughts on how to keep signals running in the DLL when LabVIEW is the calling program?
(Cross-posted on http://stackoverflow.com/questions/1267804/labview-blocking-qt-signals)
Hi,
Calling a DLL from LabVIEW can be tricky. Qt is based on C++, and LabVIEW does not handle C++ exports well; a C++ DLL needs to be wrapped so its calling conventions match what LabVIEW expects. There are some application notes you should read; please search for them here or at lavag.org.
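To illustrate the general fix, independent of Qt or LabVIEW: the library should own the thread that services the connection, so the host only ever makes quick, non-blocking reads and never has to pump an event loop. A minimal Python sketch of that pattern, with all names my own invention:

```python
import socket
import threading

class Listener:
    """Background receiver thread that owns the TCP connection.

    This mirrors what the DLL needs: event handling runs in a thread
    the library itself starts, so the host application (LabVIEW in the
    thread above) only ever calls read_latest(), which returns at once.
    """

    def __init__(self, host="127.0.0.1"):
        self._latest = b""
        self._lock = threading.Lock()
        self._srv = socket.socket()
        self._srv.bind((host, 0))       # pick any free port
        self._srv.listen(1)
        self.port = self._srv.getsockname()[1]
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # The library-owned thread: accept one client and keep the
        # internal data updated as messages arrive.
        conn, _ = self._srv.accept()
        while True:
            data = conn.recv(1024)
            if not data:
                break
            with self._lock:
                self._latest = data

    def read_latest(self):
        # The only call the host ever makes; never blocks on the network.
        with self._lock:
            return self._latest
```

In the Qt DLL, the analogue would be starting a QThread with its own event loop when the DLL loads, rather than calling exec() on the thread LabVIEW calls in on.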
Regards. -
I am running the LabVIEW HTTP server on a Linux server. When the traffic is too high, LabVIEW crashes and I get the following error:
LabVIEW caught fatal signal
6.0.2Received SIGSEGV
Reason: address not mapped to object
Attempt to reference address: 0x0
See LabVIEW_Failure_Log.txt for more information.
The LabVIEW_Failure_Log.txt looks like this:
#Date: Mon, Feb 18, 2002 02:48:30 PM
#Desc: LabVIEW caught fatal signal
6.0.2Received SIGSEGV
Reason: address not mapped to object
Attempt to reference address: 0x0
#RCS: unspecified
#OSName: Linux
#OSVers: 2.4.16
#AppName: labview
#AppKind: FDS
0x40257840 - __libc_sigaction() + 0x1f0
0x083f2fbc - Exec() + 0x280
0x082a7fb5 - ExecWrapper__FPv() + 0x41
0x4021aeca - pthread_detach() + 0x562
0x402fe3ca - __clone() + 0x3a
Anyone know what this means and how it can be fixed?
I am getting a similar error with LV70 Linux when
trying to start it in a WRQ window. Is there
an answer to the question?
LabVIEW caught fatal signal
- Received SIGSEGV
Reason: address not mapped to object
Attempt to reference address: 0x50f
Segmentation fault
thanks,
Dave
Fermilab -
Why is the Wi-Fi signal of my iPhone 4 worse than a simple Nokia's?
Why is the Wi-Fi signal of my iPhone 4 worse than a simple Nokia's?
I suspect you might be accidentally ending the call with your face. Might want to check out the 110 page proximity sensor thread.
-
Why are my speeds lower than my brother's BT line w...
My brother has had BT for months and his speeds range from 12 to 20 Mbit/s, and he lives on the other side of town. Now I signed up for the exact same unlimited broadband package from BT, and my speeds are barely topping 8 Mbit/s. I expected my speeds to be closer to 20 Mbit/s, as I'm a lot closer to the exchange in my area. Why are my speeds lower than my brother's? We are both on non-fibre unlimited copper broadband.
Welcome to the BT community forum, where customers help customers and only BT employees are the forum mods.
In order for the forum members to help, please post the ADSL stats from your router; you may need to click 'show detail' to get all the stats. (If you have a Hub, enter 192.168.1.254 in your browser and navigate to ADSL; if a HH4/5, go to Troubleshooting then Logs and look for the two lines together from when the hub last connected to the internet, which show your connection speed and noise margin; if a Netgear, enter 192.168.0.1.) Then run the BT speed tester (Mac users may have problems). When the first test completes, run the diagnostic test and post the results (do not reset the router).
Can you also enter your phone number and post the results? Remember to delete the number: https://www.btwholesale.com/includes/adsl/main.html
are you connected directly via a filter to the NTE5 master or test socket or to somewhere else? Is the master the only phone socket in your home?
Have you tried the quiet line test? Dial 17070, option 2 - you should hear nothing. It is best done with a corded phone; with a cordless phone you may hear a dull hum, which is normal.
Someone may then be able to offer help/assistance/suggestions to your problem
If you like a post, or want to say thanks for a helpful answer, please click on the Ratings star on the left-hand side of the post.
If someone answers your question correctly please let other members know by clicking on ’Mark as Accepted Solution’. -
LabVIEW caught fatal signal, reason unknown
Hi, I have installed LabVIEW 7 on Fedora Core 5 and it ran well until yesterday; now when I start LabVIEW, after the welcome picture, nothing more happens.
So I typed the command ``labview" at the shell prompt. The shell responded with:
[root@silver ~]# labview
LabVIEW caught fatal signal
7.0 - Received SIGSEGV
Reason: unknown
Attempt to reference address: 0x0
Segmentation fault
I am very puzzled, because it worked well before yesterday. All I did was install the ATI Radeon driver on my computer.
Can any one help me?
thx
Forrest Sheng Bao, Ph.D.
Assistant Professor, Dept. of Electrical & Computer Engineering
University of Akron, Akron, OH, USA
https://sites.google.com/site/forrestbao/
This problem is reproducible; my application crashes every time after 40-48 h of running.
-- It looks like you have the 7.1.1 maintenance release installed, and I am assuming that you have seen this KnowledgeBase article (http://digital.ni.com/public.nsf/websearch/A2D53C8E0D88380B86256EBD00...) about similar installation errors. However, this KnowledgeBase article also offers some steps for a workaround. Have you tried these? --
Yes, I tried it. We had this problem at the beginning and we corrected it, but it is not the same one, because without this release you can't launch LabVIEW at all, and my application runs without problems for 40-48 h. Have you heard of any function not supported by LabVIEW under Linux? It may help me to go on debugging. -
Using FPGA and Labview without RIO board.
Dear Sir
I am a student. I want to connect my own FPGA, write a program in LabVIEW, and then transfer it to the FPGA. I do not want to buy RIO. Is it possible to work that way without purchasing RIO?
Thanks and Regards
Hooovahh wrote:
Yes, NI supports programming FPGAs using LabVIEW without RIO boards. I've been doing it for years and it works great. Here are a few non-RIO products I've used LabVIEW FPGA with.
PCI-E Version
PXI Version
USB Version
The PCIe and PXI cards you mentioned are specifically RIO.
But you managed to program a USB-6225 with LV FPGA? I'm not even sure there is an FPGA on there to program. It is an M series DAQ board.
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines -
Good morning, everyone.
I have been working for a while with a cRIO-9022 and a chassis that includes an FPGA, and when the FPGA compiles I get some errors that are impossible for me to trace, since the variables are renamed at the FPGA level.
The failure is the following:
LabVIEW FPGA: The compilation failed due to a Xilinx error.
Details:
ERROR:HDLCompiler:69 - "\NIFPGA\jobs\eeQj0Kk_r41egX9\NI_Munge_Pot_activa_y_aparente_SubVI_vi_colon_Clone1.vhd" Line 640: <res00000029_wo> is not declared.
ERROR:HDLCompiler:192 - "\NIFPGA\jobs\eeQj0Kk_r41egX9\NI_Munge_Pot_activa_y_aparente_SubVI_vi_colon_Clone1.vhd" Line 640: Actual of formal out port iinitouttoreshold cannot be an expression
ERROR:HDLCompiler:69 - "\NIFPGA\jobs\eeQj0Kk_r41egX9\NI_Munge_Pot_activa_y_aparente_SubVI_vi_colon_Clone1.vhd" Line 654: <res0000002b_wo> is not declared.
ERROR:HDLCompiler:192 - "\NIFPGA\jobs\eeQj0Kk_r41egX9\NI_Munge_Pot_activa_y_aparente_SubVI_vi_colon_Clone1.vhd" Line 654: Actual of formal out port iinitouttoreshold cannot be an expression
ERROR:HDLCompiler:854 - "\NIFPGA\jobs\eeQj0Kk_r41egX9\NI_Munge_Pot_activa_y_aparente_SubVI_vi_colon_Clone1.vhd" Line 82: Unit <vhdl_labview> ignored due to previous errors.
VHDL file \NIFPGA\jobs\eeQj0Kk_r41egX9\NI_Munge_Pot_activa_y_aparente_SubVI_vi_colon_Clone1.vhd ignored due to errors
--> Total memory usage is 208304 kilobytes
Number of errors: 5 (0 filtered)
Number of warnings: 3 (0 filtered)
Number of infos: 0 (0 filtered)
Process "Synthesize - XST" failed
Start Time: 17:23:45 End Time: 17:34:17 Total Time: 00:10:31,834
Thanks in advance
Regards
Pablo Matatagui -
I have just gotten my cRIO with a analog module (9201) and a digital output module (9472).
When I place an Analog Input Module on my block diagram and double click it, I can choose from the following inputs:
Channel 0 through Channel 7
AND
Chassis Temperature
I have tried to find any documentation for this "Chassis Temperature" but so far without success.
If I read this input and feed it straight to an indicator, I get values in the range of 120 to 150. Now, without more information, this number is rather useless. Does anyone know how to convert this data to a temperature (in either the Celsius or the Fahrenheit scale)?
Thank you!
Project Engineer
LabVIEW 2009
Run LabVIEW on WinXP and Vista system.
Used LabVIEW since May 2005
Certifications: CLD and CPI certified
Currently employed.
I still did not find any documentation for this "feature", but I lucked out and found the following:
in the folder: \labview 7.1\examples\FPGA\CompactRIO\cRIO-910x\
you should have a file called:
cRIO-910x support files.llb
in this there is a "convert to temperature" VI.
To get the raw data as a temperature, it looks like all you have to do is divide the binary data by 4, and the answer will be in Celsius.
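In text form, that conversion amounts to the following sketch (the function names are mine; the divide-by-four scale factor is taken from the support-files VI described above):

```python
def chassis_temp_celsius(raw):
    """Convert the raw chassis-temperature reading to degrees Celsius.

    Assumption, per the cRIO-910x support-files VI: the raw binary
    value is simply four counts per degree Celsius.
    """
    return raw / 4.0

def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion."""
    return c * 9.0 / 5.0 + 32.0
```

So raw readings in the 120-150 range would correspond to about 30-37.5 °C, which is a plausible chassis temperature.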
Project Engineer
LabVIEW 2009
Run LabVIEW on WinXP and Vista system.
Used LabVIEW since May 2005
Certifications: CLD and CPI certified
Currently employed. -
Hi there:
I have a problem using the cRIO 9401 as an input for RS-232. The very same code works perfectly fine on the cRIO 9411, but on the 9401 it shifts bits. Is there any configuration that should be performed, or anything else?
I will greatly appreciate any help.
Oleg Finodeyev
Hey Vishal,
In order to set a loop time, you should make use of the "loop timer" VI, in the first frame of a sequence structure, and then the rest of the code to execute in the second frame.
For more information on FPGA programming, I recommend that you consult our online LabVIEW 8 FPGA training materials, available here.
Also, ordinarily the 40 MHz clock is suitable, but it is also possible to create a derived clock. For more information on that, please consult the LabVIEW FPGA Module Help file, or this page.
Best regards,
Message Edited by SamboNI on 06-13-2007 06:14 PM
-Sam F, DAQ Marketing Manager
Learn about measuring temperature
Learn how to take voltage measurements
Learn how to measure current -
Using FPGA cRio and 9853 for a J1939 CAN
We are currently trying to interface with a J1939 CAN network using a cRIO 9012, a 9104 chassis, and the 9853 module. We have LabVIEW 8.2. I stumbled across the examples for LabVIEW 8.6 and newer. Is there any particular solution/road we should be going down?
I have experience using LabVIEW with a cDAQ, but not much experience with Real-Time and FPGA.
I got the thermocouple module to work using FPGA, so the RIO itself and the module do work.
Thanks for any guidance/help you can send my way.
The way I would approach J1939 on the cRIO would depend on which features of J1939 you need for your app. If you are just reading or writing broadcast messages that fit in a single 8-byte frame, it will be just as straightforward as standard CAN. Basically, in this case you would read frames in the FPGA code and transmit them to the real-time portion, where you can convert them to channel data using a CAN database. Writing would be just the opposite direction.
If you are using requests for PGN data, let's say to retrieve DTC codes, or reading/writing data using advanced diagnostic messages (DM14/15/16, for example), things get a bit more difficult, but not too bad. If you expect to transmit or receive multi-frame packets and need the transport protocol of J1939, then things can get complex fast.
DMC recently developed a set of J1939 protocol drivers for LabVIEW which are based on the NI-CAN channel API. The drivers were written in LabVIEW just like the NI frame example, but provide functions at a layer above the base CAN channel API layer. This allows the simultaneous capture of data from both J1939 packets which are simple broadcast CAN frames, or more complex transport using BAM, including extended data frames.
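For a flavor of what the frame-parsing layer does, here is a hedged sketch of splitting a 29-bit J1939 identifier into priority, PGN, and source address. The function name and host language are mine; on a cRIO this logic would live in the RT VI or a driver layer like the ones described above:

```python
def j1939_fields(can_id):
    """Split a 29-bit J1939 CAN identifier into (priority, PGN,
    source address), per the J1939-21 field layout."""
    priority = (can_id >> 26) & 0x07
    edp_dp = (can_id >> 24) & 0x03      # extended data page + data page
    pdu_format = (can_id >> 16) & 0xFF
    pdu_specific = (can_id >> 8) & 0xFF
    source_addr = can_id & 0xFF
    if pdu_format < 240:
        # PDU1: the PS byte is a destination address, not part of the PGN
        pgn = (edp_dp << 16) | (pdu_format << 8)
    else:
        # PDU2: broadcast; the PS byte is the group extension
        pgn = (edp_dp << 16) | (pdu_format << 8) | pdu_specific
    return priority, pgn, source_addr
```

For example, the well-known EEC1 broadcast frame with identifier 0x0CF00400 decodes to priority 3, PGN 61444, source address 0.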
I suppose something like this could be written into the realtime of the cRIO, and just pass frame data back and forth using some very basic code on the FPGA. -
FPGA Crio encoder SSI protocol
Hello everyone
Does anybody work with SSI-protocol encoders and cRIO FPGA? I wrote a simple program that tries to implement this protocol on the cRIO FPGA, but I obtain chaotic data. I use an RS-422-to-TTL converter to transform the encoder signal (RS-422) into a TTL signal for the high-speed digital NI 9401. Could anyone help me? Any suggestions?
Thank you
francesco
Solved!
Go to Solution.
Hi Francesco,
take a look at this link.
Bye
Andrea
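One more thought on the chaotic data, in case the link doesn't solve it: many absolute SSI encoders transmit Gray code, MSB first, so raw bits that are never converted to binary (or are sampled on the wrong clock edge) look like noise. A rough host-side sketch of the decode, with a hypothetical bit ordering - check your encoder's datasheet:

```python
def gray_to_binary(g):
    """Convert a Gray-coded value to plain binary by folding each
    higher bit down with XOR."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

def decode_ssi_frame(bits, gray=True):
    """Assemble sampled SSI data bits (assumed MSB first) into a
    position value, optionally undoing the Gray coding."""
    value = 0
    for bit in bits:
        value = (value << 1) | (bit & 1)
    return gray_to_binary(value) if gray else value
```

If positions jump around by large, non-monotonic amounts as you turn the shaft slowly, a missing Gray-to-binary step (or sampling on the wrong NI 9401 clock edge) is a likely culprit.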