Profile Performance in LabVIEW vs. Performance Meter in Vision Assistant: results don't match
Hi everyone,
I ran into a strange problem with performance timing between these two measurement tools.
Here is my test:
-Used the built-in Bracket example provided with Vision Assistant. It uses two pattern matches, one edge-detection algorithm, and two calipers (one for calculating a midpoint, the other for finding the angle between three points).
-When I ran the script NI provides for this in Vision Assistant, the average inspection time was 12.45 ms (even this varies between 12 and 13 ms; my guess is this small variation is due to my CPU/processing load).
-I then converted the script to a VI and used Profile Performance in LabVIEW, and surprisingly it shows far more than expected, almost ~300 ms (at first I thought it was because of the "all rotated" search, etc., but none of that makes sense to me here).
Now my questions are:
-Are the algorithms used in both tools the same? (I assumed they are.)
-IMAQ Read Image and Vision Info takes more than 100 ms in LabVIEW, which isn't counted in Vision Assistant. Why? (I thought the template image might already be loaded into cache; am I right?)
-What about IMAQ Read File? (It isn't counted in Vision Assistant either; in LabVIEW it takes around 15 ms.)
-Same for pattern matching: in Vision Assistant it takes around 3 ms (this is also not consistent), while in LabVIEW it takes almost three times that (around 15 ms).
-Is this a bug, am I missing something, or is this expected behavior?
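One way to see why two profilers can disagree: a one-shot run pays file-load and cache warm-up costs, while a per-image profiler typically reports steady-state time. The sketch below is plain Python and purely illustrative (the sleeps stand in for hypothetical file-load and match costs, not NI's implementation); it separates the two measurements:

```python
import time

def profile(step, n_iter=20):
    """Time one processing step, separating the first call (which may
    include one-time costs such as file loads) from the steady state."""
    t0 = time.perf_counter()
    step()                                   # first call: one-time costs included
    first = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(n_iter):                  # steady state: what a per-image profile sees
        step()
    steady = (time.perf_counter() - t0) / n_iter
    return first, steady

_cache = {}

def load_and_match():
    """Hypothetical step: load a template once, then match quickly."""
    if "template" not in _cache:
        time.sleep(0.05)                     # stands in for reading the template file
        _cache["template"] = object()
    time.sleep(0.001)                        # stands in for the match itself

first, steady = profile(load_and_match)
print(first > steady)                        # the first run carries the load overhead
```

If LabVIEW's profiler counts the first (cold) iteration and Vision Assistant's Performance Meter averages warm iterations, numbers like 100 ms vs. 3 ms can both be "correct" for the same code.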
Please find attachments below.
-Vision Assistant-v12-Build 20120605072143
-Labview-12.0f3
Thanks
uday,
Please mark the solution as accepted if your problem is solved, and help the author by giving kudos.
Certified LabVIEW Associate Developer (CLAD) Using LV13
Attachments:
Performance_test.zip 546 KB
Hmm, Bruce, thanks again for the reply.
-When I first read your reply, I was okay with it. But after reading it multiple times, I realized that you didn't look at my code and explanation first.
-I have added code and screenshots of the profile in both VA and LabVIEW.
In both Vision Assistant and LabVIEW:
-I am loading the image only once.
It is accounted for in LabVIEW but not in VA, because the image is already in cache. But what about the time to put the image into cache?
I do understand that when capturing the image live from a camera, things are completely different.
-Loading the template image multiple times?
This is where I was very confused, because I hadn't even considered it. I am well aware of that task.
-Run Setup Match Pattern once?
Sorry, so far I haven't seen any example that does pattern matching on multiple images and runs Setup Match Pattern every time. But that time is negligible, so I wouldn't mind.
-Loading images for processing and loading a different template for each image?
You are completely mistaken here, and I don't see how it relates to my specific question.
Briefly explaining again:
-I open an image in both LabVIEW and VA.
-I create two pattern match steps and calipers (negligible).
-The pattern match step in VA shows a longest time of 4.65 ms, whereas IMAQ Match Pattern showed me 15.6 ms.
-I am convinced about the IMAQ Read and Vision Info timing, because it only counts in the initial phase when running a multi-image inspection.
But I am running it only once, so Vision Assistant should show that time as well, shouldn't it?
-I do understand that LabVIEW has many more features for parallel execution and much else compared with Vision Assistant.
-Yes, about that 100 ms versus 10 ms, I completely agree. I take the Vision Assistant profile timing as the ideal values (correct me if I am wrong).
-I especially like the last line: you cannot compare the speeds of the two methods.
Please let me know if I am thinking about this completely wrongly, or am at least somewhat on the right path.
Thanks
uday,
Similar Messages
-
How does the Performance Meter in Vision Assistant calculate the estimated time?
Hello...
I want to know how the Performance Meter in Vision Assistant calculates the estimated time of each step.
I mean, what is the concept behind it?
Please, I need to know.
Thanks in advance.
Hello,
I have performed the same operations in Ni Vision Assistant and in LV (without OCR):
The VI is attached.
The values are very similar. Maybe I do not understand your problem correctly. Please explain, if I missed the point.
Best regards,
https://decibel.ni.com/content/blogs/kl3m3n
"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
Attachments:
time Estimation_2010.vi 22 KB
NIVISION.png 150 KB -
Rewrite template in Vision Assistant for image matching
Hey guys, I'm using LabVIEW to detect an image and record its x-y coordinates.
Does anybody know how I can automatically change, at runtime, the template used for matching?
I get my template from the Vision Assistant block.
Please help.
Kevin
Solved!
Go to Solution.
Hey Kevin,
If you configure the Vision Assistant block and place it on your block diagram, you can then right click on it and select "Open Front Panel."
This shows you the LabVIEW code that the Express VI is running behind the scenes. From here you can find any parameters you have set, put down as constants. So for example, the file path you chose for the template is on the block diagram as a file path constant.
You can right click on this and change it to a control, and then link this control to the connector pane.
Reference: http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/building_connector_pane/
After saving this VI, and closing it, you will notice some changes to your main VI. Firstly the vision assistant block will have changed colour to yellow, and secondly, it will have one more input added to the bottom (if you drag it down) which is for the control you set in the subVI.
The attached example then adds a case structure to select the file path you want to use as a template during runtime. It would be better practice to use an event structure, but hopefully this works for demonstrative purposes.
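The same idea in text form, as a hedged sketch: once the template path is a runtime input rather than a constant, template selection reduces to a lookup. The names and paths below are hypothetical placeholders, not taken from the attached VI:

```python
# Hypothetical template registry -- these names and paths are placeholders,
# standing in for the file-path control wired to the Express VI's connector pane.
TEMPLATES = {
    "bracket_left":  "templates/bracket_left.png",
    "bracket_right": "templates/bracket_right.png",
}

def select_template(name):
    """Return the template path for this inspection, or fail loudly."""
    try:
        return TEMPLATES[name]
    except KeyError:
        raise ValueError(f"no template named {name!r}") from None

print(select_template("bracket_left"))   # templates/bracket_left.png
```

A case structure keyed on a string, as in the attached example, plays the same role as this dictionary lookup.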
Let me know if this helps out Kevin.
Thanks,
Justin, Applications Engineer
Attachments:
Vision example.PNG 40 KB -
NI Vision Builder vs. LabVIEW RT + NI Vision Assistant
Hello
I'm designing a vision application which should be able to run stand-alone (EVS-1463RT). The app will have approximately up to 30 inspection states (read gauge value, check for the presence of specific objects, etc.). The other requirements are the ability to communicate with non-NI devices via TCP/IP and to log pictures to FTP.
Now I'm thinking about two possible solutions:
1. Create an inspection with NI Vision Builder AI
2. Create a LabVIEW RT app and build the inspection states using NI Vision Assistant
I have to say that the first solution is not my favorite, because I tried to implement the TCP/IP communication in NI Vision Builder and it worked, but use of the server is limited. On the other hand, building the inspection is "easy".
The second solution has the following advantages for me: better control of the app, perhaps better possibilities to optimize the code, and implementing my own TCP/IP server. My biggest concern is using NI Vision Assistant to generate the inspection states.
In conclusion, I have to say that I'm not experienced in vision apps, but LV RT is no problem for me.
Thanks for any opinions
Jan
Hi Jan,
> I have to say that the first solution is not my favorite because I tried to implement the TCP/IP communication in NI Vision Builder and it worked, but use of the server is limited.
Could you give more feedback on this point? What do you mean by "use of the server is limited"? More precise feedback and suggestions would help us improve the functionality.
What I would recommend you look into is the use of the VBAI API. You can find examples in <Program Files>\National Instruments\<Vision Builder AI>\API Examples\LabVIEW Examples
This feature allows you to run a VBAI inspection within your LabVIEW application and retrieve results that you can send using a TCP implementation of your own in LabVIEW, without having to use the VBAI TCP functionality.
You retain the configuration features of the Vision part of the application, and can add extra code in LabVIEW.
The API functions basically allow you to open a VBAI inspection file, run it synchronously or asynchronously, and retrieve images and results.
As you mentioned, the other solution is to implement your state diagram in LabVIEW and use the Vision Assistant Express VI in different states. What VBAI gives you that Vision Assistant doesn't is the pass/fail limits for each step (and the state diagram).
Best regards,
Christophe -
Performance parameters of the meter reading result entry
Hi Guys,
Can anyone explain the parameters below in the meter reading result entry?
1. "No Entry of Tech. MRs at Installation Outside Installation"
If we enable this option, it should not allow entering meter readings for the technical installation.
But it is accepting them. How? Can anyone explain?
2. Turbo Booster as well.
Thanks in advance.
Regards,
Oven
Edited by: Richard oven on Feb 18, 2009 11:41 AM
Hi Oven,
I hope the following information is useful:
No Entry of Tech. MRs at Installation Outside Installation ->
As a rule, technical meter readings at installation are entered using the appropriate transactions (Full Installation, Technical Installation).
In addition, it is possible to enter meter readings for technical installation before the installation occurs. This is done by uploading meter readings using IDoc ISU_MR_UPLOAD or BAPI. In exceptional cases you can also enter meter readings manually using single entry. The meter reading is then included once technical installation has been executed.
If you select this field, it is not possible to import meter readings at installation via upload or single entry.
Turbo Boosting ->
Activates accelerated processing.
Transactions and background jobs of the meter reading result entry create a high database load. Excessive accesses to the database tables EABL, EABLG, V_EGER_H, ETDZ, EASTS, and others affect system performance; this option improves performance.
Kind Regards
Olivia -
Image Processing with LabVIEW Vision Assistant - read the NI Vision Assistant tutorial for free
The attachment is more complete, since I cannot post that much text here.
http://shixinhua.com/ (Shi Xinhua's homepage)
http://shixinhua.com/imganalyse/2013/09/762.html (full text)
"A Practical Tutorial on Image Processing Based on Vision Assistant"
Chapter 2: Interface and Menus 67
Section 1: Startup Welcome Screen 67
http://shixinhua.com/imganalyse/2013/01/278.html
Section 2: Function Interfaces 73
http://shixinhua.com/imganalyse/2013/01/280.html
Acquire Images interface 73
Browse Images interface 77
Process Images interface 79
Section 3: File menu 86
http://shixinhua.com/imganalyse/2013/01/283.html
Open Image 86
Open AVI File 87
Save Image 87
New Script 91
Open Script 92
Save Script 92
Save Script As 93
Acquire Image 93
Browse Images 93
Process Images 94
Print Image 94
Preferences 95
Exit 97
Section 4: Edit menu 97
http://shixinhua.com/imganalyse/2013/01/284.html
Edit Step 97
Cut 97
Copy 98
Paste 98
Delete 98
Section 5: View menu 98
http://shixinhua.com/imganalyse/2013/01/286.html
Zoom In 99
Zoom Out 99
Zoom 1:1 99
Zoom to Fit 99
Section 6: Image menu 99
http://shixinhua.com/imganalyse/2013/01/288.html
Histogram 101
Line Profile 101
Measure 101
3D View 101
Brightness 102
Set Coordinate System 102
Image Mask 102
Geometry 102
Image Buffer 102
Get Image 102
Image Calibration 102
Image Correction 102
Overlay 102
Run LabVIEW VI 103
Section 7: Color menu 103
http://shixinhua.com/imganalyse/2013/01/289.html
Color Operators 104
Color Plane Extraction 104
Color Threshold 104
Color Classification 105
Color Segmentation 105
Color Matching 105
Color Location 105
Color Pattern Matching 105
Section 8: Grayscale menu 105
http://shixinhua.com/imganalyse/2013/01/291.html
Lookup Table 107
Filters 107
Gray Morphology 107
Gray Morphological Reconstruction 107
FFT Filter 107
Threshold 107
Watershed Segmentation 107
Operators 107
Conversion 107
Quantify 108
Centroid 108
Detect Texture Defects 108
Section 9: Binary menu 108
http://shixinhua.com/imganalyse/2013/01/293.html
Basic Morphology 109
Adv. Morphology 109
Binary Morphological Reconstruction 109
Particle Filter 109
Binary Image Inversion 110
Particle Analysis 110
Shape Matching 110
Circle Detection 110
Section 10: Machine Vision menu 110
http://shixinhua.com/imganalyse/2013/01/295.html
Edge Detector 111
Find Straight Edge 112
Adv. Straight Edge 112
Find Circular Edge 112
Max Clamp 112
Clamp (Rake) 112
Pattern Matching 112
Geometric Matching 112
Contour Analysis 112
Shape Detection 112
Golden Template Comparison 113
Caliper 113
Section 11: Identification menu 113
http://shixinhua.com/imganalyse/2013/01/296.html
OCR/OCV 114
Particle Classification 114
Barcode Reader 114
2D Barcode Reader 114
Section 12: Tools menu 114
http://shixinhua.com/imganalyse/2013/02/297.html
Batch Processing 114
Performance Meter 120
View Measurements 120
Create LabVIEW VI 121
Create C Code 121
Create .NET Code 122
Activate Vision Assistant 123
Section 13: Help menu 124
http://shixinhua.com/imganalyse/2013/02/298.html
Show Context Help 124
Online Help 125
Solution Wizard 125
Patents 125
About Vision Assistant 125
Chapter 3: Acquiring Images 126
Section 1: Acquire Image 127
http://shixinhua.com/imganalyse/2013/02/299.html
Section 2: Acquire Image (1394, GigE, or USB) 128
http://shixinhua.com/imganalyse/2013/02/301.html
Main tab 129
Attributes tab 138
Section 3: Acquire Image (Smart Camera) 143
http://shixinhua.com/imganalyse/2013/02/302.html
Section 4: Simulate Acquisition 145
http://shixinhua.com/imganalyse/2013/02/304.html
Chapter 4: Browsing Images 150
Wetzer wrote:
You can think of the purple "Image" Wire as a Pointer to the Buffer containing the actual Image data.
That's what I thought too, but why would it behave differently during execution highlighting? (sorry, I don't have IMAQ, just curious )
LabVIEW Champion. Do more with less code and in less time. -
Modifying VI - Vision Assistant
I have images that consist of an area of particles and an area of no particles. I am trying to fit a circle to the edge between the regions where there are and are not particles. I want to use the Find Edge tool, and I want to find the pixel where this transition takes place for every row of pixels. For example, I want to draw a horizontal line that gives me the location of the edge, then move down one row and repeat. I have tried using Find Circle Edge, but since I am trying to fit a circle to an edge that isn't well defined, I need a lot more data points to average over. I figure there is a way to modify the VI to perform the process I described above. Any help would be much appreciated. I have attached the images to give you a better idea of what I'm trying to do.
Attachments:
half circle.JPG 554 KB
Hi Windom,
If the find circle edge is not working for you, I would suggest thresholding the image. Then you could use the morphology functions (such as Close, Fill Holes, and Erode) to further manipulate the image to get a stronger edge between the areas of particles and not particles.
You can use a For Loop (initialized to start looking at the top of picture) and have it iterate vertically down the picture with the Edge Detector. You can do that by changing the ROI Descriptor for the line you are detecting edges with, and then you can read the Edge Information out of the VI. These all need to be checked in the "Select Controls" menu, which is found at the bottom right of the Vision Assistant window.
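The row-by-row approach above can be sketched outside LabVIEW as well. The following Python/NumPy sketch is an illustrative stand-in, not the IMAQ implementation: it finds the first background-to-particle transition per row of a thresholded image, then fits a circle to those points with a simple algebraic (Kasa) least-squares fit:

```python
import numpy as np

def row_edges(binary):
    """For each row of a binary image (particles = 1, background = 0),
    return the column of the first 0 -> 1 transition, or -1 if none."""
    edges = []
    for row in binary:
        d = np.diff(row.astype(np.int8))
        idx = np.flatnonzero(d == 1)
        edges.append(int(idx[0]) + 1 if idx.size else -1)
    return edges

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b), radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# half-circle of edge points around centre (5, 5) with radius 3,
# like the poorly defined arc in the attached image
theta = np.linspace(0.0, np.pi, 50)
cx, cy, r = fit_circle(5 + 3 * np.cos(theta), 5 + 3 * np.sin(theta))
```

Because every row contributes a point, the fit averages over many more samples than the three-line Find Circle Edge default, which is exactly what the question asks for.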
I hope this helps, let me know if you need any further clarification.
Best Regards,
Nathan B
Applications Engineer
National Instruments
Attachments:
SelectROI.JPG 12 KB
Edge Info.JPG 10 KB
SelectControls.JPG 14 KB -
So here's what I have so far: I have a LabVIEW program to interface with my microscope stage and autofocus. I now want to interface the camera using Vision Assistant.
The procedure I want the program to perform is taking these CR-39 detectors, placing them on the stage, and having the camera image the detector frame by frame. I want the program to go to the very top corner of the detector on the stage. Then the camera would autofocus (if needed) and take an image, taking an image for every micron of the detector on that row. Then it moves to the next row, and so on, until it has successfully imaged the whole detector. Is anyone well versed in Vision Assistant and willing to give some advice on this? It would be greatly appreciated.
The purpose is to eliminate the previous method of acquiring images by manually focusing on the tracks created by the particles. I want this program to run itself without a user being present, so pressing a button would send the program into autopilot and it would perform the procedure.
What are CR-39 detectors?
Allyl Diglycol Carbonate (ADC) is a plastic polymer. When ionizing radiation passes through CR-39, it breaks the chemical bonds of the polymer and creates a latent damage trail along the trajectory of the particles. These trails are then etched with a chemical solution to make the tracks larger, so that they can finally be seen using an optical microscope.
Thank you,
HumanCondition
Hi Justin,
Yes I already have a vi that controls the stage.
So the camera is stationary above the stage. What I want to do is have the camera take pictures of the detector by moving the stage frame by frame: for example, starting at the top right corner of the detector (taking an image), ending at the top left corner (taking an image), then having the stage move down a row and starting the process over again until each frame (or piece) has been imaged.
My goal is that while the stage moves to each little frame of the detector, it autofocuses if necessary and takes an image for each frame.
Using autofocus requires moving the stage up and down.
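A rough sketch of that scan order in Python. The stage/camera calls (`stage.move_to`, `camera.autofocus`, `camera.snap`) are hypothetical placeholder names, not a real NI or microscope driver API; a serpentine order is shown as one option that avoids a long return move across the full detector between rows:

```python
def serpentine_scan(n_rows, n_cols):
    """Yield (row, col) frame indices in serpentine order: left-to-right on
    even rows, right-to-left on odd rows, so the stage never makes a long
    return move across the full detector width between rows."""
    for r in range(n_rows):
        cols = range(n_cols) if r % 2 == 0 else range(n_cols - 1, -1, -1)
        for c in cols:
            yield r, c

def scan_detector(stage, camera, n_rows, n_cols, step_um=1.0):
    """Drive a hypothetical stage/camera pair over every frame.
    The three method names below are placeholders for whatever the
    actual stage VI and camera interface expose."""
    images = []
    for r, c in serpentine_scan(n_rows, n_cols):
        stage.move_to(x=c * step_um, y=r * step_um)   # assumed driver call
        camera.autofocus()                            # assumed driver call
        images.append(camera.snap())                  # assumed driver call
    return images
```

In LabVIEW this maps onto nested loops (or one loop over the generated position list) wrapping the existing stage-control VI and a Vision acquisition step.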
HumanCondition -
Upgrade issues with Vision Assistant
Hi Guys,
I have encountered a problem with finding and analysing data from Vision Assistant since I upgraded to the new LabVIEW 2013 SP2 a couple of weeks back. Within my application I perform countless camera inspections using Vision Assistant; however, I have noticed that for 4 particular inspections Vision Assistant does not perform as it should. All the other inspections have worked perfectly, which is quite strange. Even the scripts are the same, along with the file directory locations. The first error was -1074396120 at IMAQ Match Pattern 3, saying a possible cause was that the input was not an image, etc. However, after instancing a known working script into Vision Assistant for that inspection I receive an image, but I don't receive the output from Vision Assistant that I am looking for.
Within my inspection I look for 4 templates plus an OCR check, and perform Boolean logic to give me a true/false result. When I run my application I always get a false result; even my probes tell me there was no template match. But when I open up the script and run it, all the templates are picked up! I created a new, less complicated inspection where I only look for one pattern, and again I get a false result. Has anyone got any ideas? I am a bit lost at the moment, considering none of the other inspections have been affected. I even created new file paths for the directories, but no joy!
Thanks in advance, guys!
Hi Chris,
I was away for a few weeks. I have attached an image with the template. The version I currently run is LabVIEW 2013 SP1 Development and Vision Development Module 2013 SP1; see the attachment (Doc1). The executable runs perfectly for all of my inspections, which number over 100, and only 2 or 3 fail because it will not find the template. The template has the same file paths as all the other inspections. When running the application I receive a fail because, I believe, it doesn't see a template. However, when I open the Vision Assistant Express VI and run through the script, it finds the templates with no problem. Any advice is welcome.
Damien
Attachments:
230.png 2220 KB
Edge Template2.png 223 KB
Doc1.docx 115 KB -
How do you track a moving object using Labview and Vision Assistant
I am using Vision and LabVIEW to create a program that tracks and follows a moving object using a high-end camera. Basically, it detects a foreign object, locks on to it, and follows it wherever it goes in a controlled-size room.
I have no idea how to do this. Please help. Or is there an example available?
Thanks.
Hello,
It sounds like you want to look into a Vision technique called Pattern Matching. Using our Vision tools, you can look for a image, called a template, within another image. Vision will scan over the entire image of interest trying to see if there are any matches with the template. It will return the number of matches and their coordinates within the image of interest. You would take a picture of the object and use it as the template to search for. Then, take a picture of the entire room and use pattern matching to determine at what coordinates that template is found in the picture. Doing this multiple times, you can track the movement of the object as it moves throughout the room. If you have a motion system that will have to move the camera for you, it will complicate matters very much, but would still be possible to do. You would have to have a feedback loop that, depending on where the object is located, adjusts the angle of the camera appropriately.
There are a number of different examples that perform pattern matching. Three are available in the Example Finder. In LabVIEW, navigate to "Help » Find Examples". On the "Browse" tab, browse according to "Directory Structure". Navigate to "Vision » 2. Functions". There are examples for "Pattern Matching", "Color Pattern Matching", and "Geometric Matching". There are also dozens of pattern matching documents and example programs on our website. From the homepage at www.ni.com, you can search the entire site from the top-right corner for the keywords "pattern matching".
If you have Vision Assistant, you can use this to set up the pattern matching sequence. When it is complete and customized to your liking, you can convert that into LabVIEW code by navigating to "Tools » Create LabVIEW VI..." This is probably the easiest way to customize any type of vision application in general.
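For intuition about what a pattern-matching step computes internally, here is a minimal normalized cross-correlation sketch in Python/NumPy. This is a slow, brute-force illustration of the general technique; NI's actual algorithm is far more optimized and is not claimed here:

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation: slide the template over the
    image and return the (row, col) of the best match and its score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    best_score, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz**2).sum()) * t_norm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# plant the template in an empty scene and find it again
template = np.arange(9, dtype=float).reshape(3, 3)
scene = np.zeros((20, 20))
scene[5:8, 7:10] = template
pos, score = match_template(scene, template)
```

Running the match repeatedly on new frames and recording `pos` each time gives exactly the coordinate track described above.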
I hope this helps you get started. Take care and good luck!
Regards,
Aaron B.
Applications Engineering
National Instruments -
2 line carplate number recognition using vision assistant software
I am doing my FYP (Final Year Project). I am using National Instruments Vision Assistant 2010 and want to recognize car plate numbers.
I have already got single-line car plate numbers working. My problem now is that it cannot recognize two-line car plate numbers. I am also still trying to auto-detect the car plate number and then recognize it; I don't want to manually select the car plate number. Please help me find a solution.
I have also attached a file (a photo of a two-line car plate number).
Thanks a lot
Attachments:
01112011247.jpg 1806 KB
Hello,
do you realize that by using the simple calibration model you disregard lens distortion? It is also advisable that the camera be perpendicular to the inspected object. A telecentric lens helps here, too.
The simple calibration performs a direct conversion from pixels to real-world units. For example, if you know that a real-world object you measure is 10 mm in size and it occupies 10 pixels in the image, then the conversion factor is 10 mm / 10 pix = 1 mm/pix.
So, if you want to measure an object with size U (in pixels), you use the equation:
X = 1 mm/pix * U (note that this is for the horizontal direction; the vertical direction is analogous)
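That conversion is small enough to express directly; a sketch using the same 10 mm / 10 px example:

```python
def mm_per_pixel(known_size_mm, measured_pixels):
    """Conversion factor from a reference object of known real-world size."""
    return known_size_mm / measured_pixels

def to_mm(size_pixels, factor):
    """X = factor * U for the horizontal direction (vertical is analogous)."""
    return factor * size_pixels

factor = mm_per_pixel(10.0, 10)   # 10 mm object spans 10 pixels -> 1 mm/pix
print(to_mm(25, factor))          # 25.0
```

Remember this holds only under the simple-calibration assumptions above (no lens distortion, camera perpendicular to the object).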
Best regards,
K
I am getting the following error message when I try invoking
a webservice.
Could not generate stub objects for web service invocation.
Name: ProgrammePrivilege. WSDL:
https://clientaccweb.reseaudistinction.com/CardHolderInfo.asmx?WSDL.
org.xml.sax.SAXException: Fatal Error: URI=null Line=11: The
element type "META" must be terminated by the matching end-tag "".
It is recommended that you use a web browser to retrieve and
examine the requested WSDL document for correctness. If the
requested WSDL document can't be retrieved or it is dynamically
generated, it is likely that the target web service has programming
errors.
The problem is, the webservice is working fine, the
application was working yesterday, the error message just appears
after a couple of days and I have to refresh the service in the CF
Administrator. Once I refresh it, everything starts working again.
Anyone else got this problem? ANY help would be appreciated!
If you guys need my code anyway, I can attach it, but like I said, everything works for a couple of days and then, out of the blue, it stops working, so I doubt that it's my CFINVOKE that's the problem...
Similar kind of problems here - reported back to Adobe a couple of months ago, so let's wait and hope for the best. My problems have related to registering multiple web services and executing them. One problem is that if I register two identical (and quite complex) web services, I can only execute one of them. After a CF restart, either of them works, but invoking the other doesn't.
For example; CF_Restart -> Try A first, A works -> B
doesn't. Also, CF_Restart -> Try B first, B works -> A
doesn't.
CFMX7.0.2, Apache 2.2, WinXP -
Use a web browser as the source for the vision assistant
I want to access an IP camera over the internet and then use its video feed to do some processing with Vision Assistant. I was wondering if someone could tell me whether this is possible and how it can be done. What I have so far is an application that works with local cameras, and I also have an example of a web browser in LabVIEW. I thought I could use the web browser and a local variable from the browser to get the image, but this can't be wired to Grab or Snap because it's not an image. Can someone please tell me how to convert the browser output into a video feed so I can use it in my application?
Crop the image out of the print screen. I imagine your screen will be a much larger resolution than the camera, and it will only take up a portion of your browser window.
Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously. -
Make a line fit with Vision Assistant for a polynomial function?!
Hello
I have the following problem to solve: I have a laser spot which draws a (nonlinear) line on a wall. I need to know the (exact) mathematical function of this line. I can get an image of the line, but I do not know how to extract the mathematical function, with a line fit, for example. If I could "convert" the line into points, I would use the line fit function of LabVIEW, which should work without problems.
Is there a way to solve the problem with Vision Assistant, or...?
Thanks in advance
Solved!
Go to Solution.
Hello,
By now I have learned that (almost) anything is possible. You can achieve this using LabVIEW, Matlab, C++, etc. In any case, getting the coordinates of a single laser line (a single line, so you don't need to find correspondences, as opposed to multi-line projection) should be really simple. If you place an appropriate filter in front of the camera, it is even simpler!
If you want proof that it can be done (and the description/procedure I used), check out the link in my signature and search for the laser scanner posts (I think there are three of them, if I remember correctly). I have made a really cheap scanner (total cost was around 45 EUR). The only problem is that it is not fully calibrated. If you want to make precise distance measurements, you need to calibrate it, for example using a body of known shape. There are quite a few calibration methods; search papers online.
Best regards,
K
Images in Vision Assistant are distorted from a FLIR thermo camera
I'm trying to view my FLIR A315 in Vision Assistant and MAX (not at the same time), but the images keep coming up distorted. I can tell data is being sent, because the histogram shows information, and if I play with the attributes I can clear the image up some, but it never gets as clear as in the FLIR software. I'm sure I'm missing something simple, but this seemed a good place to start. Thanks
Attachments:
Image in Vision Assistant.png 81 KB
Hi Recordpro,
It could be your pixel format settings. Open Measurement & Automation Explorer and select your camera. Then click the Acquisition Attributes tab at the bottom and change your pixel format. If that does not work, here are some good documents on GigE cameras.
Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part I
http://www.ni.com/white-paper/5651/en
Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part II
http://www.ni.com/white-paper/5750/en
Troubleshooting GigE Vision Cameras
http://www.ni.com/white-paper/5846/en
Tim O
Applications Engineer
National Instruments