Modifying VI - Vision Assistant

I have images that consist of an area of particles and an area with no particles. I am trying to fit a circle to the edge between the region where there are particles and the region where there are none. I want to use the find edge tool and find the pixel where this transition takes place for every row of pixels. For example, I want to draw a horizontal line that gives me the location of the edge, then move down one row and repeat. I have tried using Find Circle Edge, but since I am trying to fit a circle to an edge that isn't well defined, I need a lot more data points to average over. I figure there is a way to modify the VI to perform the process I described above. Any help would be much appreciated. I have attached an image to give you a better idea of what I'm trying to do.
Attachments:
half circle.JPG ‏554 KB
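
For the circle-fitting half of this (once there is one edge point per row), an algebraic least-squares fit is enough. A minimal numpy sketch, assuming "points" is an N-by-2 array of the (x, y) edge locations already extracted; the names are placeholders, not part of any NI VI:

    import numpy as np

    def fit_circle(points):
        # Solve x^2 + y^2 = a*x + b*y + c in the least-squares sense,
        # then recover center (a/2, b/2) and radius sqrt(c + cx^2 + cy^2).
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        cx, cy = a / 2.0, b / 2.0
        r = np.sqrt(c + cx**2 + cy**2)
        return cx, cy, r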

Hi Windom,
If Find Circle Edge is not working for you, I would suggest thresholding the image. Then you can use the morphology functions (such as Close, Fill Holes, and Erode) to further manipulate the image and get a stronger edge between the particle and particle-free regions.
You can use a For Loop (initialized to start looking at the top of the picture) and have it iterate vertically down the picture with the Edge Detector. You do that by changing the ROI Descriptor for the line you are detecting edges along, and you can then read the Edge Information out of the VI. These all need to be checked in the "Select Controls" menu, which is found at the bottom right of the Vision Assistant window.
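
Outside of LabVIEW, the same idea (threshold, morphology clean-up, then scan each row for the first transition) looks roughly like the OpenCV/numpy sketch below. It is only an illustration of the concept, not the VI that Vision Assistant generates, and the threshold value and kernel size are placeholders:

    import cv2
    import numpy as np

    img = cv2.imread("half circle.JPG", cv2.IMREAD_GRAYSCALE)

    # Threshold so the particle region becomes white; 128 is a placeholder,
    # a manual value from the histogram (or Otsu) may suit a weak edge better.
    _, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

    # Close/open to strengthen the boundary between the particle and
    # particle-free regions (the counterpart of Close / Fill Holes / Erode).
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # One edge point per row: the first column where the mask switches state.
    points = []
    for y in range(mask.shape[0]):
        cols = np.flatnonzero(np.diff(mask[y].astype(np.int16)))
        if cols.size:
            points.append((cols[0], y))
    points = np.asarray(points, dtype=float)
    # "points" can then be fed to a circle fit such as the one sketched above.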
I hope this helps, let me know if you need any further clarification.
Best Regards,
Nathan B
Applications Engineer
National Instruments
Attachments:
SelectROI.JPG ‏12 KB
Edge Info.JPG ‏10 KB
SelectControls.JPG ‏14 KB

Similar Messages

  • Use a web browser as the source for the vision assistant

    I want to access an IP camera over the internet and then use its video feed to do some processing with the Vision Assistant. I was wondering if someone could tell me whether this is possible and how it can be done. What I have so far is an application that works with local cameras, and I also have an example of a web browser in LabVIEW. I thought I could use the web browser and a local variable from the browser to get the image, but this can't be wired to Grab or Snap because it is not an image. Can someone please tell me how to convert the browser output into a video feed so I can use it in my application?

    Crop the image out of the print screen. I imagine your screen will be a much higher resolution than the camera, so the feed will only take up a portion of your browser window.
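
    If you do go the screen-capture route, the crop itself is the easy part. A rough Python sketch using Pillow's ImageGrab (the bounding box is made up and has to match where the feed sits in your browser window):

        from PIL import ImageGrab
        import numpy as np

        # (left, top, right, bottom) of the video area -- placeholder values.
        bbox = (100, 200, 740, 680)
        frame = np.array(ImageGrab.grab(bbox=bbox))  # RGB array of just the feed
        # "frame" can now be handed to the same processing you use for
        # images coming from local cameras.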
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

  • Make a Line Fit with Vision Assistant for a polynomial function?!

    Hello
    I have the following problem to solve: I have a laser spot which draws a (non-linear) line on a wall. For this line I need to know the (exact) mathematical function. I can get an image of the line, but I do not know how to extract the mathematical function from it, with a line fit for example. If I could "convert" the line into points, I would use the line-fit functions of LabVIEW, which should work without problem.
    Is there a way to solve the problem with the Vision Assistant, or..?
    Thanks in advance

    Hello,
    By now I have learned that (almost) anything is possible. You can achieve this using LabVIEW, Matlab, C++, etc. In any case, getting the coordinates of a single laser line should be really simple (with a single line you don't need to find correspondences, as opposed to multi-line projection). If you place an appropriate filter in front of the camera, it is even simpler!
    If you want proof that it can be done (and the description/procedure I used), check out the link in my signature and search for the laser scanner posts (I think there are three of them, if I remember correctly). I have made a really cheap scanner (the total cost was around 45 EUR). The only problem is that it is not fully calibrated. If you want to make precise distance measurements, you need to calibrate it, for example using a body of known shape. There are quite a few calibration methods - search the published papers online.
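
    For the "convert the line into points" step, one common trick is to take the brightest pixel in every column (where the laser is) and then fit a polynomial to those points. A hedged numpy/OpenCV sketch; the file name, intensity cutoff, and polynomial order are placeholders:

        import cv2
        import numpy as np

        img = cv2.imread("laser_line.png", cv2.IMREAD_GRAYSCALE)

        # One point per column: the row with the highest intensity. A filter in
        # front of the camera makes this far more robust, as mentioned above.
        cols = np.arange(img.shape[1])
        rows = img.argmax(axis=0)
        valid = img.max(axis=0) > 50      # skip columns the laser does not hit
        x, y = cols[valid], rows[valid]

        # Fit a polynomial to the extracted points; LabVIEW's own fitting VIs
        # could be used on the same x/y arrays instead.
        coeffs = np.polyfit(x, y, deg=3)
        print(np.poly1d(coeffs))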
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Images in Vision Assistant are distorted from a FLIR thermo camera

    I'm trying to view my FLIR A315 in Vision Assistant and MAX (not at the same time), but the images keep coming up distorted. I can tell there is data being sent, because the histogram shows information, and if I play with the attributes I can clear the image up somewhat, but it never gets as clear as in the FLIR software. I'm sure I'm missing something simple, but this post seemed a good place to start. Thanks
    Attachments:
    Image in Vision Assistant.png ‏81 KB

    Hi Recordpro,
    It could be your pixel format settings. Open Measurement & Automation Explorer and select your camera. Then click the Acquisition Attributes tab at the bottom and change the pixel format. If that does not work, here are some good documents on GigE cameras:
    Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part I
    http://www.ni.com/white-paper/5651/en
    Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part II
    http://www.ni.com/white-paper/5750/en
    Troubleshooting GigE Vision Cameras
    http://www.ni.com/white-paper/5846/en
    Tim O
    Applications Engineer
    National Instruments

  • Where can I get detailed documentation on the 'IVA' vision VIs created by NI Vision Assistant

    I use the NI Vision Assistant a lot. Usually I ask it to create a VI automatically from the script. However, the resulting VI contains several 'IVA' VIs that are completely undocumented (from what I can see).
    Where can I get more info on these? Examples are IVA Grayfilters - Nth order.vi and IVA Store Particles Results.vi (two of many such VIs). They look like very useful and interesting functions, but what exactly they do I can only guess.
    Any ideas where I can get information on these functions?

    I really appreciate your offer to help; however, it would probably be better for me in the long run if I could get this information from the official help on these functions.
    As an example, I am currently wondering what IVA Store Particles Results.vi does. It is called after IMAQ Particle Analysis.vi but just seems to waste precious processor resources. When do I need to include it (as Vision Assistant does by default) and when can I cut it out? It must have some purpose, but who knows what that might be?
    Thanks.

  • LabVIEW Vision Assistant Histogram Problem

    Good Morning,
    Unfortunately I am not yet very familiar with the Vision package in LabVIEW. I want to load a picture and evaluate a region of it for its black, white, and gray values. I can do this with the Vision Assistant Histogram feature, but when I build the VI with Vision Assistant, nothing comes out at the Histogram output. I have checked the Histogram output wire, but no data comes out of it. What is my mistake?
    Thanks a lot
    German original (translated):
    Hello everyone,
    unfortunately I am not yet really familiar with the Vision package. I would like to read out the black, white, and gray values of a section of an image and evaluate them. Thanks to the Vision Assistant I can generate this with a few mouse clicks, but at the end of the VI I get no data. Even when I connect the source and add an indicator, nothing comes out. What is my mistake?
    Many thanks
    -motecpam
    Labview Vision 2011 Servicepack 1
    with Vision Development Modul
    Attachments:
    Screenshot_VI.jpg ‏230 KB
    Screenshot_Vision_Assistant.jpg ‏313 KB
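
    For comparison, the histogram itself is a one-liner outside LabVIEW. A minimal numpy sketch of "histogram of a rectangular region of a grayscale image"; the file name and ROI coordinates are placeholders:

        import cv2
        import numpy as np

        img = cv2.imread("picture.png", cv2.IMREAD_GRAYSCALE)
        roi = img[100:300, 150:400]              # placeholder region (y, x)

        # 256-bin histogram of the gray values; bin 0 is pure black, 255 pure white.
        hist, _ = np.histogram(roi, bins=256, range=(0, 256))
        print("black pixels:", hist[0], "white pixels:", hist[255])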

    Hello Tropper,
    thank you very much for the reply.
    Unfortunately I still get no output at the histogram.
    The histogram is displayed empty.
    I built it exactly the way you showed.
    Attachments:
    Unbenannt.jpg ‏117 KB

  • Pattern matching with Vision Assistant only detects the template that was created during configuration

    Hi all,
    I am using Vision Assistant to do pattern matching. It is able to find the pattern using the template created in Vision Assistant, but if I load some other pattern it is not detected.
    I have attached my VI file.
    Attachments:
    PM_light.vi ‏266 KB

  • Problem with advanced threshold in OCR - Vision Assistant 2013

    I'm facing a problem with Vision Assistant 2013
    In the OCR character set file, the advanced threshold data is always fixed at an upper value of 255 and "optimize for speed" is checked.
    I edit the values and reopen the file, but nothing changes.
    Is anyone facing the same problem?
    Attachments:
    Untitled.png ‏7 KB

    Hi Paolo,
    Thanks for your answer. Yes, I have seen the examples and I'm familiar with the use of the OCR VI. I have already used it a couple of times with good results.
    My problem came last week, while I was trying to run OCR on my image from my LabVIEW code: the algorithm did not detect what I was expecting.
    I decided to run the test in Vision Assistant and it worked perfectly. I assumed my code had different parameters, so I decided to generate a VI from Vision Assistant and run it in LabVIEW on the same image to verify.
    I did not change anything in the VI (all parameters are the same) and used the same image. Surprisingly, the results differ between the assistant and the VI. That really surprises me!
    I'll start a new thread as you recommended. Hope to find the solution soon. Thanks
    Regards,
    Esteban

  • How to train OCR using VISION ASSISTANT for multiple character recognition

    I have tried training OCR using Vision Assistant for character recognition. For the process I used a fixed-focus camera, but the characters I had trained were not detected. Could you please suggest a workable solution to this problem?
    Thank you.
    I have attached my project description and also the .vi file of my work towards it.
    Attachments:
    Project phase I.vi ‏138 KB
    WP_20140814_17_27_38_Pro.jpg ‏1444 KB

    Can you post a real jpg instead of renaming a bmp to jpg?
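
    A quick way to check whether a ".jpg" really is a JPEG rather than a renamed BMP is to look at the first bytes of the file; a small Python sketch:

        # JPEG files start with FF D8 FF; BMP files start with the ASCII text "BM".
        def real_format(path):
            with open(path, "rb") as f:
                head = f.read(3)
            if head.startswith(b"\xff\xd8\xff"):
                return "JPEG"
            if head.startswith(b"BM"):
                return "BMP (renamed?)"
            return "unknown"

        print(real_format("WP_20140814_17_27_38_Pro.jpg"))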

  • Profile Performance in LabVIEW vs. Performance Meter in Vision Assistant: results don't match

    Hi everyone,
    I faced a strange problem about performance timing between these two measurements.
    Here is my test
    - I used the built-in Bracket example provided with Vision Assistant. It uses two pattern matches, one edge-detection algorithm, and two calipers (one for calculating a midpoint and one for finding the angle between three points).
    - When I ran the script in Vision Assistant, it took an average inspection time of 12.45 ms (this varies between 12 and 13 ms; my guess is the small variation is due to my CPU/processing load).
    - Then I converted the script to a VI and used Profile Performance in LabVIEW, and surprisingly it shows far more than expected, almost ~300 ms (at first I thought it was because of rotated search etc., but none of that makes sense to me here).
    Now my questions are:
    - Are the algorithms used in the two tools the same? (I thought they were.)
    - IMAQ Read Image and Vision Info takes more than 100 ms in LabVIEW, which is not counted in Vision Assistant. Why? (I thought the template image might already be loaded into the cache; am I right?)
    - What about IMAQ Read File? (It is not counted in Vision Assistant; in LabVIEW it takes around 15 ms.)
    - The same for the pattern match: in Vision Assistant it takes around 3 ms (also not consistent); in LabVIEW it takes almost three times as long (around 15 ms).
    - Is this a bug, am I missing something, or is this how it is expected to behave?
    Please find attachments below.
    -Vision Assistant-v12-Build 20120605072143
    -Labview-12.0f3
    Thanks
    uday,
    Please Mark the solution as accepted if your problem is solved and help author by clicking on kudoes
    Certified LabVIEW Associate Developer (CLAD) Using LV13
    Attachments:
    Performance_test.zip ‏546 KB
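
    One way to make the two numbers comparable is to keep the one-time costs (reading the image and template from disk) outside the timed loop and average only the per-inspection work over many runs. A generic timing sketch in Python; the OpenCV call stands in for the IMAQ steps (it is not the same algorithm) and the file names are placeholders:

        import time
        import cv2

        image = cv2.imread("bracket.png", cv2.IMREAD_GRAYSCALE)      # one-time cost
        template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # one-time cost

        # Warm up once so caching and allocation do not count, then time only
        # the per-inspection work, averaged over many iterations.
        cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        n = 100
        t0 = time.perf_counter()
        for _ in range(n):
            cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        print("avg per inspection: %.2f ms" % ((time.perf_counter() - t0) / n * 1e3))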

    Hmm Bruce, thanks again for the reply.
    - When I first read your reply I was OK with it, but after reading it several times I realized you hadn't looked at my code and explanation first.
    - I have added the code and screenshots of the profiles in both VA and LabVIEW.
    In both Vision Assistant and LabVIEW:
    - I am loading the image only once.
    This is accounted for in LabVIEW but not in VA, because the image is already in the cache. But what about the time to put the image into the cache?
    I do understand that when capturing the image live from a camera things are completely different.
    - Loading the template image multiple times?
    This is where I was very confused, because I hadn't even thought of it. I am well aware of that task.
    - Run Setup Match Pattern once?
    Sorry, so far I haven't seen any example that does pattern matching on multiple images and runs Setup Match Pattern every time. But the time is negligible, so I wouldn't mind.
    - Loading images for processing and loading a different template for each image?
    You are completely mistaken here, and I don't see how it relates to my specific question.
    Briefly explaining it again:
    - I open an image in both LabVIEW and VA.
    - I create two pattern match steps and the calipers (negligible).
    - The pattern match step in VA shows a longest time of 4.65 ms, whereas IMAQ Match Pattern shows 15.6 ms.
    - I am convinced about the IMAQ Read and Vision Info timing, because it only counts in the initial phase when running an inspection on multiple images. But I am running it only once, so shouldn't Vision Assistant show that time as well?
    - I do understand that LabVIEW has many more features for parallel execution and other things than Vision Assistant.
    - About the 100 ms versus 10 ms, I completely agree. I take the Vision Assistant profile timings as the ideal values (correct me if I am wrong).
    - I especially like the last line: you cannot compare the speeds of the two methods.
    Please let me know whether I am thinking about this completely wrongly or am at least somewhere on the right path.
    Thanks
    uday,
    Please Mark the solution as accepted if your problem is solved and help author by clicking on kudoes
    Certified LabVIEW Associate Developer (CLAD) Using LV13

  • Kinect - Vision Assistant

    Hi all,
    I am about to start working on a project with the Kinect and Vision Assistant (and later with LabVIEW), and I would appreciate it if you could give me some advice on what I should do to make Vision Assistant let me acquire images from the Kinect camera.
    I have already installed both the Kinect SDK and the Kinect Developer Toolkit (versions 1.7 and 1.7.0 respectively) on my computer, as well as LabVIEW 2013 and the Vision Development Module, also version 2013.
    I have also installed the corresponding drivers from this link: http://joule.ni.com/nidu/cds/view/p/id/4409/lang/en
    The Kinect camera is recognised properly by the computer, as shown in the Device Manager in the Control Panel. I have even tried the Kinesthesia example VI and it works fine, so I guess the issue of it not being recognised in Vision Assistant has nothing to do with the installation of the camera.
    Any help would be really appreciated
    Best regards!

    Hello,
    you can also use PCL (point cloud library - search google) to build a DLL (dynamic link library), which you can call in Labview. For example:
    https://decibel.ni.com/content/blogs/kl3m3n/2013/01/08/using-microsoft-kinect-to-visualize-3d-object...
    In this way, you can obtain both RGB and calibrated XYZ data (i.e., point cloud with texture).
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Vision Assistant 2014 (and DLL inside of LV) repeatedly crashes while reading multiline OCR

    I am working on an OCR application. With certain files (not all) the multiline function just crashes, no questions asked. I set the ROI to the targeted text, and without fail it causes a crash.
    Is anyone aware of this problem, and if so, is there a solution?
    Additional information: my .abc file was created from scratch with default settings and trained characters; nothing in it should be causing the crash. The images are simple scanned JPG files.
    Edit: I tested it. The crash still happens without any character set loaded and with no changes to the threshold, size, or read options.

    I cut out an offending section of one of the files. It is attached.  To replicate the error:
    1. Load image
    2. Identification -> OCR/OCV
    3. New Character Set File...
    4. Select entire text.
    NI Vision Assistant 2014, 64 bit. 
    Edit: I posted this before I saw your request. With the full-sized image it crashes every time, even when it is called from inside LabVIEW (the DLL crashes then); that is how I noticed the bug - my program kept crashing. With this cut-out image it seems to crash only inside the training interface, but again it does so reliably.
    Attachments:
    crash.png ‏731 KB

  • Extract color planes in vision assistant

    Hi!
    When I try to use the "find edges" step in Vision Builder AI, a message tells me that the step supports only grayscale 8-bit images, while mine is 32-bit color, and that I should insert a Vision Assistant step to extract color planes.
    But in the Vision Assistant step "the image is not accessible anymore", even though the step "original image" appears in the box below. I can insert the image I want to work with using the "get image" step, but then it works only with that one selected picture and doesn't cycle through the folder.
    What did I do wrong?
    Sophia
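
    Conceptually, the extract-color-planes step just turns the 32-bit color image into an 8-bit single-plane image before the edge detection runs. Outside Vision Builder that, plus looping over a folder, looks roughly like the OpenCV sketch below; the folder path and Canny thresholds are placeholders:

        import glob
        import cv2

        for path in glob.glob("C:/images/*.png"):       # placeholder folder
            color = cv2.imread(path)                    # 3-plane color image
            # Take a single 8-bit plane (here the green plane); converting to
            # grayscale/luminance with cvtColor would also work.
            plane = color[:, :, 1]
            edges = cv2.Canny(plane, 50, 150)           # edge detection needs 8-bit
            print(path, "edge pixels:", int((edges > 0).sum()))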

    Hello Sophia,
    You can see the panel code for the Vision Assistant step in the attachment.
    Cycling through the folder with Vision steps is not possible (see the link).
     http://engineering.natinst.com/Applications/PSC.nsf/WebAllInfo/d4cdf720a4b9dc6186256d9800676a2d?opendocument
    Kind regards.
    Elmar W.
    Attachments:
    Panel.png ‏3 KB

  • NI Vision Builder vs. LabVIEW RT + NI Vision Assistant

    Hello
    I’m designing a vision application which should be able to run stand-alone (EVS-1463RT). The app will have approximately 30 inspection states (reading gauge values, checking for the presence of specific objects, etc.). The other requirements are communication with non-NI devices via TCP/IP and logging pictures to FTP.
    Now I’m thinking about two possible solutions:
           Create an inspection with NI Vision Builder AI
           Create a LabVIEW RT app and build the inspection states with NI Vision Assistant
    I have to say that the first solution is not my favorite, because I tried to implement the TCP/IP communication in NI Vision Builder and it worked, but the use of the server is limited. On the other hand, building the inspection is “easy”.
    The second solution has the following advantages for me: better control of the app, maybe a better chance to optimize the code, and implementing my own TCP/IP server. My biggest concern is using NI Vision Assistant to generate the inspection states.
    In conclusion I have to say that I’m not experienced with vision applications, but LV RT is no problem for me.
    Thanks for any opinions
    Jan

    Hi Jan,
    > I have to say that the first solution is not my favorite, because I tried to implement the TCP/IP communication in NI Vision Builder and it worked, but the use of the server is limited.
    Could you give more feedback on this point? What do you mean by "the use of the server is limited"? More precise feedback and suggestions would help us improve the functionality.
    What I would recommend you look into is the VBAI API. You can find examples in <Program Files>\National Instruments\<Vision Builder AI>\API Examples\LabVIEW Examples
    This feature allows you to run a VBAI inspection within your LabVIEW application and retrieve results that you can send using a TCP implementation of your own in LabVIEW, without having to use the VBAI TCP functionality.
    You retain the configuration features of the vision part of the application and can add extra code in LabVIEW.
    The API functions basically let you open a VBAI inspection file, run it synchronously or asynchronously, and retrieve images and results.
    As you mentioned, the other solution is to implement your state diagram in LabVIEW and use the Vision Assistant Express VI in the different states. What VBAI gives you that Vision Assistant doesn't is the pass/fail limits for each step (and the state diagram).
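
    As a very rough illustration of the "TCP implementation of your own" idea for publishing results (generic socket code, not the VBAI API; the port and message format are made up):

        import json
        import socket

        # Minimal server that sends each inspection result to a connected client
        # as one JSON line; the LabVIEW/VBAI side would produce these results.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", 5005))              # placeholder port
        srv.listen(1)
        conn, _ = srv.accept()

        def send_result(step_name, passed, value):
            msg = json.dumps({"step": step_name, "pass": passed, "value": value})
            conn.sendall((msg + "\n").encode())

        send_result("gauge_reading", True, 12.7)  # hypothetical inspection result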
    Best regards,
    Christophe

  • Can not download update Vision Assistant Version 7.1

    I really cannot directly download the Vision Assistant 7.1 update from this site: http://digital.ni.com/softlib.nsf/websearch/AF2B1355764E96C786256E9A00544EB9?opendocument&node=13206...
    Is there any alternative website from which I can easily download the update?
    thanks

    Finally, I was able to directly download that Vision Assistant update from this site: http://digital.ni.com/softlib.nsf/websearch/AF2B1355764E96C786256E9A00544EB9?opendocument&node=13206...
    Unfortunately it is not version 7.1 but 7.0.1!
    I need Vision Assistant 7.1 because I need to install usb_installer_setup.exe (USB webcam).
    I'm using LabVIEW 7.1.
    Please help me.
