Changing the Vision Assistant template for image matching at runtime

Hey guys, I'm using LabVIEW to detect an image and record its x-y coordinates.
Does anybody know how I can automatically change, at runtime, the template used for matching?
I get my template from the Vision Assistant block.
Please help,
Kevin
Solved!
Go to Solution.

Hey Kevin,
If you configure the Vision Assistant block and place it on your block diagram, you can then right click on it and select "Open Front Panel."
This shows you the LabVIEW code that the Express VI runs behind the scenes. Here you can find any parameters you have set, placed as constants. For example, the file path you chose for the template appears on the block diagram as a file path constant.
You can right-click on this and change it to a control, and then link this control to the connector pane.
Reference: http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/building_connector_pane/
After saving this VI and closing it, you will notice some changes to your main VI. Firstly, the Vision Assistant block will have changed colour to yellow; secondly, it will have one more input added to the bottom (if you drag it down), which is for the control you set in the subVI.
The attached example then adds a case structure to select the file path you want to use as a template during runtime. It would be better practice to use an event structure, but hopefully this works for demonstrative purposes.
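Outside LabVIEW, the same runtime-template idea can be sketched in plain Python (purely illustrative: the toy arrays and the brute-force search below stand in for IMAQ Match Pattern, which uses its own, much faster algorithm):

```python
# Sketch of runtime template selection + matching (pure Python, toy data).
# In LabVIEW this corresponds to wiring a file-path control into the
# Vision Assistant subVI and letting a case/event structure pick the path.

def match_template(image, template):
    """Brute-force search: return (x, y) of the best match
    (minimum sum of squared differences)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best = (None, float("inf"))
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum(
                (image[y + j][x + i] - template[j][i]) ** 2
                for j in range(th) for i in range(tw)
            )
            if ssd < best[1]:
                best = ((x, y), ssd)
    return best[0]

# "Templates" selected at runtime -- in LabVIEW these would be file paths.
templates = {
    "corner": [[9, 9], [9, 0]],
    "bar":    [[5, 5, 5]],
}

image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 0],
    [5, 5, 5, 0, 0],
]

selection = "corner"            # runtime choice (the case-structure selector)
print(match_template(image, templates[selection]))   # -> (1, 1)
```

The dictionary lookup plays the role of the case structure: whatever key arrives at runtime decides which template gets wired into the match.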
Let me know if this helps out Kevin.
Thanks,
Justin, Applications Engineer
Attachments:
Vision example.PNG 40 KB

Similar Messages

  • How to train OCR using VISION ASSISTANT for multiple character recognition

    Sir, I have tried training OCR using Vision Assistant for character recognition. For the process I used a fixed-focus camera, but the characters I had trained were undetectable. So sir, please provide me a viable solution to the problem.
    Thank you.
    I have attached my project description and also the .vi file of my work towards it.
    Attachments:
    Project phase I.vi 138 KB
    WP_20140814_17_27_38_Pro.jpg 1444 KB

    Can you post a real jpg instead of renaming a bmp to jpg?

  • Profile Performance in LabVIEW vs Performance Meter in Vision Assistant: Doesn't match

    Hi everyone,
    I faced a strange problem about performance timing between these two measurements.
    Here is my test
    - Used the built-in example provided by LabVIEW in Vision Assistant (the Bracket example): two pattern matches, one edge-detection algorithm and two calipers (one for calculating a midpoint, the other for finding the angle between three points).
    - When I ran the script provided by NI in Vision Assistant, it took an average inspection time of 12.45 ms (this varies from 12-13 ms; my guess is the small variation is due to CPU/processing load).
    - Then I converted the script to a VI and used Profile Performance in LabVIEW, and surprisingly it shows far more than expected, almost ~300 ms (at first I thought it was because of rotated search etc., but none of that makes sense to me here).
    Now my questions are:
    - Are the algorithms used in both tools the same? (I thought they were.)
    - IMAQ Read Image And Vision Info takes more than 100 ms in LabVIEW, which doesn't count for Vision Assistant. Why? (I thought the template image might be loaded into cache; am I right?)
    - What about IMAQ Read File? (Doesn't count for Vision Assistant? In LabVIEW it takes around 15 ms.)
    - Same for pattern match: in Vision Assistant it takes around 3 ms (also not consistent); in LabVIEW it takes almost 3 times that (around 15 ms).
    - Is this a bug, am I missing something, or is this how it is expected to behave?
    Please find attachments below.
    -Vision Assistant-v12-Build 20120605072143
    -Labview-12.0f3
    Thanks
    uday,
    Please Mark the solution as accepted if your problem is solved and help author by clicking on kudoes
    Certified LabVIEW Associate Developer (CLAD) Using LV13
    Attachments:
    Performance_test.zip 546 KB
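    A generic way to see how much of the LabVIEW number is one-time setup cost rather than per-inspection cost is to time the two separately. A plain-Python sketch of that profiling idea (the functions are dummies standing in for template loading and pattern matching, not real IMAQ calls):

```python
import time

def load_template():
    # Stand-in for IMAQ Read Image And Vision Info (one-time file I/O).
    return [[1, 2], [3, 4]]

def inspect(template):
    # Stand-in for the per-image pattern match.
    return sum(sum(row) for row in template)

t0 = time.perf_counter()
template = load_template()          # paid once, like loading the template file
t_load = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):                # paid per image in a real inspection loop
    inspect(template)
t_match = (time.perf_counter() - t0) / 100

print(f"one-time load: {t_load*1e3:.3f} ms, per-inspection: {t_match*1e3:.6f} ms")
```

    If the per-inspection figure is close to Vision Assistant's number once the one-time costs are excluded, the two profiles are not actually in disagreement.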

    Hmm Bruce, thanks again for the reply.
    - When I first read your reply, I was OK with it. But after reading it multiple times, I realised that you didn't check my code and explanation first.
    - I have added code and screenshots of the profile in both VA and LabVIEW.
    In both Vision Assistant and LabVIEW:
    - I am loading the image only once.
    This is accounted for in LabVIEW but not in VA, because it is already in cache. But what about the time to put the image into cache?
    I do understand that when we are capturing the image live from a camera, things are completely different.
    - Loading the template image multiple times?
    This is where I was very confused, because I didn't even think of it. I am well aware of that task.
    - Run Setup Match Pattern once?
    Sorry, so far I haven't seen any example that does pattern matching for multiple images with Setup Match Pattern run every time. But it is a negligible time, so I wouldn't mind.
    - Loading images for processing and loading a different template for each image?
    You are completely mistaken here, and I don't see how it relates to my specific question.
    Briefly explaining again:
    - I open an image in both LabVIEW and VA.
    - I create two pattern-match steps, and calipers (negligible).
    - The pattern-match step in VA shows a longest time of 4.65 ms, whereas IMAQ Match Pattern showed me 15.6 ms.
    - I am convinced about the IMAQ Read Image And Vision Info timing, because it will only count in the initial phase when running a multiple-image inspection.
    But I am running only once, so Vision Assistant should show that time too, shouldn't it?
    - I do understand that LabVIEW has many more features for parallel execution and other things than Vision Assistant.
    - Yeah, about the 100 ms vs 10 ms, I completely agree. I take the Vision Assistant profile timing as the ideal values (correct me if I am wrong).
    - I especially like the last line: you cannot compare the speeds of the two methods.
    Please let me know if I am thinking in a completely wrong way, or at least somewhat on the right path.
    Thanks
    uday,

  • Make a Line Fit with Vision Assistant for a polynominal function?!

    Hello
    I have the following problem to solve: I have a laser spot which draws a (non-linear) line on a wall. For this line I need to know the (exact) mathematical function. I can get an image of the line, but I do not know how to extract the mathematical function, with a line fit for example. If I could "convert" the line into points, I would use the line-fit function of LabVIEW, which should work without problems.
    Is there a way to solve the problem with the Vision Assistant, or..?
    Thanks in advance
    Solved!
    Go to Solution.

    Hello,
    By now I have learned that (almost) anything is possible. You can achieve this using LabVIEW, Matlab, C++, etc... In any case, getting the coordinates of a single laser line (a single line, so you don't need to find correspondences, as opposed to multi-line projection) should be really simple. If you place an appropriate filter in front of the camera, it is even simpler!
    If you want proof it can be done (and the description/procedure I used), check out the link in my signature and search for the laser scanner posts (I think there are three of them, if I remember correctly). I have made a really cheap scanner (total cost was around 45 EUR). The only problem is that it is not fully calibrated. If you want to make precise distance measurements, you need to calibrate it, for example using a body of known shape. There are quite a few calibration methods; search in papers online.
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
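    Once the laser line has been reduced to (x, y) pixel coordinates, the fit itself is ordinary least squares. A minimal pure-Python sketch for a degree-2 polynomial (illustrative; inside LabVIEW you would wire the points into the built-in polynomial-fit VI instead):

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations.
    Returns coefficients [a0, a1, ...] for a0 + a1*x + a2*x^2 + ..."""
    n = degree + 1
    # Normal equations A c = b, with A[i][j] = sum(x^(i+j)), b[i] = sum(y*x^i)
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / A[r][r]
    return coeffs

# Points sampled from y = 1 + 2x + 3x^2 (e.g. extracted laser-line pixels)
xs = [0, 1, 2, 3, 4]
ys = [1 + 2 * x + 3 * x ** 2 for x in xs]
print([round(c, 6) for c in polyfit(xs, ys, 2)])   # -> [1.0, 2.0, 3.0]
```

    The hard part in practice is the extraction step (thresholding the laser line and taking, say, the brightest pixel per column), not the fit.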

  • Vision assistant subtract images

    How do you subtract two images?
    I have loaded two images in Vision Assistant and I can "flip" between them, but I cannot figure out how to subtract them. I guess I need to use the image buffer, but it seems to load the same image into the buffer as the one I am working on.

    Hi sorinpetrila,
    You would get more responses if you create a new post.
    Working with buffers in IMAQ is fairly simple. First you need to figure out whether you want a sequence (a single run-through where you fill up the buffer), or a sequence ring, where you start to overwrite the start of your buffer after X number of images.
    You would use the first option for a single acquisition where you have enough memory to store all the images you want, which is normal if you want to acquire e.g. 100 images and then process them afterwards.
    The second option you would use in 2 circumstances:
    1) You have images coming in bursts: 100 pictures, wait 1 second, 100 pictures, wait 1 second.
    2) It takes you longer to process an image than to acquire it (e.g. 5 times as long), and you would therefore need to store the last 5 images in the buffer so you can handle the longer processing time without slowing down acquisition.
    The best way to learn these techniques is to look at the shipping examples:
    LabVIEW --> Help --> Find Examples
    Browse for task
    Hardware Input and Output --> Vision Acquisition --> NI-IMAQdx
    High Level Sequence.vi shows you option 1
    Low Level Grab shows you option 2, as far as I remember (I don't have my LV computer here)
    National Instruments has a Machine Vision course which thoroughly walks through these principles and has exercises on both examples.
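    The ring option (2) is essentially a fixed-depth circular buffer: acquisition keeps appending, and frames older than the buffer depth are overwritten. A toy Python sketch of that behaviour (illustrative only; in reality NI-IMAQdx manages the buffers inside the driver):

```python
from collections import deque

BUFFER_DEPTH = 5                 # keep only the 5 most recent images
ring = deque(maxlen=BUFFER_DEPTH)

for frame_number in range(12):   # camera acquiring faster than we process
    ring.append(frame_number)    # the oldest frame is overwritten automatically

# Processing now sees only the last BUFFER_DEPTH frames
print(list(ring))                # -> [7, 8, 9, 10, 11]
```

    The buffer depth is the slack between acquisition rate and processing rate: five buffered frames cover a processing step that is up to five times slower than acquisition.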
    Best Regards
    Anders Rohde, CLD, Platinum Applications Engineer

  • Upgrade issues with Vision Assistant

    Hi Guys,
    I have encountered a problem with finding and analysing data from Vision Assistant since I upgraded to the new LabVIEW 2013 SP2 a couple of weeks back. Within my application I perform countless camera inspections using Vision Assistant; however, I have noticed that for 4 particular inspections my Vision Assistant does not perform as it should. All other inspections have worked perfectly, which is quite strange. Even the scripts are the same, along with the file directory locations. The first error was -1074396120 at IMAQ Match Pattern 3, saying the possible reason was that the input was not an image, etc. However, after inserting a known working script into Vision Assistant for that inspection, I receive an image, but I don't receive the output from Vision Assistant that I am looking for.
    Within my inspection I look for 4 templates and an OCR check etc., and perform Boolean logic to give me a true/false result. When I run my application I always get a false result; even my probes tell me there was no template match. But when I open up the script and run it, all the templates are picked up! I created a new, less complicated inspection where I only look for one pattern, and again I get a false result. Has anyone got any ideas? I am a bit lost at the moment, considering none of the other inspections have been affected. I even created new file paths for the directories, but no joy!
    Thanks in advance guys! 

    Hi Chris,
    I was away for a few weeks. I have attached an image with the template. The version I currently run is LabVIEW 2013 SP1 Development and the Vision Development Module 2013 SP1; see the attachment (Doc1). The executable runs perfectly for all of my inspections, which number over 100, and only 2 or 3 fail because it will not find the template. The template has the same file paths as all the other inspections. When running the application I receive a fail because, I believe, it doesn't see the template. However, when I open the Vision Assistant Express VI and run through the script, it finds the templates no problem. Any advice is welcome.
    Damien
    Attachments:
    230.png 2220 KB
    Edge Template2.png 223 KB
    Doc1.docx 115 KB

  • Extract color planes in vision assistant

    Hi!
    When I try to use the "find edges" step in Vision Builder AI, a message tells me that the step supports only grayscale 8-bit images, while mine is a 32-bit color image, and that I should insert a Vision Assistant step to extract color planes.
    But in the Vision Assistant, "the image is not accessible anymore", even though below in the box there is the step "original image". I can insert the image I want to work with using the step "get image", but then it works only with the one selected picture and doesn't cycle through the folder.
    What did i do wrong?
    Sophia

    Hello Sophia,
    You can see the panel code for the Vision Assistant in the attachment.
    Cycling through a folder with vision steps is not possible (see the link).
     http://engineering.natinst.com/Applications/PSC.nsf/WebAllInfo/d4cdf720a4b9dc6186256d9800676a2d?opendocument
    Kind regards.
    Elmar W.
    Attachments:
    Panel.png 3 KB
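    For reference, extracting the luminance plane from a colour image is just a per-pixel weighted sum. A pure-Python sketch using the common Rec. 601 luma weights (Vision Assistant's Extract Color Planes step does the equivalent internally):

```python
def luminance_plane(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples)
    to an 8-bit grayscale plane using Rec. 601 luma weights."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(luminance_plane(rgb))   # -> [[76, 150], [29, 255]]
```

    Extracting a single colour plane (just R, G, or B) is even simpler: keep one component per pixel and drop the other two.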

  • Vision assistant steps to be followed for pattern matching

    I am acquiring color images of hand movements using the web camera of a laptop.
    I want to process the acquired images to use for pattern matching.
    What are the steps to be followed to achieve this task?

    Next, wire the pattern extracted in the previous step into the pattern-search function block (parameters such as rotation angle and minimum score go into the SETTINGS control), then take the outputs of the search function to get the position values, which are displayed as indicators on the front panel.
    Atom
    Certified LabVIEW Associate Developer

  • Use a web browser as the source for the vision assistant

    I want to access an IP camera over the internet and then use its video feed to do some processing with the Vision Assistant. I was wondering if someone could tell me whether this is possible and how it can be done. What I have so far is an application that works with local cameras, and I also have an example of a web browser in LabVIEW. I thought I could use the web browser and a local variable from the browser to get the image, but this can't be wired to the grab or snap, because it's not an image. Can someone please tell me how to convert the browser output into a video feed, so I can use it in my application?

    Crop the image out of a print screen. I imagine your screen will be a much larger resolution than the camera, and the feed will only take up a portion of your browser window.
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

  • How can i compare two color images in vision builder for AI?

    What I want to do is compare two images. I have a base color image that represents the desired colors and tones, and another image to be compared to the base image. I want to compare these two images to know how close they are in terms of color and tone content. In other words, I want to know how close image 2 is to image 1 (the base sample)....
    I would like to know how to get the content of certain colors in an image and then compare these values with the same values from another image.
    For example, I have two sheets of paper that contain various mixed colors; I want to know the amount of green, red and blue in each image and then compare these values.
    What I want to do is compare different samples of fabric. These fabrics must be of a specified color, but due to the process they may vary in tone or even color, so I want to compare the fabrics to a master sample to see how close they are in color and tone.
    Anything would help, since I don't have experience with this type of comparison. Thanks!

    VBAI allows you to work with grayscale images only. You can acquire an image, use the Vision Assistant to convert it to grayscale by extracting the luminance plane (or any of the other color planes) and then analyze the resulting grayscale image. To do what you are talking about, though, it would really be better to get Vision for LabVIEW. You could then take color images, compare color planes, use statistical functions to determine average color values, and so on.
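    The average-colour comparison suggested above is simple once you have the pixel data. A pure-Python sketch (illustrative; a real fabric check would likely compare per-plane histograms as well, not just means):

```python
def mean_rgb(image):
    """Average (R, G, B) over an image given as nested lists of tuples."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def color_distance(img_a, img_b):
    """Euclidean distance between the mean colours of two images:
    0 means identical average colour; larger means further apart."""
    ma, mb = mean_rgb(img_a), mean_rgb(img_b)
    return sum((a - b) ** 2 for a, b in zip(ma, mb)) ** 0.5

master = [[(200, 40, 40), (210, 50, 50)]]   # reference fabric sample
sample = [[(190, 45, 40), (205, 55, 50)]]   # fabric under test
print(round(color_distance(master, sample), 2))   # -> 9.01
```

    A threshold on this distance then gives the pass/fail decision: accept the fabric if its mean colour is within some tolerance of the master sample.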

  • Pattern matching using Vision Assistant only detects the template created during configuration

    Hi all,
    I am using Vision Assistant to do pattern matching. It is able to find the pattern with the template created in Vision Assistant.
    If I load some other pattern, it does not detect it.
    I have attached my VI file.
    Attachments:
    PM_light.vi 266 KB


  • So I want to use an imaging device with Vision Assistant then incorporate that into my LabView 8.6 code. How can I do this?

    So here's what I have so far: a LabVIEW program to interface with my microscope stage and autofocus. I now want to try to interface the camera using Vision Assistant.
    The procedure I want the program to perform is taking these CR-39 detectors, placing them on the stage, and having the camera take images of the detector frame by frame. So I want the program to go to the very top corner of the detector on the stage. Then the camera would autofocus (if needed) and take an image. It would take an image for every micron of the detector on that row, then move to the next row, and so on, until it has successfully imaged the whole detector. Is anyone well versed in Vision Assistant and willing to give some advice on this? It would be greatly appreciated.
    The purpose is to eliminate the previous method of acquiring images by manually focusing on the tracks created by the particles. I want this program to run itself without a user being present, so just pressing a button would send the program into autopilot and it would perform the procedure.
    What are CR-39 detectors?
    Allyl Diglycol Carbonate (ADC) is a plastic polymer. When ionizing radiation passes through the CR-39 it breaks the chemical bonds of the polymer and creates a latent damage trail along the trajectory of the particle. These trails are then etched using a chemical solution to make the tracks larger, so they can finally be seen using an optical microscope.
    Thank you,
    HumanCondition

    Hi Justin,
    Yes I already have a vi that controls the stage.
    So the camera is stationary above the stage. What I want to do is get the camera to take pictures of the detector by moving the stage frame by frame: for example, starting from the top right corner of the detector (taking an image), ending at the top left corner of the detector (taking an image), then having the stage move down a row and starting the process over again until each frame (or piece) has been imaged.
    My goal is, while the stage is moving to every little frame of the detector, for it to autofocus if necessary and take an image for each frame of the detector.
    Using the autofocus requires moving the stage up and down.
    HumanCondition
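    The frame-by-frame coverage described above is a serpentine (boustrophedon) raster over the detector. A Python sketch of the stage-position generator (illustrative; the actual moves would go through your existing stage-control VI):

```python
def serpentine_scan(rows, cols):
    """Yield (row, col) stage positions covering a rows x cols grid,
    reversing direction on each row so the stage never jumps back."""
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield (r, c)

# At each position: move the stage, autofocus if needed, snap an image.
positions = list(serpentine_scan(2, 3))
print(positions)   # -> [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```

    Reversing direction on alternate rows halves the dead travel time compared with returning to the same edge for every row.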

  • No Image - 1473R With Vision Assistant

    Hopefully this is a simple fix and I'm missing something very obvious, so here is what's up. I'm originally a text programmer, but for this project I'm stuck using LabVIEW, which is completely unfamiliar; I've been working on this for days with no progress, so I thought I'd see if anyone had some pointers. The goal of the project is to use the PCIe-1473R FPGA to do live gain control, overlay, and maybe some recognition.
    I started with the "Image Processing with Vision Assistant on FPGA" example and made a few simple changes to just attempt to get a video feed through. The camera we are using is a Pulnix TM 1325 CL, which outputs a 1 tap/10 bit Camera Link signal. Since this example VI is originally configured for 1 tap/8 bit, I changed the incoming pixel to be read as 1 tap/10 bit, then compiled and tested. When I try to start video acquisition I get no errors, but no frames are grabbed. The acquired-frames count does not increase and nothing is displayed. If I switch to line scan I get a scanning static image, but this is not a line-scan camera, and my other NI frame grabber card shows an image from the camera fine.
    I wasn't all that surprised by this result, as the input is 10 bit and the acquisition FIFO and DMA FIFO are both 8 bit originally. So I changed them to U16, and also changed the IMAQ FPGA FIFO to Pixel Bus and IMAQ FPGA Pixel Bus to FIFO blocks on either side of the Vision Assistant to U16. With this configuration, I again get no image at all; same results. I suspect this is because the incoming image is signed, so the types should be I16 instead. However, there is no setting for I16 in the Pixel Bus conversion methods. Am I misunderstanding the types at play here, or is there an alternative method for using Vision Assistant with signed images? I'd think it'd be odd not to have support for signed input.
    Anyway, I've tried all the different combinations of settings I can think of. Does anyone have any input? I feel like it must be either a buffer-size problem or a signing problem, but I don't really know. Any and all input is welcome!
    Thanks for helping out a new guy,
    Kidron

    I ended up resolving this issue by switching cameras. The end goal was to use a FLIR SC6100, so I switched to it and was able to get things working shortly after. The FLIR does use unsigned ints, so I believe that's what resolved the issue, for anyone running into this in the future.
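    For anyone hitting this later, the signed/unsigned distinction matters because the same 16 bits decode differently. A quick Python illustration of why reading unsigned camera data as signed corrupts bright pixels (toy numbers, not the 1473R's actual pixel pipeline):

```python
import struct

def as_u16(raw):
    return struct.unpack("<H", raw)[0]   # interpret as unsigned 16-bit

def as_i16(raw):
    return struct.unpack("<h", raw)[0]   # interpret as signed 16-bit

# A 10-bit pixel left-justified into a 16-bit container (value << 6),
# as some frame grabbers deliver it.
pixel_10bit = 600                 # a bright 10-bit value (max is 1023)
raw = struct.pack("<H", pixel_10bit << 6)

print(as_u16(raw) >> 6)   # -> 600: correct when read as unsigned
print(as_i16(raw))        # negative: the high bit looks like a sign bit
```

    So a bright unsigned pixel whose top bit is set turns into a large negative number under a signed interpretation, which is exactly the kind of garbage image this thread describes.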

  • Maximum number of Select Images/Vision Assistants

    Hardware:         CVS 1454
    Two cameras:   AVT Guppy F036C
    MAX Settings   
            680x480 Y (Mono 8) 15fps
            Speed: 400MBs
            Packet Size: 680bytes 
            Gamma Off
            Bayer Pattern: BGBG_GRGR
    Connection:      Using two firewire cables directly to the CVS
    Software:          Vision Builder 2010 and Vision Builder 2009
    I recently started to create an inspection in VB2010. When I pressed the "Run Inspection Once" button, it would lose connection with the CVS. I watched the free memory in MAX, and it went from 32 MB to 3 MB before it would lose connection. I simplified my inspection to use fewer states and eliminated most of the variables in an attempt to use less memory. I put the camera settings as low as possible in MAX. Eventually I got it to stop losing connection to the CVS.
    Now that I could go through the states, after one iteration I saw that one or both of the Acquire Images steps would fail, especially if I went into the Inspection Interface and back to the Configuration Interface. I figured it was a memory limitation, since the CVS 1454 only has 128 MB, 96 of which is occupied by VB2010. I got another CVS, formatted it, and installed VB2009, since it had 44 MB of free memory.
    In VB2009 I put all of the test steps into one long state. The program has:
           two acquire steps in one state
           two states for creating ROIs (one state for each image), for a total of 2 Select Images
           a test state which has two Select Image steps, one Vision Assistant, along with other test steps
    If I add one more Select Image or one more Vision Assistant, those steps will simply fail to do anything. The inspection manages to finish running, but anything involving that final Select Image/Vision Assistant will not work.
    Has anyone else experienced a limitation on the number of Select Image or Vision Assistant steps they can use? I have a working inspection now, but I need to be aware of the limitations as I try to make more robust inspections.
    Is it a problem with the amount of user space on the CVS 1454, or is there just a limited number of pointers in the VB code that limits the number of references that can be made?

    My bet is you're just too close to the memory limit on that device. A quick test would be to just acquire images from your camera(s) without any Vision Assistant/processing steps and make sure this works for long periods of time without problems. Use the same cameras and triggering settings to make it as close to your real inspection as possible. The Vision Assistant steps make a copy of the image (as does Threshold). You can use the Select Image step to see which steps create a copy of the image. If you are really dead set on the CVS, you could try replacing the steps that make a copy of the image with a Run LabVIEW VI step that implements the same functionality, and minimize the copies you make in the Run LV code. This should save you your image size times the number of processing steps you can replace with Run LabVIEW. You're kind of at the edge of what the HW can do, so I wouldn't normally recommend this, but if you're desperate and don't want to upgrade the HW, you could try using Run LV to do several of the steps in VBAI that are making copies of the image.
    Hope this helps,
    Brad

  • Images in Vision Assistant are distorted from a FLIR thermo camera

    I'm trying to view my FLIR A315 in Vision Assistant and MAX (not at the same time), but the images keep coming up distorted. I can tell there is data being sent, because the histogram shows info, and if I mess around with the attributes I can clear the image up some, but it never gets as clear as the FLIR software. I'm sure I'm missing something simple, but this post seemed a good place to start. Thanks
    Attachments:
    Image in Vision Assistant.png 81 KB

    Hi Recordpro,
    It could be your pixel format settings. Open Measurement and Automation Explorer and select your camera. Then click the Acquisition Attributes tab at the bottom and change your pixel format. If that does not work, here are some good documents on GigE cameras.
    Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part I
    http://www.ni.com/white-paper/5651/en
    Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part II
    http://www.ni.com/white-paper/5750/en
    Troubleshooting GigE Vision Cameras
    http://www.ni.com/white-paper/5846/en
    Tim O
    Applications Engineer
    National Instruments
