Text detection in Vision Assistant

Hi all. I have LabVIEW 11 installed along with Vision Assistant. I found that OCR can be trained to read a string, but is it limited to one string per program? Even though I trained the program with templates for all 26 letters, it only recognizes the original training string.
Please guide me in developing an OCR program that can read any string I show it. Thanks in advance.

Hi Recordpro,
It could be your pixel format settings. Open Measurement and Automation Explorer and select your camera. Then click the Acquisition Attributes tab at the bottom and change your pixel format. If that does not work, here are some good documents on GigE cameras.
Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part I
http://www.ni.com/white-paper/5651/en
Acquiring from GigE Vision Cameras with Vision Acquisition Software - Part II
http://www.ni.com/white-paper/5750/en
Troubleshooting GigE Vision Cameras
http://www.ni.com/white-paper/5846/en
Tim O
Applications Engineer
National Instruments

Similar Messages

  • How to train two-line text in NI Vision Assistant?

    Hi guys,
    I have managed to train a single line of text in OCR (NI Vision Assistant), as attached. However, I don't know how to train two (or more) lines of text, for example the car's plate number below:
    WDP
    3194            - as attached 
    I want my script (.scr) file to be able to read any type of line/string, whether it's single, double, triple, etc.
    Can anyone show me how to do it?
    Thank you in advance
    Suliadi

    Starting with Vision Development Module 2013, OCR supports multi-line reading. You can create a single region of interest that encompasses all of the lines you need to read, specify the expected number of lines, and it will output the text for the best lines. If you are using an earlier version, you will need a separate region of interest for each line of text. If you don't know how many lines you will have, one method is to use edge detection to determine whether text is present.
    You can learn more about the multi-line support in this white paper, and find even more information in the Vision Concepts Manual.
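    For readers outside LabVIEW, the per-line idea can be sketched in a few lines of Python. This is only an illustration, with OpenCV and pytesseract standing in for NI's OCR engine; the file name and the minimum line height are hypothetical. It splits the image into line bands with a horizontal projection, then runs OCR once per band, which is essentially what one ROI per line achieves:

        # Illustrative stand-in for per-line OCR ROIs (not NI code).
        import cv2
        import numpy as np
        import pytesseract

        def read_lines(image_path, min_line_height=5):
            gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            # Binarize so text pixels are white (255) for the projection.
            _, binary = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            # Rows containing any text pixels form contiguous line bands.
            row_has_text = binary.sum(axis=1) > 0
            bands, start = [], None
            for y, has_text in enumerate(row_has_text):
                if has_text and start is None:
                    start = y
                elif not has_text and start is not None:
                    if y - start >= min_line_height:
                        bands.append((start, y))
                    start = None
            if start is not None:
                bands.append((start, len(row_has_text)))
            # One OCR call per detected band -- one "ROI" per line.
            return [pytesseract.image_to_string(gray[y0:y1, :]).strip()
                    for (y0, y1) in bands]

        print(read_lines("plate.png"))  # e.g. ['WDP', '3194']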

  • Vision Assistant 2014 (and its DLL inside LabVIEW) repeatedly crashes while reading multiline OCR

    I am working on an OCR application. With certain files (not all), the multiline function simply crashes, no questions asked. I set the ROI to the targeted text, and without fail it causes a crash.
    Is anyone aware of this problem, and if so, is there a solution?
    Additional information: my .abc file was created from scratch with default settings and trained characters; nothing in it should be causing the crash. The images are simple scanned JPG files.
    Edit: I tested it. The crash still happens without any character set loaded and with no changes to threshold, size, or read options.

    I cut out an offending section of one of the files. It is attached.  To replicate the error:
    1. Load image
    2. Identification -> OCR/OCV
    3. New Character Set File...
    4. Select entire text.
    NI Vision Assistant 2014, 64 bit. 
    Edit: Posted this before I saw your request. On the full-sized image, it always crashes, even when called from inside LabVIEW (the DLL crashes then). That is how I noticed the bug: my program kept crashing. For this image, it seems to only crash inside the training assistant, but again reliably.
    Attachments:
    crash.png ‏731 KB

  • Pattern matching in Vision Assistant only detects the template created during configuration

    Hi all,
    I am using Vision Assistant to do pattern matching. It is able to find the pattern using the template created in Vision Assistant, but if I load some other pattern, it is not detected.
    I have attached my VI file.
    Attachments:
    PM_light.vi ‏266 KB

  • Integrating the code generated by NI Vision Assistant into my program written in LabWindows/CVI

    Hello everyone,
    As part of a project, I have to do computer vision using the National Instruments (NI) software "Vision Assistant". The assistant functions I want to use are simply "Histogram", "Threshold", and "Circle Detection".
    My problem is that I would like to truly integrate the live image from my camera, the selection tools (buttons) on the image, and the results obtained after processing (which the Assistant provides) into my existing program (on the program's UIR)!
    I do not understand (I did read the "Vision Assistant to CVI" tutorial, without success) how to use the code generated by the assistant in CVI to display what I need (the image from my camera and the assistant's selection buttons) in my already-written program.
    Moreover, if I compile the code the assistant generates directly (on its own), there are errors related to the <nimachinevision.h> library ("redeclaration of variables..."), a library I therefore have to remove from the generated code for it to compile correctly!
    If any of you have been through this before, your help would be most welcome!
    Thank you very much!

    Hello Pooty,
    Could you tell us which tutorial you followed?
    Thanks in advance.
    Mathieu_T
    Certified LabVIEW Developer
    Certified TestStand Developer
    National Instruments France

  • Modifying VI - Vision Assistant

    I have images that consist of an area of particles and an area of no particles. I am trying to fit a circle to the edge between the regions where there are and are not particles. I want to use the find edge tool to find the pixel where this transition takes place for every row of pixels: for example, draw a horizontal line that gives me the location of the edge, then move down one row and repeat. I have tried using the find circle edge tool, but since I am trying to fit a circle to an edge that isn't well defined, I need many more data points to average over. I figure there is a way to modify the VI to perform the process I described above. Any help would be much appreciated. I have attached the images to give you a better idea of what I'm trying to do.
    Attachments:
    half circle.JPG ‏554 KB

    Hi Windom,
    If the find circle edge tool is not working for you, I would suggest thresholding the image. Then you could use the morphology functions (such as Close, Fill Holes, and Erode) to further strengthen the edge between the areas with and without particles.
    You can use a For Loop (initialized to start looking at the top of the picture) and have it step vertically down the picture with the Edge Detector. You can do that by changing the ROI descriptor of the line you are detecting edges along, and then reading the Edge Information out of the VI. These all need to be checked in the "Select Controls" menu, found at the bottom right of the Vision Assistant window. A rough sketch of the idea follows.
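    Not NI code, but the steps above translate roughly to the following Python/OpenCV sketch (file name, kernel size, and thresholds are hypothetical); the circle fit is an ordinary least-squares (Kasa) fit over the per-row edge points:

        import cv2
        import numpy as np

        img = cv2.imread("half_circle.jpg", cv2.IMREAD_GRAYSCALE)
        # Threshold, then close small gaps so each row has one clean transition.
        _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        # One edge point per row: the first column where the mask flips.
        points = []
        for y in range(mask.shape[0]):
            flips = np.flatnonzero(np.diff(mask[y].astype(np.int16)) != 0)
            if flips.size:
                points.append((float(flips[0]), float(y)))
        pts = np.array(points)

        # Kasa fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
        A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
        b = (pts ** 2).sum(axis=1)
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        print(f"center=({cx:.1f}, {cy:.1f}), radius={r:.1f}")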
    I hope this helps, let me know if you need any further clarification.
    Best Regards,
    Nathan B
    Applications Engineer
    National Instruments
    Attachments:
    SelectROI.JPG ‏12 KB
    Edge Info.JPG ‏10 KB
    SelectControls.JPG ‏14 KB

  • Problem with advanced threshold in OCR - Vision Assistant 2013

    I'm facing a problem with Vision Assistant 2013.
    In the OCR character set file, the advanced threshold upper value is always fixed at 255, and "optimize for speed" is checked.
    I edit them and reopen the file, but nothing changes.
    Is anyone facing the same problem?
    Attachments:
    Untitled.png ‏7 KB

    Hi Paolo,
    Thanks for your answer. Yeah, I have seen the examples and I'm familiar with the use of the OCR VI; I have used it a couple of times already with good results.
    My problem came last week, while I was trying to run OCR on my image in my LabVIEW code: the algorithm did not detect what I was expecting.
    I decided to run the test in the Vision Assistant and it worked perfectly. I assumed my code had different parameters, so I decided to generate a VI from the Vision Assistant and run it in LabVIEW on the same image to verify.
    I did not change anything in the VI (all parameters are the same) and used the same image. Surprisingly, the results are different between the assistant and the VI. That strikes me a lot!
    I'll start a new thread as you recommended. Hope to find the solution soon. Thanks
    Regards,
    Esteban

  • Profile Performance in LabVIEW vs. Performance Meter in Vision Assistant: they don't match

    Hi everyone,
    I ran into a strange problem with the performance timing between these two measurements.
    Here is my test:
    - I used the built-in Bracket example provided with Vision Assistant. It uses two pattern matches, one edge detection algorithm, and two calipers (one for calculating a midpoint and the other for finding the angle between three points).
    - When I ran the script provided by NI in Vision Assistant, the average inspection time was 12.45 ms (it varies between 12 and 13 ms; my guess is this small variation is due to CPU/processing load).
    - Then I converted the script to a VI and used Profile Performance in LabVIEW, and surprisingly it shows far more than expected, almost ~300 ms (at first I thought it was because of rotated search etc., but none of that makes sense to me here).
    Now my questions are:
    - Are the algorithms used in both tools the same? (I thought they were.)
    - IMAQ Read Image and Vision Info takes more than 100 ms in LabVIEW, which is not counted in Vision Assistant. Why? (I thought the template image might be loaded into cache; am I right?)
    - What about IMAQ Read File? (Not counted in Vision Assistant? In LabVIEW it takes around 15 ms.)
    - The same goes for pattern matching: in Vision Assistant it takes around 3 ms (also not consistent), while in LabVIEW it takes almost three times as long (around 15 ms).
    - Is this a bug, am I missing something, or is this expected behavior?
    Please find attachments below.
    - Vision Assistant: v12, Build 20120605072143
    - LabVIEW: 12.0f3
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved, and help the author by giving kudos.
    Certified LabVIEW Associate Developer (CLAD), using LV13
    Attachments:
    Performance_test.zip ‏546 KB

    Hmm Bruce, thanks again for the reply.
    When I first read your reply, I was OK with it, but after reading it multiple times I realized that you didn't check my code and explanation first. I have added the code and screenshots of the profile in both VA and LabVIEW.
    In both Vision Assistant and LabVIEW:
    - I am loading the image only once. It is accounted for in LabVIEW but not in VA, because it is already in the cache. But what about the time to put the image into the cache? I do understand that when capturing the image live from a camera, things are completely different.
    - Loading the template image multiple times?? This is where I was very confused, because I hadn't even thought of it. I am well aware of that task.
    - Run Setup Match Pattern once? Sorry, so far I haven't seen any example that does pattern matching on multiple images and runs Setup Match Pattern every time. But the time is negligible, so I wouldn't mind.
    - Loading images for processing and loading a different template for each image? You are completely mistaken here, and I don't see how it relates to my specific question.
    Briefly explaining it again:
    - I open an image in both LabVIEW and VA.
    - I create two pattern match steps and calipers (negligible).
    - The pattern match step in VA shows a longest time of 4.65 ms, whereas IMAQ Match Pattern shows 15.6 ms.
    - I am convinced about the IMAQ Read and Vision Info timing, because it only counts in the initial phase when running a multiple-image inspection. But I am running it only once, so Vision Assistant should show that time too, shouldn't it?
    - I do understand that LabVIEW has many more features for parallel execution and other things than Vision Assistant.
    - About the 100 ms versus 10 ms, I completely agree. I take the Vision Assistant profile timing as the ideal values (correct me if I am wrong).
    - I especially like your last line: you cannot compare the speeds of the two methods.
    Please let me know if I am thinking about this in a completely wrong way, or whether at least something is on the right path.
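    A minimal sketch of the caching effect being debated, in generic Python/OpenCV terms rather than NI code (file names are hypothetical): the one-time file I/O and a cache-cold warm-up call are timed separately from the warm processing step, which is roughly the quantity Vision Assistant's performance meter reports, while Profile Performance charges everything to the VI.

        import time
        import cv2

        t0 = time.perf_counter()
        image = cv2.imread("bracket.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
        load_ms = (time.perf_counter() - t0) * 1000   # one-time I/O cost

        cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)  # cache-warming call

        t0 = time.perf_counter()
        result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        match_ms = (time.perf_counter() - t0) * 1000  # warm processing time only

        print(f"load: {load_ms:.1f} ms, match only: {match_ms:.1f} ms")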
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved, and help the author by giving kudos.
    Certified LabVIEW Associate Developer (CLAD), using LV13

  • No Image - 1473R With Vision Assistant

    Hopefully this is a simple fix and I'm missing something very obvious, so here is what's up. I'm originally a text-language programmer, but for this project I'm stuck using LabVIEW, which is completely unfamiliar; I've been working on this for days with no progress, so I thought I'd see if anyone had some pointers. The goal of the project is to use the PCIe-1473R FPGA to do live gain control, overlay, and maybe some recognition.
    I started with the "Image Processing with Vision Assistant on FPGA" example and made a few simple changes to attempt to get a video feed through. The camera we are using is a Pulnix TM 1325 CL, which outputs a 1 tap/10 bit Camera Link signal. Since this example VI is originally configured for 1 tap/8 bit, I changed the incoming pixel to be read as 1 tap/10 bit, compiled, and tested. When I try to start video acquisition I get no errors, but no frames are grabbed: the acquired frame count does not increase and nothing is displayed. If I switch to line scan I get a scanning static image, but this is not a line scan camera, and my other NI frame grabber card shows an image from this camera fine.
    I wasn't all that surprised by this result, as the input is 10 bit and the acquisition FIFO and DMA FIFO are both originally 8 bit. So I changed them to U16, and also changed the IMAQ FPGA FIFO to Pixel Bus and IMAQ FPGA Pixel Bus to FIFO blocks on either side of the Vision Assistant to U16. With this configuration I again get no image at all; same results. I suspect this is because the incoming image is signed, so the types should be I16 instead. However, there is no setting for I16 in the Pixel Bus conversion methods. Am I misunderstanding the types involved here, or is there an alternative method for using Vision Assistant with signed images? I'd think it would be odd not to have support for signed input.
    Anyway, I've tried all the different combinations of settings I can think of. Does anyone have any input? I feel like it must be either a buffer size problem or a signedness problem, but I don't really know. Any and all input is welcome!
    Thanks for helping out a new guy,
    Kidron

    I ended up resolving this issue by switching cameras. The end goal was to use a FLIR SC6100, so I switched to it and was able to get things working shortly afterwards. The FLIR does use unsigned ints, so I believe that is what resolved the issue, for anyone running into this in the future.
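    A small illustration of the suspected signedness mismatch in plain Python/NumPy (nothing NI-specific; the sample values are made up): the same 10-bit pattern decodes very differently as unsigned versus two's-complement, and offset-shifting signed samples into the unsigned range is one conventional workaround when a pipeline only offers unsigned types.

        import numpy as np

        raw = np.array([0b1111111111, 0b1000000000, 0b0000000001], dtype=np.uint16)

        as_unsigned = raw.copy()                     # 1023, 512, 1
        # Two's-complement decode of a 10-bit field: subtract 1024 if bit 9 is set.
        as_signed = raw.astype(np.int16)
        as_signed[as_signed >= 512] -= 1024          # -1, -512, 1
        # Shift signed samples into an unsigned 0..1023 pixel-bus range.
        offset_binary = (as_signed + 512).astype(np.uint16)  # 511, 0, 513

        print(as_unsigned, as_signed, offset_binary)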

  • Vision Assistant in a while loop

    Hello gentlemen, I'm quite new to LabVIEW and Vision Assistant. My goal is to detect a car plate in a color image. I successfully find the ROI by using the two blue rectangles at the edges of Italian car plates, and that works. But then I have to OCR the actual plate; I can do it with success, but some pictures require a certain threshold, others another... So I thought I could run the Vision Assistant inside a while loop to try different thresholds, and when I get a 'valid' plate (7 digits) I simply quit the loop. The parameters are defined by clusters, but it looks like it always uses the FIRST value, regardless of the changing threshold. The index goes from 0 to 20, and my intention is to have the threshold range over 0..100, 0..105, 0..110 and so on. Can you spot something wrong in the attached VI image? Thanks a lot for any help! Mauro Cucco
    Attachments:
    vi_ocr.png ‏31 KB

    Hi Octopus,
    I believe this problem originates in the way Vision Assistant handles image references.
    In normal LabVIEW programming, every time you have a wire branch or a subVI, a new copy of the data is created. For images this is not the case, because an image often consumes a lot of memory; so for an image, LabVIEW edits the image in memory instead of creating a copy.
    In this case your algorithm works the first time, but the second time it will try to do the OCR on the last result of the OCR rather than on the original image.
    To avoid this you need to use the IMAQ Copy function to create a copy of the original image before sending it to the OCR function; a sketch of the idea follows.
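    In Python/OpenCV terms the fix looks like the sketch below; this is only a stand-in for the LabVIEW diagram (pytesseract plays the role of the OCR step, and the file name, threshold sweep, and 7-character validity check mirror the original post's description):

        import cv2
        import pytesseract

        source = cv2.imread("plate_roi.png", cv2.IMREAD_GRAYSCALE)

        plate = None
        for upper in range(100, 205, 5):      # thresholds 0..100, 0..105, ... 0..200
            work = source.copy()              # the "IMAQ Copy" step: start from the original
            # Keep the 0..upper band: pixels <= upper become white.
            _, binary = cv2.threshold(work, upper, 255, cv2.THRESH_BINARY_INV)
            text = pytesseract.image_to_string(binary).strip()
            if len(text) == 7:                # a 'valid' plate has 7 characters
                plate = text
                break

        print(plate)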
    Best Regards
    Anders Rohde
    Applications Engineer
    National Instruments Denmark

  • Why might the results differ between OCR in the Vision Assistant and OCR in a VI generated from the Vision Assistant?

    Hi everybody,
    I had a problem while trying to run OCR on my image in my LabVIEW code: the algorithm did not detect what I was expecting in the image.
    I decided to run the test in the Vision Assistant, and it worked perfectly: I got the expected results.
    I assumed that my code had different parameters, so I decided to generate a VI from the Vision Assistant and run it in LabVIEW on the same image, just to verify. I did not change anything in the VI (all parameters are the same), and I used the same image. Surprisingly, the results are different between the assistant and the VI. That strikes me a lot! I have checked all possible configurable parameters and they have the same values as in the Vision Assistant. Does anybody have an idea what could be the reason for this behaviour? Thanks
    Regards,
    Esteban

    Esteban,
    do you use the same images for OCR, or do you take a new snapshot before running the OCR?
    What steps of debugging did you use in LV?
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Why might the results differ between OCR in the Vision Assistant and a VI generated from the Vision Assistant?

    Hi everybody,
    I had a problem while trying to run OCR on my image in my LabVIEW code: the algorithm did not detect what I was expecting in the image.
    I decided to run the test in the Vision Assistant, and it worked perfectly: I got the expected results.
    I assumed that my code had different parameters, so I decided to generate a VI from the Vision Assistant and run it in LabVIEW on the same image, just to verify. I did not change anything in the VI (all parameters are the same), and I used the same image. Surprisingly, the results are different between the assistant and the VI. That strikes me a lot! I have checked all possible configurable parameters and they have the same values as in the Vision Assistant. Does anybody have an idea what could be the reason for this behaviour? Thanks
    Regards,
    Esteban

    Hi Peter, 
    Strange. It runs here. Attached is the screenshot you requested.
    Regards,
    Esteban
    Attachments:
    Capture.png ‏135 KB

  • Vision Assistant Express VI Output Signal

    Hello.
    I am using LabVIEW to try to detect the presence of a part with a USB camera. I have set up a pattern match inspection in Vision Assistant and want to output a Boolean signal when a pattern is matched.
    Thank you
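    One way to derive such a Boolean, sketched here with OpenCV's template matcher as a stand-in for the Express VI's match results (the 0.8 minimum score and the file names are illustrative): take the best match score in the frame and compare it against a threshold.

        import cv2

        frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(scores)

        part_present = best_score >= 0.8   # the Boolean "pattern matched" signal
        print(part_present, best_score, best_loc)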

    Thank you for the reply, it was really helpful! I have one more question, still relating to the Vision Assistant Express VI. I'm using the edge detection, which has a threshold value to output a numeric 1 or 0 depending on whether a part passes the region of interest. I'm using a numeric compare function to drive a Boolean indicator high or low. I want to add to this by counting the number of parts that pass through the area of interest, but I'm having a difficult time working out how to do this.
    I would like a variable that increments every time the indicator is pulsed, but I haven't been able to find a way to do this. I've also looked into the possibility of using a shift register; a sketch of that idea follows.
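    In text-language terms, the shift-register idea is a one-sample memory with a rising-edge test (illustrative Python, not LabVIEW): remember the previous Boolean and count only False-to-True transitions, so each part produces exactly one count no matter how long the pulse lasts.

        def count_rising_edges(samples):
            count = 0
            previous = False            # the shift register's initial value
            for present in samples:
                if present and not previous:
                    count += 1          # rising edge: a new part entered the ROI
                previous = present      # fed back for the next iteration
            return count

        # One count per part, even when the detector stays high for several frames.
        print(count_rising_edges([False, True, True, False, False, True, False]))  # 2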
    Thanks

  • Vision Assistant missing .llb (ver 8.6)

    Attempting to build a VI using Vision Assistant 8.6, also using the Shape Detection tool. When building the VI, it looks for Vision Assistant Utils.llb (IVA Store Shape Circles Results.vi). I cannot locate it anywhere on my hard drive.

    Hi MSE,
    From reading this thread, it appears that the original issue faced by dre99gsx was resolved by reinstalling Vision Acquisition Software. I take it you are experiencing a similar problem: when creating an application using Application Builder, you run into this missing dependency? Are you using both Vision Development Module and Vision Acquisition Software? Are both of these activated?

  • How do you track a moving object using LabVIEW and Vision Assistant?

    I am using Vision and LabVIEW to create a program that tracks and follows a moving object using a high-end camera. Basically, it detects a foreign object, locks on to it, and follows it wherever it goes in a controlled-size room.
    I have no idea how to do this. Please help. Or is there an available example?
    Thanks.

    Hello,
    It sounds like you want to look into a vision technique called pattern matching. Using our Vision tools, you can look for an image, called a template, within another image. Vision will scan over the entire image of interest to see if there are any matches with the template, and it will return the number of matches and their coordinates within the image of interest. You would take a picture of the object and use it as the template to search for. Then, take a picture of the entire room and use pattern matching to determine at what coordinates the template is found in the picture. Doing this multiple times, you can track the movement of the object as it moves throughout the room. If you have a motion system that has to move the camera for you, it complicates matters considerably, but it is still possible: you would need a feedback loop that adjusts the angle of the camera depending on where the object is located.
    There are a number of examples that perform pattern matching. Three are available in the Example Finder: in LabVIEW, navigate to "Help » Find Examples". On the "Browse" tab, browse according to "Directory Structure" and navigate to "Vision » 2. Functions". There are examples for "Pattern Matching", "Color Pattern Matching", and "Geometric Matching". There are also dozens of pattern matching documents and example programs on our website: from the homepage at www.ni.com, you can search the entire site for the keywords "pattern matching".
    If you have Vision Assistant, you can use it to set up the pattern matching sequence. When it is complete and customized to your liking, you can convert it into LabVIEW code by navigating to "Tools » Create LabVIEW VI...". This is probably the easiest way to customize any type of vision application in general. A rough sketch of the overall loop follows.
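    For readers who want the shape of that loop outside LabVIEW, here is a minimal Python/OpenCV sketch (an illustration, not the NI implementation; the file names and the 0.7 acceptance score are hypothetical): match the template in every frame and record where it was found.

        import cv2

        template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)
        capture = cv2.VideoCapture("room.avi")

        trajectory = []
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, top_left = cv2.minMaxLoc(scores)
            if score >= 0.7:                 # accept only confident matches
                trajectory.append(top_left)  # (x, y) of the match in this frame

        capture.release()
        print(trajectory)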
    I hope this helps you get started.  Take care and good luck!
    Regards, Aaron B.
    Applications Engineering
    National Instruments
