Vision assistant problem

Hi guys,
I am having a problem with Vision Assistant. When I run the script within Vision Assistant, it locates all of my patterns and finds the template file paths, and all is good. However, when I run my application, the Vision Assistant step fails to locate the patterns or the template file paths. Any help with why this is happening would be greatly appreciated.
Regards,
Damien
Solved!
Go to Solution.

Hi,
Thank you for replying. Yes, I created input paths and I believe the paths are correct. The strange thing is that I have a number of inspections that work using Vision Assistant, but this one inspection won't work. I have attached a screenshot of the VI with file paths etc. It is the same as other inspections that work, so I am lost as to why this particular VI won't locate the templates/patterns.
Damien
Attachments:
Doc1.docx ‏227 KB

Similar Messages

  • Problem with advanced threshold in OCR - Vision Assistant 2013

    I'm facing a problem with Vision Assistant 2013.
    In the OCR character set file, the advanced threshold data is always fixed at an upper value of 255, and "optimize for speed" is checked.
    I edit them and reopen the file, but nothing changes.
    Is anyone facing the same problem?
    Attachments:
    Untitled.png ‏7 KB

    Hi Paolo,
    Thanks for your answer. Yes, I have seen the examples and I'm familiar with the use of the OCR VI. I have used it a couple of times already with good results.
    My problem came last week, while I was trying to run OCR on my image in my LabVIEW code. The algorithm did not detect what I was expecting.
    I decided to run the test in Vision Assistant and it worked perfectly. I assumed my code had different parameters, so I generated a VI from Vision Assistant and ran it in LabVIEW on the same image to verify.
    I did not change anything in the VI (all parameters are the same) and used the same image. Surprisingly, the results differ between the assistant and the VI. That strikes me a lot!
    I'll start a new thread as you recommended. Hope to find the solution soon. Thanks.
    Regards,
    Esteban

  • Having a problem with Vision Assistant!

    I turn my picture into an image reference and use the assistant to modify it,
    but the result isn't what I see in the assistant.
    Can anyone help me?
    Attachments:
    vision test.vi ‏47 KB

    Maybe your Vision installation on your PC has a problem.
    Try uninstalling and reinstalling the NI software,
    or install it on another clean PC.
    Thanks

  • LabVIEW Vision Assistant Histogram Problem

    Good Morning,
    Unfortunately, I am not yet very familiar with the Vision package for LabVIEW. I want to load a picture and read out the black, white, and gray values of a section of the picture. I can do this with the Vision Assistant's Histogram feature, but when I build the VI with Vision Assistant, nothing comes out of the Histogram output. I have checked the Histogram output wire, but nothing comes out of it. What is my mistake?
    Thanks a lot
    -motecpam
    LabVIEW Vision 2011 Service Pack 1
    with Vision Development Module
    Attachments:
    Screenshot_VI.jpg ‏230 KB
    Screenshot_Vision_Assistant.jpg ‏313 KB

    Hello Tropper,
    thank you very much for the answer.
    Unfortunately, I still get no output at the histogram.
    The histogram is displayed empty.
    I built it exactly as you showed.
    Attachments:
    Unbenannt.jpg ‏117 KB
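For reference, the Histogram step only counts how many pixels fall at each gray level, so the wired Histogram output should be an array of counts. A rough NumPy sketch of what that output represents (an illustrative analogue only, not the IMAQ API):

```python
import numpy as np

def grayscale_histogram(image, levels=256):
    """Count how many pixels fall into each gray level (0..levels-1)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    return hist

# Example: a tiny 8-bit image with two black, three mid-gray, one white pixel
img = np.array([[0, 0, 255],
                [128, 128, 128]], dtype=np.uint8)
counts = grayscale_histogram(img)  # counts[128] is 3
```

If the LabVIEW Histogram output stays empty even with the source wired, the step's output terminal is most likely not actually connected to the indicator.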

  • Make a Line Fit with Vision Assistant for a polynomial function?!

    Hello
    I have the following problem to solve: I have a laser spot which draws a (non-linear) line on a wall. For this line I need to know the (exact) mathematical
    function. I can get an image of the line, but I do not know how to extract the mathematical function, with a line fit for example. If I could "convert"
    the line into points, I would use LabVIEW's line-fit function, which should work without problems.
    Is there a way to solve the problem with the Vision Assistant, or..?
    Thanks in advance
    Solved!
    Go to Solution.

    Hello,
    by now I have learned that (almost) anything is possible. You can achieve this using LabVIEW, Matlab, C++, etc. In any case, getting the coordinates of a single laser line should be really simple (with a single line you don't need to find correspondences, as opposed to multi-line projection). If you place an appropriate filter in front of the camera, it is even simpler!
    If you want proof that it can be done (and the description/procedure I used), check out the link in my signature and search for the laser scanner posts (I think there are three of them, if I remember correctly). I have made a really cheap scanner (total cost was around 45 EUR). The only problem is that it is not fully calibrated. If you want to make precise distance measurements, you need to calibrate it, for example using a body of known shape. There are quite a few calibration methods - search in papers online.
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
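To make the "convert the line into points" idea concrete: if the laser line is the brightest feature in the image, you can take the brightest row in each column as one point and then fit a polynomial to those points. A minimal NumPy sketch under those assumptions (a grayscale image with a bright line on a dark background; not NI Vision code):

```python
import numpy as np

def laser_line_to_points(image):
    """One point per column: the row index of the brightest pixel."""
    rows = np.argmax(image, axis=0)
    cols = np.arange(image.shape[1])
    return cols, rows

def fit_line_function(cols, rows, degree=2):
    """Least-squares polynomial fit; returns coefficients, highest power first."""
    return np.polyfit(cols, rows, degree)

# Synthetic test image: a quadratic laser line y = 0.02 * x^2
img = np.zeros((100, 50), dtype=np.uint8)
x = np.arange(50)
img[(0.02 * x**2).astype(int), x] = 255
coeffs = fit_line_function(*laser_line_to_points(img))
```

In LabVIEW the same pattern would be a threshold or per-column maximum to get the points, followed by the polynomial-fit VI mentioned above.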

  • How to train OCR using VISION ASSISTANT for multiple character recognition

    Sir, I have tried training OCR using Vision Assistant for character recognition. For the process I used a fixed-focus camera, but the characters I had trained were undetectable. Please provide me a viable solution to the problem.
    Thank you.
    I have attached my project description and also the .vi file of my work towards it.
    Attachments:
    Project phase I.vi ‏138 KB
    WP_20140814_17_27_38_Pro.jpg ‏1444 KB

    Can you post a real jpg instead of renaming a bmp to jpg?

  • Profile Performance in LabVIEW vs. Performance Meter in Vision Assistant: Doesn't match

    Hi everyone,
    I faced a strange problem with performance timing between these two measurements.
    Here is my test:
    -Used the built-in Bracket example provided by LabVIEW in Vision Assistant. It uses two pattern matches, one edge-detection algorithm, and two calipers (one for calculating a midpoint and the other for finding the angle between three points).
    -When I ran the script provided by NI in Vision Assistant, it took an average inspection time of 12.45 ms (this varies from 12-13 ms; my guess is that this little variation is due to my CPU/processing load).
    -Then I converted the script to a VI and used Profile Performance in LabVIEW, and surprisingly it showed way more than expected, almost ~300 ms (in the beginning I thought it was because of the rotated search etc., but none of that makes sense to me here).
    Now my questions are:
    -Are the algorithms used in both tools the same? (I thought they were.)
    -IMAQ Read Image and Vision Info takes more than 100 ms in LabVIEW, which doesn't count for Vision Assistant. Why? (I thought the template image might be loaded into cache; am I right?)
    -What about IMAQ Read File? (It doesn't count for Vision Assistant? In LabVIEW it takes around 15 ms.)
    -Same for pattern match: in Vision Assistant it takes around 3 ms (also not consistent); in LabVIEW it takes almost 3 times that (around 15 ms).
    -Is this a bug, am I missing something, or is this how it is expected to be?
    Please find attachments below.
    -Vision Assistant-v12-Build 20120605072143
    -Labview-12.0f3
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved, and help the author by giving kudos
    Certified LabVIEW Associate Developer (CLAD) Using LV13
    Attachments:
    Performance_test.zip ‏546 KB

    Hmm Bruce, thanks again for the reply.
    -When I first read your reply, I was OK with it. But after reading it multiple times, I realized that you didn't check my code and explanation first.
    -I have added the code and screenshots of the profiles in both VA and LabVIEW.
    In both Vision Assistant and LabVIEW:
    -I am loading the image only once.
    It is accounted for in LabVIEW but not in VA, because it is already in cache. But what about the time to put the image into cache?
    I do understand that when capturing the image live from a camera, things are completely different.
    -Loading the template image multiple times??
    This is where I was very confused, because I didn't even think of it. I am well aware of that task.
    -Run Setup Match Pattern once?
    Sorry, so far I haven't seen any example that does pattern matching on multiple images and runs Setup Match Pattern every time. But that time is negligible, so I wouldn't mind.
    -Loading images for processing and loading a different template for each image?
    You are completely mistaken here, and I don't see how it relates to my specific question.
    Briefly explaining again:
    -I open an image both in LabVIEW and VA.
    -I create two pattern-match steps and calipers (negligible).
    -The pattern-match step in VA shows a longest time of 4.65 ms, whereas IMAQ Match Pattern showed me 15.6 ms.
    -I am convinced about the IMAQ Read and Vision Info timing, because it will be counted only in the initial phase when running a multiple-image inspection.
    But I am running only once; then Vision Assistant should show that time too, shouldn't it?
    -I do understand that LabVIEW has a lot more features for parallel execution and many other things than Vision Assistant.
    -Yeah, I completely agree about the difference from about 100 ms down to 10 ms. I take the Vision Assistant profile timing as the ideal values (correct me if I am wrong).
    -I especially like the last line: you cannot compare the speeds of the two methods.
    Please let me know if I am thinking in a completely wrong way, or at least somewhat on the right path.
    Thanks
    uday,
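One way to see the caching effect discussed above is to time the first run of a step separately from a steady-state average; one-time costs (file I/O, loading a template into cache) then stand out instead of being smeared into the average. A hypothetical Python sketch of this measurement pattern (the `inspect` callback is a made-up stand-in for an inspection step, not NI profiling code):

```python
import time

def profile_step(step, image, runs=20):
    """Return (first-call time, steady-state average) for a processing step."""
    t0 = time.perf_counter()
    step(image)                      # first call may include one-time costs
    first = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(runs):            # later calls hit warm caches
        step(image)
    steady = (time.perf_counter() - t0) / runs
    return first, steady

# Hypothetical step: loads a "template" once, then only matches
_cache = {}
def inspect(image):
    if "template" not in _cache:
        time.sleep(0.05)             # simulate a one-time template load
        _cache["template"] = True
    time.sleep(0.002)                # simulate the per-image match

first, steady = profile_step(inspect, None)
```

This is consistent with the observation in the thread: a single-run LabVIEW profile charges the template/file load to that one run, while Vision Assistant's per-step times reflect the warm, steady-state case.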

  • Vision Assistant 2014 (and DLL inside of LV) repeatedly crashes while reading multiline OCR

    I am working on an OCR application. With certain files (not all), the multiline function just crashes. No questions asked. I set the ROI to the targeted text, and without fail it causes a crash.
    Is anyone aware of this problem, and if so, is there a solution?
    Additional information: my .abc file is from scratch, with default settings and trained characters. Nothing in it should be causing the crash. The images are simple scanned JPG files.
    Edit: I tested it. The crash still happens without any character set loaded and with no changes to threshold, size, or read options.

    I cut out an offending section of one of the files; it is attached. To replicate the error:
    1. Load the image.
    2. Identification -> OCR/OCV
    3. New Character Set File...
    4. Select the entire text.
    NI Vision Assistant 2014, 64-bit.
    Edit: Posted this before I saw your request. On the full-sized image it always crashes, even when it is called from inside LabVIEW (the DLL crashes then). That is how I noticed the bug: my program kept crashing. For this image, it seems to only crash for me inside the training assistant, but again reliably.
    Attachments:
    crash.png ‏731 KB

  • NI Vision Builder vs. LabVIEW RT + NI Vision Assistant

    Hello
    I’m designing a vision application which should be able to run stand-alone (EVS-1463RT). The app will have up to approximately 30 inspection states (read gauge value, check for the presence of specific objects, etc.). The other requirements are the ability to communicate with non-NI devices via TCP/IP and to log pictures to FTP.
    Now I’m considering two possible solutions:
           Create an AI with NI Vision Builder
           Create a LabVIEW RT app and build the inspection states using NI Vision Assistant
    I have to say that the first solution is not my favorite, because I tried to implement the TCP/IP communication in NI Vision Builder; it was functional, but the use of the server is limited. On the other hand, building the inspection is “easy”.
    The second solution has the following advantages for me: better control of the app, maybe better possibilities to optimize the code, and implementation of my own TCP/IP server. My biggest concern is using NI Vision Assistant to generate the inspection states.
    In conclusion, I have to say that I’m not experienced in vision apps, but LV RT is no problem for me.
    Thanks for any opinions

    Hi Jan,
    > I have to say that the first solution is not my favorite, because I tried to implement the TCP/IP communication in NI Vision Builder; it was functional, but the use of the server is limited.
    Could you give more feedback on this point? What do you mean by "the use of the server is limited"? More precise feedback and suggestions would help us improve the functionality.
    What I would recommend you look into is the VBAI API. You can find examples in <Program Files>\National Instruments\<Vision Builder AI>\API Examples\LabVIEW Examples
    This feature allows you to run a VBAI inspection within your LabVIEW application and retrieve results that you can send using a TCP implementation of your own in LabVIEW, without having to use the VBAI TCP functionality.
    You retain the configuration features of the vision part of the application, and can add extra code in LabVIEW.
    The API functions basically allow you to open a VBAI inspection file, run it synchronously or asynchronously, and retrieve images and results.
    As you mentioned, the other solution is to implement your state diagram in LabVIEW and use the Vision Assistant Express VI in different states. What VBAI gives you that Vision Assistant doesn't is the pass/fail limits for each step (and the state diagram).
    Best regards,
    Christophe

  • No Image - 1473R With Vision Assistant

    Hopefully this is a simple fix and I'm missing something very obvious, so here is what's up. I'm originally a text programmer, but for this project I'm stuck using LabVIEW, which is completely unfamiliar; I've been working on this for days with no progress, so I thought I'd see if anyone had some pointers. The goal of the project is to use the PCIe-1473R FPGA to do live gain control, overlay, and maybe some recognition.
    I started with the "Image Processing with Vision Assistant on FPGA" example and made a few simple changes to just attempt to get a video feed through. The camera we are using is a Pulnix TM 1325 CL, which outputs a 1-tap/10-bit Camera Link signal. Since this example VI is originally configured for 1 tap/8 bit, I changed the incoming pixel to be read as 1 tap/10 bit, then compiled and tested. When I try to start video acquisition I get no errors, but no frames are grabbed. The acquired-frame count does not increase and nothing is displayed. If I switch to line scan I get a scanning static image, but this is not a line-scan camera, and my other NI frame-grabber card shows an image from the camera fine.
    I wasn't all that surprised by this result, as the input is 10-bit and the acquisition FIFO and DMA FIFO are both originally 8-bit. So I changed them to U16 and also changed the IMAQ FPGA FIFO to Pixel Bus and IMAQ FPGA Pixel Bus to FIFO blocks on either side of the Vision Assistant to U16. With this configuration I again get no image at all; same results. I suspect this is because the incoming image is signed, so the types should be I16 instead. However, there is no setting for I16 in the Pixel Bus conversion methods. Am I misunderstanding the types involved here, or is there an alternative method for using Vision Assistant with signed images? I'd think it'd be odd not to have support for signed input.
    Anyway, I've tried all the different combinations of settings I can think of. Does anyone have any input? I feel like it must be either a buffer-size problem or a signing problem, but I don't really know. Any and all input is welcome!
    Thanks for helping out a new guy,
    Kidron

    I ended up resolving this issue by switching cameras. The end goal was to use a FLIR SC6100, so I switched to it and was able to get things working shortly. The FLIR does use unsigned ints, so I believe that's what resolved the issue, for anyone running into this in the future.
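For anyone who hits the signed/unsigned mismatch described above and cannot switch cameras, the usual workaround is to shift I16 data into the U16 range (add the offset 32768) before handing it to an unsigned pipeline. A minimal NumPy sketch of the idea (a host-side illustration only, not FPGA code):

```python
import numpy as np

def i16_frame_to_u16(frame):
    """Map signed 16-bit pixels [-32768, 32767] onto unsigned [0, 65535]."""
    # Widen to int32 first so the addition cannot overflow
    return (frame.astype(np.int32) + 32768).astype(np.uint16)

frame = np.array([[-32768, 0, 32767]], dtype=np.int16)
shifted = i16_frame_to_u16(frame)  # values become 0, 32768, 65535
```

On the FPGA the equivalent would be a fixed offset added to each pixel before the Pixel Bus conversion, which preserves ordering and dynamic range.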

  • How does the performance meter in Vision Assistant calculate the estimated time?

    Hello...
    I want to know how the performance meter in Vision Assistant calculates the estimated time of each step.
    What is the concept behind this?
    Thanks in advance.

    Hello,
    I have performed the same operations in NI Vision Assistant and in LV (without OCR); the VI is attached.
    The values are very similar. Maybe I do not understand your problem correctly; please explain if I missed the point.
    Best regards,
    Attachments:
    time Estimation_2010.vi ‏22 KB
    NIVISION.png ‏150 KB

  • Vision assistant in a while loop

    Hello gentlemen, I'm quite new to LabVIEW and Vision Assistant. My goal is to detect a car plate in a color image. I successfully find the ROI by using the two blue rectangles at the edges of Italian car plates, and that works. But then I have to OCR the actual plate. I can do it successfully, but some pictures require a certain threshold, others another... So I thought I could run the Vision Assistant in a while loop to try different thresholds; when I get a 'valid' plate (7 digits) I simply exit the loop. The parameters are defined by clusters, but it looks like it is always using the FIRST value, regardless of the changing threshold. The index goes from 0 to 20, and my intention is to have the threshold ranging from 0..100, 0..105, 0..110 and so on. Can you spot something wrong in the attached VI image? Thanks a lot for any help! Mauro Cucco
    Attachments:
    vi_ocr.png ‏31 KB

    Hi Octopus,
    I believe this problem originates in how the Vision Assistant handles image references.
    In normal LabVIEW programming, every wire branch or SubVI creates a new copy of the data. For images this is not the case, because an image often consumes a lot of memory. So for an image, LabVIEW edits the image in memory instead of creating a copy.
    In this case your algorithm works the first time, but the second time it will try to do the OCR on the last result of the OCR rather than on the original image.
    To avoid this, you need to use the IMAQ Copy function to create a copy of the original image before sending it to the OCR function. See below:
    Best Regards
    Anders Rohde
    Applications Engineer
    National Instruments Denmark
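The fix can be sketched in plain code: copy the source image at the top of every iteration, so each threshold is applied to the original and never to the previous iteration's output. A hedged Python/NumPy analogue (here `run_ocr` is a hypothetical stand-in for the Vision Assistant OCR step, not the IMAQ API):

```python
import numpy as np

def plate_ocr_sweep(original, run_ocr, upper_values):
    """Try increasing threshold upper bounds until OCR yields a 7-digit plate."""
    for upper in upper_values:
        work = original.copy()       # analogue of IMAQ Copy: protect the source
        work[work > upper] = 0       # in-place threshold, would corrupt the source
        work[work > 0] = 255
        text = run_ocr(work)
        if text is not None and len(text) == 7:  # 'valid' Italian plate
            return text
    return None
```

Without the `copy()`, each pass would binarize the previous pass's output instead of the original image, which is why only the first threshold ever appears to take effect.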

  • Vision assistant

    I have a problem with my IMAQ driver: there are no Vision Assistant tools in my IMAQ.

    It looks like you only installed the Vision Acquisition Software, which contains the drivers to acquire images from different buses and NI acquisition devices.
    To have access to the image processing library, you need to install the Vision Development Module.
    Check out http://www.ni.com/vision/software/ to learn the differences between the two packages.
    Hope this helps.
    Christophe

  • Vision Assistant Password

    Hello,
    I'm using LabVIEW 2011 with Vision Assistant. I want to read in the image of a smart camera (Sensopart Visor V10) over Ethernet using Vision Assistant.
    However, I have the problem that I can't access this camera in Vision Assistant. After entering the IP address a password is required, but no password was ever set.
    Is there a standard password which I have to use?

    Hi,
    see related post "Vision Assistant Password":
    http://forums.ni.com/t5/Machine-Vision/Vision-Assistant-Password/m-p/1987627
    Best regards
    Suse
    Certified LabVIEW Developer (CLD)

  • Why might the results differ between the OCR in the Vision Assistant and OCR in a VI generated from the Vision Assistant?

    Hi everybody,
    I had a problem while trying to run OCR on my image in my LabVIEW code. The algorithm did not detect what I was expecting in the image.
    I decided to run the test in Vision Assistant and it worked perfectly: I got the expected results.
    I then assumed that my code had different parameters, so I generated a VI from Vision Assistant and ran it in LabVIEW on the same image, just to verify. I did not change anything in the VI (all parameters are the same), and I used the same image. Surprisingly, the results differ between the assistant and the VI. That strikes me a lot! I have checked all possible configurable parameters and they have the same values as in the Vision Assistant. Does anybody have an idea what could be the reason for this behaviour? Thanks
    Regards,
    Esteban

    Esteban,
    do you use the same images for OCR? Or do you take a new snapshot before running the OCR?
    What steps of debugging did you use in LV?
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
