Color pattern match in consumer producer architecture

The way I currently have my code is as follows:
If the number of matches in a color pattern match is greater than 0, it sends the information to the consumer loop.
If it is not greater than 0, it doesn't send anything.
What I was wondering is: if it finds a match the first time and sends the information to the consumer loop, will the number of matches in the color pattern match return to 0, or will it still be greater than 0 after that first match? If it stays greater than 0 even when no match is found in the following scenarios, how can I make the number of matches in the color pattern match return to 0?
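To make the structure concrete, here is a rough text sketch of the architecture described above (LabVIEW itself is graphical, so this is plain Python used purely as an illustration). The queue name, the find_color_matches helper, and the example frames are all hypothetical stand-ins, and the sketch assumes the match count is an output recomputed on every iteration rather than a value that persists between iterations:

import queue
import threading

match_queue = queue.Queue()  # hypothetical queue between the producer and consumer loops

def find_color_matches(frame, template):
    # Hypothetical stand-in for the color pattern match step. The list of
    # matches is rebuilt from scratch on every call, so its length is 0
    # whenever the current frame contains no match.
    return [pos for pos, value in enumerate(frame) if value == template]

def producer_loop(frames, template):
    for frame in frames:
        matches = find_color_matches(frame, template)
        if len(matches) > 0:           # send only when something was found
            match_queue.put(matches)   # frames with 0 matches send nothing
    match_queue.put(None)              # sentinel so the consumer can stop

def consumer_loop():
    while True:
        item = match_queue.get()
        if item is None:
            break
        print(f"consumer received {len(item)} match(es)")

consumer = threading.Thread(target=consumer_loop)
consumer.start()
producer_loop(frames=[[1, 2, 3], [4, 5], [3, 3]], template=3)
consumer.join()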

Fernan1988 wrote:
... will the number of matches in the color pattern match return to 0 or will it be greater than 0 after the first match?...
It's hard to say unless you can post your code. Have you tried the probe tool, Retain Wire Values, Highlight Execution, single-stepping into and out of nodes, or breakpoints?

Similar Messages

  • Color pattern matching is very slow

    Hi
    I tried this code in a VI application I created.
    After testing with a USB webcam I have realized that the color pattern matching is very slow. How can I increase the speed so it works smoothly in real time?
    Thank you

    Hello tiho,
    the color pattern matching is not as fast as 8-bit matching, but should still be fast.
    For example, I am attaching a VI for color pattern matching where you load the image, create the template, and do the matching.
    In my example I tried color pattern matching on a color image of size 4288x2848 pixels, and the matching is performed in ~140 ms (~7 Hz). So, for a smaller image, I think real-time processing is quite achievable (I consider real-time to be 20 Hz or more). The only problem is the template learning, which in my case takes around 10 seconds. But you should learn the template only once, in the initialization stage.
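    The point about learning once and matching per frame can also be sketched outside LabVIEW. The following uses OpenCV's grayscale template matching as a rough stand-in for IMAQ color pattern matching (an analogy, not the IMAQ API); the template path and camera index are placeholders. The expensive step happens once before the loop, and only the cheap per-frame search is timed:
    import time
    import cv2
    # One-time initialization ("learning") stage, done once outside the loop.
    # "template.png" is a placeholder path for the learned pattern.
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
    camera = cv2.VideoCapture(0)  # placeholder device index for the USB webcam
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        start = time.perf_counter()
        scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(scores)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        print(f"best match {best_score:.2f} at {best_loc}, search took {elapsed_ms:.1f} ms")
        if cv2.waitKey(1) == 27:  # Esc key stops the loop
            break
    camera.release()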
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
    Attachments:
    color matching.zip ‏49 KB

  • Slowness of color pattern matching vi

    Hi,
    We're trying to implement a SLAM (simultaneous localization and mapping) algorithm in LabVIEW based on the Kinect. In order to do so, the mapping robot should spot landmarks in real time from the Kinect image. We found the color pattern matching VI, which does exactly what we need, but the problem is that it's very slow (about 1.5 seconds for each image, and a total of 3 images each iteration). Is this the normal time for detecting landmarks in a Kinect image? The robot can't map accurately when the iteration is so slow.
    Thanks,
    Rap Master
    Attachments:
    detect landmarks.vi ‏61 KB

    Given that the pattern matching VI is an IMAQ (Image Acquisition) VI, this question will be best served by posting in our Machine Vision forums.
    Blake C.
    Applications Engineer
    National Instruments
    www.ni.com/support

  • Problem while using color pattern matching

    Currently we are doing a project on real-time object tracking, and we are unsure whether this color pattern match works irrespective of the object's size. My questions are as follows:
    1. Does it work for objects moving far away? As the object moves away, its size decreases and the color pattern matching stops working. What is the solution, given that we must use a color image?
    2. What is the difference between using scale-invariant and rotation-invariant settings in color pattern matching?
    3. How can we effectively decrease the ROI depending on the object's position, as per the attached screenshots?
    We have extracted the bounding box X and Y coordinate values at the four corners, but we cannot track the object as it moves far away, and we cannot decrease the ROI as it does.
    4. Is it possible to see the value of a particular pixel in the LabVIEW Vision Development Module? We only see the coordinate position. Please guide us.
    Please see the screenshots below and suggest how to effectively decrease or increase the ROI depending on the object's position using color pattern matching.
    Attachments:
    problem in matching while object moves far.png ‏515 KB

    Hello,
    I have not been using the color pattern matching a lot (especially not in real-time). But since the pattern matching considers only small scale changes, you could try updating the color template every n-th iteration (depending on your setup and requirements). The major problem is the template size, since the color pattern matching tends to take quite a lot of time in learning the template. You would of course need to come up with some idea on how to change the subimage size, where the new template will be learned.
    This is the coarse (rough) object detection part, as was suggested by MoviJOHN. For example, if your object is distinctly red, you can extract the green channel from your RGB image and threshold it to roughly find the object, then apply the new ROI/template.
    So:
    1. learn the template,
    2. use pattern matching with bounding rectangle (ROI) for the next couple of frames (you would need to experiment here where the detection fails -> how fast can you move the object away so that the detection fails),
    3. Before the detection fails -> rough object detection with some padded bounding rectangle (new ROI),
    4. Re-learn the template from the new ROI and go back to 2.
    Again, the biggest issue is the template learning time - if you have a high resolution camera and the template is large, this won't satisfy your real-time application.
    You should set up the appropriate illumination first. The resolution is also important, since your object is moved back and forth (but the resolution will have a direct impact on the template learning time).
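    A rough text sketch of steps 1-4 above, under loud assumptions: plain Python with OpenCV color thresholding and grayscale template matching as stand-ins for the IMAQ color VIs, a hypothetical grab_frame frame source, and made-up threshold bounds and padding values:
    import itertools
    import cv2
    import numpy as np
    def coarse_detect(frame_bgr, lower=(0, 0, 120), upper=(80, 80, 255), pad=20):
        # Rough detection of a predominantly red object by BGR thresholding.
        # The threshold bounds and the padding are example values only.
        mask = cv2.inRange(frame_bgr, np.array(lower), np.array(upper))
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        h, w = mask.shape
        x0, x1 = max(int(xs.min()) - pad, 0), min(int(xs.max()) + pad, w)
        y0, y1 = max(int(ys.min()) - pad, 0), min(int(ys.max()) + pad, h)
        return x0, y0, x1, y1  # padded bounding rectangle = new ROI
    def track(grab_frame, relearn_every=30):
        # Steps 1-4: (re)learn the template from a coarse ROI, then match it
        # in every frame, refreshing the template before tracking is lost.
        template = None
        for i in itertools.count():
            frame = grab_frame()
            if template is None or i % relearn_every == 0:
                roi = coarse_detect(frame)  # step 3: rough detection -> new ROI
                if roi is None:
                    continue
                x0, y0, x1, y1 = roi
                template = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)  # steps 1/4
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(scores)  # step 2: match in the current frame
            print(f"frame {i}: score {score:.2f} at {loc}")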
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Change a VI to a more sophisticated pattern (e.g. consumer/producer)

    Hi everybody!
    I have written a VI that takes spectra from a triggered flash lamp and spectrometer. The whole system works in a rotating device, so the triggering and exact timing are quite important. I finally got my VI to work, but there are still some problems with the spectrometer. I think this is because the spectrometer runs into an internal timeout, but I don't know why. Furthermore, there is still a lot of false triggering when changing to the next sample with a different delay.
    So it's really time to take a deep breath and think about my program. At the moment I have a flat sequence with some for loops defining the number of measurements, as well as while loops for the triggering...
    Well, this is what I want to do:
    1. All the necessary data comes in a cluster from a different vi
    2. Initialize: Create folders, calculate positions for the step motor and move it to the initial position, open spectrometer and so on
    3. Start the speed measuring and calculation of delays
    4. Measure the spectrometer dark current
    5. Move step motor to the first position
    6. Start retriggerable task for lamp and spectrometer (the triggering occurs in a subvi, whereas the delays are updated continuously by a reference) 
    7. Measure at step motor position first sample, second sample and so on
    8. Move stepmotor -> 7
    9. Stop retriggerable task for lamp and spectrometer
    10. Save data and wait for a specified time
    11. Back to step 5
    12. In the end: Close the spectrometer, stop the speed measurement and so on
    Furthermore the data has to be displayed:
    1. The current scan
    2. The step-motor-position-dependent absorption for the current scan as well as the previous scans (optionally)
    So much for the theory. I thought that maybe a producer/consumer pattern would fit, but I have no idea how to realize this. Of course I have read a lot, but I don't know how to handle the triggering and so on.
    I have attached my main VI so that you can see how I deal with this at the moment. Of course all the subVIs are missing, but the working principle should be clear. Please let me know if you need any more information!
    I want to do two things. First, I want to separate the data acquisition from the display and file I/O; second, the data acquisition has to be separated from the lamp and spectrometer triggering.
    Especially the second point seems very important, because my spectrometer runs into some internal timeout (not because there is no trigger) after some minutes or hours (the exact time is not reproducible, but it does happen) and crashes my whole system, as I mentioned before.
    I don't know why, but I very much hope that a more sophisticated VI pattern will solve this problem.
    May you help me with this?
    Thanks!
    Attachments:
    radial_scan_v5.10.vi ‏293 KB

    If the VI runs without user intervention, then a simple enum-based state machine architecture can be used.
    1. All the necessary data comes in a cluster from a different vi
    State -1
    2. Initialize: Create folders, calculate positions for the step motor and move it to the initial position, open spectrometer and so on
    State -2 Initialise state
    3. Start the speed measuring and calculation of delays
    This can be a dynamically launched VI (launched in State-2) that waits for a notification from State-2 to start acquisition.
    This VI will also stop acquisition based on another notification from the main VI's states.
    State-3 -> Set the start notification for this continuously running subVI
    4. Measure the spectrometer dark current
    State -4
    5. Move step motor to the first position
    State -5
    6. Start retriggerable task for lamp and spectrometer (the triggering occurs in a subvi, whereas the delays are updated continuously by a reference)
    This can also be another dynamically launched subVI, launched in State-2 itself.
    State-6 -> Send a notification to start the retriggerable task
    7. Measure at step motor position first sample, second sample and so on
    State -7.Measurement
    8. Move stepmotor -> 7
    9. Stop retriggerable task for lamp and spectrometer
    State -8 Send stop notification to stop the Retriggerable task.
    10. Save data and wait for a specified time
    State -9 - Log the Data
    11. Back to step 5
    State-10 - Check for completion; if required, redirect back to State-5
    12. In the end: Close the spectrometer, stop the speed measurement and so on
    State-11 - Send a stop-acquisition notification to all the dynamically launched VIs, stop those VIs, close all instrument references, and stop the main VI.
    If the states depend on user intervention, then an event-based producer/consumer can be used, where the producer controls when to start and stop the whole process (using queues). The consumer can be a queue-driven, enum-based state machine.
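    As a rough, language-agnostic illustration of that last suggestion (plain Python, not the poster's actual VI), the sketch below models a queue-driven, enum-based state machine: a producer enqueues states and a consumer executes them in order. The state names are taken from the list above; the queue and the handle function are hypothetical placeholders:
    import queue
    from enum import Enum, auto
    class State(Enum):
        INITIALIZE = auto()
        START_SPEED_MEASUREMENT = auto()
        MEASURE_DARK_CURRENT = auto()
        MOVE_MOTOR = auto()
        START_RETRIGGERABLE_TASK = auto()
        MEASURE = auto()
        STOP_RETRIGGERABLE_TASK = auto()
        LOG_DATA = auto()
        SHUTDOWN = auto()
    state_queue = queue.Queue()
    def handle(state):
        # Placeholder for the real work done in each state (motor moves,
        # spectrometer calls, file I/O, notifications to launched VIs).
        print(f"executing {state.name}")
    # Producer: decides the order of states (a fixed sequence here; an event
    # loop reacting to user input could enqueue states instead).
    for s in [State.INITIALIZE, State.START_SPEED_MEASUREMENT, State.MEASURE_DARK_CURRENT,
              State.MOVE_MOTOR, State.START_RETRIGGERABLE_TASK, State.MEASURE,
              State.STOP_RETRIGGERABLE_TASK, State.LOG_DATA, State.SHUTDOWN]:
        state_queue.put(s)
    # Consumer: queue-driven state machine, runs until SHUTDOWN is dequeued.
    while True:
        state = state_queue.get()
        handle(state)
        if state is State.SHUTDOWN:
            break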

  • USB SNAP + colour pattern matching ( IMAQ)

    I am new to LabVIEW and currently I am working on colour pattern matching with USB snap. My problem is that this program keeps prompting me at IMAQ Learn Colour Pattern. I need someone to help me solve that problem.
    Another question: does colour pattern matching require us to save and load a file before learning a template?
    I am using LabVIEW 7.1 and I would appreciate any help. Thanks.
    Attachments:
    Untitled.vi ‏189 KB

    Hello Chee Hou,
    Have you tried to run the example to do color pattern matching?
    From the example, you do need to have a template picture ready when you are using the colour pattern matching.
    You need to Learn Colour Pattern and then load the pattern to do color pattern matching.
    Hope this helps.
    James
    - Meadow -
    LabVIEW 7.0 - 2011, Vision, RT, FPGA
    TestStand 3.0 - 4.5
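    To see the save-then-load idea outside LabVIEW: the sketch below uses plain Python with OpenCV as a stand-in for the IMAQ colour VIs (an analogy for the workflow James describes, not the IMAQ API), with placeholder file names and ROI coordinates. The template is learned and saved once; later runs load it from disk instead of prompting for the pattern again:
    import os
    import cv2
    TEMPLATE_FILE = "learned_template.png"  # placeholder path for the saved pattern
    def get_template(source_image_path, roi):
        # Load the saved template if it exists; otherwise "learn" it once by
        # cropping the ROI out of a reference image and saving it to disk.
        if os.path.exists(TEMPLATE_FILE):
            return cv2.imread(TEMPLATE_FILE)
        x, y, w, h = roi
        reference = cv2.imread(source_image_path)
        template = reference[y:y + h, x:x + w]
        cv2.imwrite(TEMPLATE_FILE, template)  # saved once; later runs just load it
        return template
    # Example usage with a placeholder reference image and ROI.
    template = get_template("reference_snap.png", roi=(100, 80, 64, 64))
    print("template size:", template.shape[:2])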

  • Error -1074395395 occurred at IMAQ Match Color Pattern after merge programs

    Dear all
    I have 2 separate programs that use the same source image (red01a1.jpg). The first finds the location of color particles (color location 01.vi) and the second finds the location of particles after thresholding the color image (particle location 01.vi). Both of them run well.
    But when the 2 programs were merged into one program (color sorter 01i.vi), an error occurred. The error message is "Error -1074395395 occurred at IMAQ Match Color Pattern". How can I fix it?
    Thanks  
    Attachments:
    color sorter 01i.vi ‏126 KB
    Color -particle location.zip ‏86 KB
    red01a1.zip ‏86 KB

    Hi Xuan
    Check out the attached VI.
    Sasi.
    Certified LabVIEW Associate Developer
    If you can DREAM it, You can DO it - Walt Disney
    Attachments:
    color sorter 01i.vi ‏120 KB

  • Basic requirement for color histogram & pattern matching.

    I will do a student project about analyzing video & images (stored on a PC) using color histograms & pattern matching. I have LabVIEW 6i only. What are the basic requirements for this analysis? The Vision module? Please suggest a simple and cheap option. Thanks all.

    It really depends on the purpose of your student project. Is it about developing those routines yourself, or about using ready-made routines to research the possibilities and differences on different kinds of pictures?
    In the first case you already have everything necessary, as you can perfectly well develop those routines in normal LabVIEW. Otherwise you would want to look into the IMAQ Vision Toolkit from National Instruments or the IVision Toolkit from Hytek Automation http://www.hytekautomation.com/Products/IVision.html. You can download the IVision Toolkit free for evaluation; it just regularly shows a nag screen to remind you that this is not a licensed version.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Is it a proper way to use queue for consumer/producer model?

    Hi all,
    I am following the producer/consumer model example, using a queue to synchronize the following process: the producer is a loop that produces N numbers. I put every generated number into an array, and after every 5 numbers have been generated, I put the array into the queue and pass it to the consumer. I have to wait for the consumer to use up the data; it will then remove the element from the queue, so the producer gets a chance to produce another 5 numbers. Since I set the maximum size of the queue to ONE, I expect the producer and consumer to take turns producing/consuming five numbers at a time before handing over to the other. Here is my code.
    When the case structure is false, the code will be
    For the first 5 numbers, the producer generates everything correctly, puts it into the array, and passes the array to the queue so the consumer gets a chance to loop over the array. I expected the producer's loop to continue only when the queue is available again (i.e. all elements are removed), but it seems that once the consumer starts its loop, the producer's loop continues (the x+1 and x+2 indicators show changing numbers). This is not what I want; I know there must be something wrong, but I can't tell what it is.

    dragondriver wrote:
    As you said in 1, the sequence structure enforces the execution order; that's why I put it there. In this example, to keep the issue simple, I replaced the complete code with number increments; in the real case, the first +1 and +2 must be executed in that order.
    Mikeporter mentioned:
    1. Get rid of all the sequence structures. None of them are doing anything but enforcing an execution order that would be the same without them.
    So even if you remove the sequence structures, there will be a fixed and well-defined execution order, because LabVIEW follows the DATA FLOW MODEL.
    Data Flow Model (specifically in the context of LabVIEW): A block diagram node executes when it receives all required inputs. When a node executes, it produces output data and passes the data to the next node in the dataflow path. The movement of data through the nodes determines the execution order of the VIs and functions on the block diagram (click here for reference).
    Now, in your code, just removing the sequence structure will not guarantee that the execution order remains the same; you need to make a few very small modifications (like passing the error wire through the For loop before it goes to the 'Dequeue Element' node).
    Coming to the main topic: is this a proper way to use a queue for the consumer/producer model?
    The model you're using (and calling a consumer/producer model) deviates considerably from the original consumer/producer model.
    dragondriver wrote:
    For the second one, yes, it was my mistake to remove that while loop. I actually started from the Producer/Consumer design pattern template, but I didn't pay attention to the while loop in the consumer part.
    While loops (in both the producer and the consumer) are an essential part of this architecture and can't be removed. You may want to start your code again from the standard template.
    I am not allergic to Kudos, in fact I love Kudos.
     Make your LabVIEW experience more CONVENIENT.
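    A plain-Python sketch of the size-1 queue behaviour being discussed (hypothetical names, and only an analogy for LabVIEW's Enqueue/Dequeue Element): the producer blocks while the queue is full, but it is released as soon as the consumer dequeues the array, not when the consumer has finished processing all five numbers. That is consistent with the producer's loop running on while the consumer is still looping over the previous batch:
    import queue
    import threading
    import time
    q = queue.Queue(maxsize=1)  # like setting the LabVIEW queue's maximum size to 1
    def producer():
        for batch_start in range(0, 15, 5):
            batch = list(range(batch_start, batch_start + 5))  # produce 5 numbers
            q.put(batch)   # blocks ONLY while the queue is full
            print(f"producer enqueued {batch}")
        q.put(None)        # sentinel to stop the consumer
    def consumer():
        while True:
            batch = q.get()       # removing the element frees the producer immediately,
            if batch is None:     # even though the numbers below are still being processed
                break
            for x in batch:
                time.sleep(0.1)   # stand-in for per-element work
                print(f"consumer processed {x}")
    t = threading.Thread(target=consumer)
    t.start()
    producer()
    t.join()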

  • Gige + pattern matching

    Hi all,
    I am new to LabVIEW image processing.
    I need to achieve a pattern match and, from that, find the fiducial location, but I am facing some problems:
    1. The acquired image is corrupted with black lines (I am using a 5 MP 2/3" GigE camera).
    2. I am doing the pattern match with the live grab method (continuous acquisition), so my CPU usage reaches nearly 90%.
    3. My task is to find the fiducial using pattern matching, and from that I need to assign an ROI for the main pattern (is it possible to do snap image -> pattern match -> result?).
    I have attached my VI code and image.
    Attachments:
    6.png ‏75 KB
    PM_TEST.vi ‏121 KB

    Hi, please learn the basics first and then build on them...
    1) What are the black lines in the image? You didn't provide a screenshot of the corrupted image.
        - Are you getting images properly in NI MAX?
        - Did you enable jumbo packets?
    2) Please understand what grab and snap are by using the built-in examples provided. Your CPU usage is not because of continuous acquisition.
       - I cannot comment on the CPU usage now, but it is certainly not because of grab.
       - Use the producer-consumer pattern to do acquisition and processing in parallel (see the sketch after this reply).
    3) Once you understand the difference between grab and snap, you'll understand this automatically.
    Thanks
    uday,
    Please Mark the solution as accepted if your problem is solved and help author by clicking on kudoes
    Certified LabVIEW Associate Developer (CLAD) Using LV13
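    The producer-consumer suggestion in point 2 can be sketched in plain Python as well (an analogy only; the camera index, the template file, and the use of OpenCV template matching in place of the IMAQ pattern match are all placeholders): one loop does nothing but acquire frames, the other does the heavy matching, so the grab never waits on the processing and stale frames are simply dropped:
    import queue
    import threading
    import cv2
    frames = queue.Queue(maxsize=2)  # small buffer between acquisition and processing
    def acquisition_loop(camera_index=0):
        cam = cv2.VideoCapture(camera_index)  # placeholder for the GigE grab
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            if frames.full():
                try:
                    frames.get_nowait()  # drop the stale frame instead of blocking the grab
                except queue.Empty:
                    pass
            frames.put(frame)
    def processing_loop(template):
        while True:
            frame = frames.get()
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(scores)  # candidate fiducial location
            print(f"match score {score:.2f} at {loc}")
    template = cv2.imread("fiducial_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
    threading.Thread(target=acquisition_loop, daemon=True).start()
    processing_loop(template)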

  • Vision assistant steps to be followed for pattern matching

    I am acquiring color images of hand movements using my laptop's webcam.
    I want to process the acquired images to use them for pattern matching.
    What are the steps to be followed to achieve the above-mentioned task?

    Next, we pass the pattern extracted in the previous step to the pattern search function block (parameters such as the rotation angle and minimum score are entered via the SETTINGS control), then extract the output of the search function to get the position values, which are shown in indicators on the front panel.
    Atom
    Certified LabVIEW Associate Developer

  • Error imaq color pattern

    Hi, I'm trying to do a draft of color pattern recognition, but LabVIEW gives me this error. I hope you can help me; I have attached the files. Thank you very much.
    Error -1074395384 occurred at IMAQ Match Color Pattern
    Possible reason(s):
    IMAQ Vision: Invalid color template image.
    Attachments:
    pastillas1.png ‏55 KB
    pastillas2.jpg ‏42 KB
    Parámetros de Búsqueda Color.vi ‏78 KB

    Hello,
    the general idea is:
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Cannot get colors to match with epson aritsan printer

    I have been working with Epson tech support to get the printer colors to match the CS5 colors, to no avail. I have calibrated my monitor using a Huey calibration unit and no matter what I try I cannot get the correct colors. Any suggestions?

    Turn off the Vivid setting on the printer.
    Hi,
    I'm not familiar with the Epson print driver in Windows, but is there a choice for No Color management? If Photoshop runs the color conversion, there should be no CM on the printers part.
    In the first Print Dialog (Photoshop's) do you have the correct destination profile chosen? It should be Epson Lustre for that printer.  And what is your source space? You should Soft Proof to the Epson Lustre paper to get a better idea of how it will reproduce. (In Photoshop go View>Proof Setup and go Custom and find the device to simulate- Lustre paper for your printer. Now you can work in your editing space like Adobe RGB and soft proof to your destination device)
    Can you post a screen shot of the Photoshop Print dialog showing your settings?
    And one of the Epson Print settings dialog too?
    Thanks!
    Also, keep in mind the inexpensive all-in-one print/fax/scan/copy/cappuccino-makers do not necessarily produce the highest quality photos. I love my higher-end Epson printer (R2880) but I will not own an Epson all-in-one again. I have had nothing but trouble with inexpensive Epson inkjet printers. Complete junk, IMO. (That said, I have not owned an Artisan 730.)
    Have you done a nozzle check just to be sure there is no clogging?

  • How to use Colour pattern matching with a webcam

    Hi,
    I have a webcam which I am able to use successfully in LabVIEW (i.e. get images).
    I have looked at the colour pattern matching examples and tried to modify them so that I can detect a red spot seen through the webcam, but I have been unsuccessful.
    In essence I'm trying to do real-time colour pattern matching.
    Can anyone steer me in the right direction? Or help me out?
    Thanks 

    Hi kr121,
    I'm trying to work on color myself right now.
    What have you tried so far? What type of web camera are you using? I'm using a Microsoft LifeCam with LV 2011 on Windows 7.
    I started here:  http://zone.ni.com/devzone/cda/epd/p/id/5030
    If you are not using an NI camera: I was able to get this to work by using the cmd prompt to extract the files manually, and could at least run the NI-IMAQ for USB: Snap and Save Image with USB Camera and NI-IMAQ for USB: Grab and Save Image with USB Camera examples.
    The command prompt command is:
     ni_imaq_usb_installer_86.exe /x
    Don't know if this is 100% correct but it at least allowed me to capture images and avi's.
    Regards,
    -SS

  • How do I set up multiple pattern matching VIs and make overlapping pattern matches count as one?

    Hello! I'm a student and I'm currently making a project using pattern matching.
    My patterns are from chick foot/feet.
    I created multiple pattern matching VIs to detect all the feet because I find it difficult/impossible to match all the feet with a single pattern/template.
    However, when using multiple pattern matching VIs, some pattern matches detect the same foot, hence the overlap.
    So how can I make the overlapping pattern matches be counted as one?
    Thank you in advance

    Thank you for replying Sir Zwired1.
    I'm still a newbie in using LabVIEW so pardon me if I can't understand fully
    The objective of my project is to detect all the feet through pattern matching and count the pattern matches made.
    "Keep a 2D array of counts, initialized to zero and the same size as your array of possible locations, and increment the value every time you get a match. If multiple pattern matching attempts result in a match a given location in your count array might be "3" but all you care about is if the number is greater than zero."
    I'm sorry, but how do you do this? BTW, I'm using vision assistant.
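    A rough NumPy sketch of the quoted suggestion (the grid size and the per-VI match lists are hypothetical, and the count array stands in for whatever Vision Assistant actually returns): each matcher increments a cell in a 2D count array, and a foot is counted once if its cell is greater than zero, no matter how many matchers hit it:
    import numpy as np
    # Hypothetical grid of possible foot locations (e.g. the image divided into cells).
    GRID_ROWS, GRID_COLS = 10, 10
    counts = np.zeros((GRID_ROWS, GRID_COLS), dtype=int)
    # Hypothetical results from three different pattern matching VIs;
    # each match is the (row, col) cell index of a detected foot.
    matches_from_vi_1 = [(2, 3), (5, 7)]
    matches_from_vi_2 = [(2, 3), (8, 1)]          # (2, 3) overlaps with VI 1
    matches_from_vi_3 = [(5, 7), (8, 1), (0, 9)]  # more overlaps
    for matches in (matches_from_vi_1, matches_from_vi_2, matches_from_vi_3):
        for row, col in matches:
            counts[row, col] += 1  # overlapping detections push a cell to 2 or 3
    # Overlapping matches collapse to one: only "greater than zero" matters.
    unique_feet = int(np.count_nonzero(counts))
    print(f"feet counted once each: {unique_feet}")  # prints 4, not 7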
