Edge detection on a moving object

Hi
I have a problem with edge detection. I have pulleys of different types.
Some pulleys have the same diameter but different heights; others have different diameters and different numbers of teeth (ridges).
I need to identify one type of pulley from another. I am trying to use the following logic:
1. Locate the base of the pulley (which is distinct) using pattern matching.
2. Define a coordinate system based on this pattern match.
3. Define an edge detection tool using the coordinate system (this is where I am running into a wall).
I have used extracts from the examples: battery inspection, and gauge and fuse inspection.
I am not able to define the edge tool (Edge Detector under Vision Assistant 7.1).
I am trying to use the coordinate system because, if the pulley moves a bit, the edge detector ends up away from the part (in Vision Assistant).
THE CATCH IS:
I have to do this in VB, since the machine vision has to be integrated into an existing VB application.
NOTE: attached image of pulley
Can someone please help me?
Thanks in advance
Suresh
Attachments:
pulley.png (13 KB)

Hi Suresh -
I took your image and expanded the background region to make three versions with the pulley in different positions.  Then I loaded the three images into Vision Assistant and built a script that finds the teeth of the pulley.  Although Vision Assistant can't generate coordinate systems, I used edge detection algorithms to define a placeholder where the coordinate system code should be added.
I chose to use a clamp and midpoint instead of the Pattern Matching tool because of the nature of the image.  Silhouetted images are difficult to pattern match, and the vertical line symmetry of the pulley makes it even more difficult to find a unique area of the object that can be recognized repeatedly.  Instead, I generally find more success using algorithms that are designed around edge detection.  I assumed that the "notch" at the bottom of the pulley will always be present in your images, so I performed a Clamp in the Horizontal Min Caliper to find this notch.  The midpoint of this clamp section can be used as the origin of a coordinate system around which the following Edge Detection steps are performed.  This allows you to find the teeth of the pulley no matter where it is in the image.  (Note that the VA script was built using pulley with BG 3.png.)
The Builder File that was generated from the script gives you all the code you need except for the Caliper function and the Coordinate System functions.  I added comments to these sections to show what type of code you will have to add there.
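In case it helps to see the logic outside the NI tools, here is a minimal sketch of the same idea in plain Java (not the NI Vision API - the IMAQ calls in the Builder File will look different). It assumes a thresholded silhouette image, scans one row near the bottom for the notch, takes the midpoint as the coordinate-system origin, and places the edge-detection ROI at a fixed offset from that origin. The threshold, scan row, and ROI offsets are illustrative values only:
    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;

    public class NotchOrigin {
        // Find the leftmost and rightmost dark pixels on a scan line near the
        // bottom of a thresholded silhouette and return their midpoint.
        static int findNotchMidpointX(BufferedImage img, int scanY, int threshold) {
            int left = -1, right = -1;
            for (int x = 0; x < img.getWidth(); x++) {
                int gray = img.getRGB(x, scanY) & 0xFF;  // low byte of a grayscale pixel
                if (gray < threshold) {                  // dark pixel = part of the silhouette
                    if (left < 0) left = x;
                    right = x;
                }
            }
            return (left + right) / 2;                   // midpoint = coordinate-system origin
        }

        // Express the edge-detection ROI relative to the origin so that it
        // follows the pulley wherever the part sits in the image.
        static Rectangle edgeRoi(int originX) {
            final int DX = -40, Y = 20, W = 80, H = 120; // offsets chosen for illustration
            return new Rectangle(originX + DX, Y, W, H);
        }
    }
The point is simply that the ROI is defined relative to a feature you can always find, so the edge detector no longer drifts off the part when the pulley moves.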
It may not be exactly the application you need to perform, but it should give you a solid starting point.  I hope it helps.
David Staab, CLA
Staff Systems Engineer
National Instruments
Attachments:
Pulley Teeth.zip (18 KB)

Similar Messages

  • Interlace problems with moving objects using iDVD

    I had an MP4 file (created by a third party) from a Hi-8 analog tape which has some interlace artifacts on moving images (left image of boy), but not too bad. When the MP4 file was imported into iMovie 11, the interlace artifacts smoothed somewhat - that was OK (right image of boy). The camera was still and the boy was moving. Vertical lines on stationary objects are OK in all images. These are screen captures from the Mac of the MP4 played through QuickTime and of the same file imported into iMovie 11 and played. I paused both to take the screen captures.
    The completed project in iMovie 11 looked OK when previewed prior to rendering. These are 20-year-old videos, so my expectations were being met. I rendered the project with iDVD to the hard drive first and then to a DVD, with the same poor imaging result on the moving object. I am using a new MacBook Pro I bought in early January, which came with iMovie 11 and iDVD 7.1.2 (1158), running Mac OS X 10.7.3 on a 2.3 GHz Core i7 with 8 GB 1333 MHz DDR3.
    I couldn't screen capture from the Mac DVD Player screen to illustrate the poor result (I got a checkerboard screen), so I took a photo of the screen and imported that (below). The moving boy on the left is from an iMovie 11 screen capture; the image on the right is the moving boy from the rendered DVD, which I paused on the Mac (and took a picture of).
    Below is a close-up of the poorly rendered moving boy viewed on the resultant DVD. This translates into a horrible rendition of any quickly moving object. It happens with any moving image - i.e., a pan across a room with straight vertical lines, like the edges of a wall, will show as a serrated, poorly rendered edge. I used a trial version of the Daniusoft DVD creator with the same result! I am at a loss on how to resolve this issue. Any thoughts out there?
    I had previously used Pinnacle Studio on my old XP PC, which worked great on other tapes' MPEG files and created great DVDs (never had an interlace problem) ... until my computer died ... so I figured Apple and its associated software should be at least equal, if not a superior product. Now I'm not too sure!

  • How do you track a moving object using Labview and Vision Assistant

    I am using Vision and LabVIEW to create a program that tracks and follows a moving object using a high-end camera. Basically, it detects a foreign object, locks on to it, and follows it wherever it goes in a room of controlled size.
    I have no idea how to do this. Please help. Or is there an available example.
    Thanks.

    Hello,
    It sounds like you want to look into a Vision technique called Pattern Matching.  Using our Vision tools, you can look for an image, called a template, within another image.  Vision will scan over the entire image of interest to see if there are any matches with the template.  It will return the number of matches and their coordinates within the image of interest.  You would take a picture of the object and use it as the template to search for.  Then, take a picture of the entire room and use pattern matching to determine at what coordinates that template is found in the picture.  Doing this multiple times, you can track the movement of the object as it moves throughout the room.  If you have a motion system that has to move the camera for you, it will complicate matters very much, but it would still be possible.  You would need a feedback loop that, depending on where the object is located, adjusts the angle of the camera appropriately.
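    As a toy illustration of the idea (this is not how the NI Vision matching is actually implemented - IMAQ uses much smarter, optimized search strategies), a brute-force sum-of-squared-differences template search in plain Java looks like this:
    import java.awt.Point;
    import java.awt.image.BufferedImage;

    public class TemplateSearch {
        // Slide the template over the scene and return the offset with the
        // lowest sum of squared grayscale differences (the best match).
        static Point bestMatch(BufferedImage scene, BufferedImage tmpl) {
            Point best = new Point();
            long bestScore = Long.MAX_VALUE;
            for (int y = 0; y <= scene.getHeight() - tmpl.getHeight(); y++) {
                for (int x = 0; x <= scene.getWidth() - tmpl.getWidth(); x++) {
                    long score = 0;
                    for (int ty = 0; ty < tmpl.getHeight(); ty++) {
                        for (int tx = 0; tx < tmpl.getWidth(); tx++) {
                            int a = scene.getRGB(x + tx, y + ty) & 0xFF;
                            int b = tmpl.getRGB(tx, ty) & 0xFF;
                            score += (long) (a - b) * (a - b);
                        }
                    }
                    if (score < bestScore) { bestScore = score; best.setLocation(x, y); }
                }
            }
            return best;  // top-left corner of the best match in the scene
        }
    }
    Running a search like this once per frame gives you the object's coordinates over time, which is exactly the tracking loop described above.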
    There are a number of different examples that perform pattern matching.  Three are available in the Example Finder.  In LabVIEW, navigate to "Help » Find Examples".  On the "Browse" tab, browse according to "Directory Structure".  Navigate to "Vision » 2. Functions".  There are examples for "Pattern Matching", "Color Pattern Matching", and "Geometric Matching".  There are also dozens of pattern matching documents and example programs on our website.  From the homepage at www.ni.com, you can use the search box in the top-right corner to search the entire site for the keywords "pattern matching".
    If you have Vision Assistant, you can use this to set up the pattern matching sequence.  When it is complete and customized to your liking, you can convert that into LabVIEW code by navigating to "Tools » Create LabVIEW VI..."  This is probably the easiest way to customize any type of vision application in general.
    I hope this helps you get started.  Take care and good luck!
    Regards,
    Aaron B.
    Applications Engineering
    National Instruments

  • How do I pull up the left control panel that allows you to move from text to moving objects on page?

    How do I pull up the left control panel that allows you to move from text to moving objects on page?

    Do you mean this one:
    If so, go to the Window menu and make sure that Tools is checked.

  • Positioning moving objects

    I'm getting ready to make a 2D game and wasn't sure how to draw moving objects on the screen. I've played around in the past and always had a fixed-size window, basically moving objects around on a pixel grid. I want to have a resizable window but keep the relative size and movements of objects the same. How is this usually done? Should I just render to the largest possible window size and then transform to the appropriate size? Thanks in advance!

    jgould wrote:
    I'm getting ready to make a 2D game and wasn't sure how to draw moving objects on the screen. I've played around in the past and always had a fixed-size window, basically moving objects around on a pixel grid. I want to have a resizable window but keep the relative size and movements of objects the same. How is this usually done? Should I just render to the largest possible window size and then transform to the appropriate size? Thanks in advance!
    Not sure what that means. Basically you have a couple of options:
    1- Store locations by ratios, not exact coordinates. So if you want a GameObject's location to be in the direct middle of the screen, its position would be (.5, .5). Then when you go to draw that GameObject, you have to convert that position to screen coordinates. You do this simply by multiplying the ratio-coordinates by the game window's width or height.
    2- Store locations by some model coordinates. This is similar to the first method, but requires some extra translation. You would store the GameObject's location via coordinates such as (50, 50), but you would also have to keep track of how large your game window model was (for simplicity's sake, let's say it is 200 by 100):
    drawPoint( (gameObject.getX() / 200.0) * gamePanel.getWidth(), (gameObject.getY() / 100.0) * gamePanel.getHeight() );  // divide by doubles so integer division doesn't truncate the ratio
    Does that make any more sense? (See the sketch after this reply.)
    Edit- I should say that there probably are a ton of other ways to do it, these are just the ways that most easily fit into my head. It's all just algebra though.
    Edited by: kevinaworkman on Nov 13, 2009 2:54 PM
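    For what it's worth, here is a self-contained sketch of option 1 (ratio coordinates) in Java; the class and member names are made up for illustration:
    import java.awt.Graphics2D;
    import javax.swing.JPanel;

    // Positions stored as fractions of the window size and converted to
    // pixels at draw time, so resizing the window "just works".
    class RatioSprite {
        double rx, ry;  // e.g. (0.5, 0.5) is the center of the panel
        RatioSprite(double rx, double ry) { this.rx = rx; this.ry = ry; }

        void draw(Graphics2D g, JPanel panel) {
            int px = (int) (rx * panel.getWidth());   // ratio -> pixel x
            int py = (int) (ry * panel.getHeight());  // ratio -> pixel y
            g.fillOval(px - 5, py - 5, 10, 10);       // fixed 10-px dot at the scaled position
        }
    }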

  • How do you filter out moving objects in a movie?

    Hello!
    I've searched the internet for solutions to this but I can't find any.
    I'm pretty new, and I need to make a house look like it's on fire while two guys (one is me)
    will be walking away from it, towards the camera.
    I want to filter out these two guys and only apply fire and flames to the house itself!
    To do this I used Time Difference and tried to isolate the guys using Fast Blur and Luma Key,
    but I'm pretty lost. Nothing works and I really need guidance.
    Do you know a trick to use?
    I can't do greenscreening as the green background would cost a fortune..
    But isn't there any way to filter out moving objects?

    I'm not sure that you totally get the process. This kind of shot requires at least 3 elements or plates. These three plates become three layers in After Effects. The bottom layer would be the Background Plate which is a shot of the house without any actors in the shot. The next plate would be the Effect Plate or Fire Plate which could be any shot you can get or generate through plug-ins of flames. The top layer would be your Foreground Plate or your two actors.
    Now that you know what you need, you can plan your shoot. The background plate is easy: just set the camera on a tripod and lock it off. No zooming, no panning, no change of focus. The foreground plate is also easy. You don't have to green screen the entire set; all you need to do is put a green screen behind your actors. I do this kind of thing all the time using a 6' x 6' (sometimes smaller) green screen (or blue or red or purple) made of fabric and stretched over a frame that I made from 1/2" EMT electrical conduit and four 90° elbow connectors for less than $20 at Home Depot or Lowe's. You get a couple of assistants to walk behind your actors holding the green screen behind them. If you need to include their feet, they can be easily rotoscoped out later, and you can garbage matte everything else. If it's impossible to green screen the shot, then you are stuck with roto, but that's much easier in CS5 using the Roto Brush. If you use your shot with the actors as the background plate, roto is even easier, because you can hide a multitude of sins by simply feathering the roto and letting the fire light wrap around the actors a bit.
    The fire plate can be purchased footage, or you could shoot a real fire against a black background, or you could create the flames using AE or even a 3D app like Blender (free) or a bunch of other apps.
    Making the shot look real requires a few tricks once you have your clean plates. Fire creates light so you've got to add lighting effects to the background plate to sell the fire. You've also got to wrap light around your actors. Look for tutorials on light wrap for this. You can really help sell the effect if you place some flickering light sources behind and to the side of your actors when you shoot the foreground (actors) plate.
    Once you get a handle on these techniques, it's fairly easy to make the transition to pulling off this kind of shot with a handheld camera (now you have to learn motion tracking). On a project I recently completed, we had to put a fire in a prop fireplace while actors moved around bringing milk and cookies for Santa. It was all roto, and the fire was shot at night with nothing more than a black cloth (Duvetyne) behind the fireplace grate. The final composite was 10 layers and was so totally believable that no one questions the shot.
    The layers were, from top to bottom: reflection in grandma's glasses, light wrap, actors (roto of the original background plate), fire flicker layer, color mask layer (more orange around the fire), fire glow layer, fire layer in Add mode, second fire layer in Screen mode, third fire layer luma-keyed, and finally the original plate, which contained all the actors, carefully lit with flickering firelight simulated by dangling a stick with fabric cut into half-inch-wide strips about one foot long in front of an orange-gelled light behind and to the right of the actors.
    If I get a chance I'll post a screenshot.
    Here you go:
    Light Wrap:
    The Movie
    Part of the original render before color grading.
    Enjoy

  • Blur moving object in AE CS6

    Hi:
    I need to blur a moving object:
    - The effect is only needed at some point of the video, it must appear, and then disappear, not being on screen the whole time.
    - The object is moving, and changing shape, the blur must adapt to it.
    - I've been trying to use the "Track Motion" option with Position & Rotation and then the "Analyze Forward" button, but it makes a mess: it doesn't follow the object at all. If you try to adjust it manually, the adjustment applies to only one frame; in the other frames the tracker is off the object, and the tracking overlay stays on screen.
    Thanks in advance.

    Are you saying that you have something in some video that you have shot that you want blurred out? If so, you could create an adjustment layer with a mask on it over the object you want to blur and then just animate the mask over time. Now, you could use motion tracking to help that mask follow the object, but it sounds like you are having trouble with getting it to work properly. Try the tips here for doing motion tracking correctly. Alternatively, you could follow the tips on rotoscoping to make the mask animation go more quickly. Also, if you have After Effects version 12 (also known as AE CC), you can use the new mask tracker feature.

  • Lines on moving objects

    When I export DV clips from FCE, I get lines appearing on moving objects (HDTV). When I export the same clip using iMovie 08, it all looks smooth.
    I've tried the de-interlace, it made it worse.
    How can I get rid of the lines in FCE?

    shuggyboy1 wrote:
    ... What is the _best setting to export_ from imovie HD 6 to idvd to prevent this yet still maintain DV quality?
    ... not to export at all, it's that simple ...
    the zillions of Export options could cause trouble.
    simply, store your iM projects in the 'Movies' folder of your Mac ...
    iDVD will 'find' them automatically and will handle the import itself...
    Plan B)
    drag'n drop the whole project from Finder to iDVD.. again: no export involved, no trouble..

  • Use of edge detection in pattern matching algorithm?

    Hello all,
    I work for a group at Texas A&M University researching two-phase flow in reactors.  We have been using IMAQ Vision and had a question regarding the use of edge detection in the pattern matching algorithm.  I had seen the webcast entitled "Algorithms that Learn: The Sum and Substance of Pattern Matching and OCR" (http://zone.ni.com/wv/app/doc/p/id/wv-705), and in the webcast it was mentioned that the pattern matching algorithm uses edge detection to (as best I can tell) reduce the candidate list further and to perform subpixel location calculations.  However, I was wondering if this edge detection process is still performed if we do not use the subpixel location calculation (i.e., if we uncheck the "Subpixel Accuracy" check box)?  Also, if edge detection is performed in the pattern matching algorithm, is it consistent with the method described in Chapter 13 of the Vision Concepts Manual ("Geometric Matching")?  Finally, if edge detection is performed in a manner consistent with Chapter 13 of the manual, how does the geometric matching correlation number affect the correlation calculation that was performed in the previous steps?  Are they simply multiplied together?
    Many thanks!
      -Aaron

    Jeff,
    We are using Imaq Vision Builder 4, with the included pattern matching that can be accessed via the menus (i.e. we haven't created a custom VI or anything.)  We are using the software to locate bubbles during boiling experiments and want a deeper understanding of what is going on "behind the scenes" of the algorithm, as we may have to explain how it works later.  We have been able to determine most of what we need from the webcast I had previously mentioned, except for the use of edge detection in the pattern matching algorithm.
    At the scales involved in our experiments, subpixel accuracy is really not needed and therefore we do not use it.  If edge detection is used in the pattern matching algorithm only to determine location with subpixel accuracy, then we do not really need to know how it works because we do not use that calculation.  Inversely, of course, if edge detection is used during pattern matching even without enabling subpixel accuracy, then we would like to have a fairly good understanding of the process.
    I've read most of the section on geometric matching in the Vision Concepts Manual and wondered if the process described there for edge detection (or feature matching) was also used in the basic pattern matching algorithm?
    To summarize, if edge detection is not used in the basic pattern matching algorithm without subpixel accuracy, then that is all I need to know.  If edge detection is used for pattern matching even without using the subpixel accuracy calculation, then we would like to learn more about how exactly it is used in the pattern matching algorithm.
    We would really appreciate any help you could give us... we've been digging around on the NI website for a couple of weeks now trying to fit together all the pieces of the pattern matching puzzle.
    Many thanks!
        Aaron

  • When I render and Export there are these little lines around moving objects

    When I render and Export there are these little lines around moving objects, what are they and how do I fix it?

    The lines are caused by the video being interlaced.
    If you export to DVD they should no longer appear. However, if they don't disappear you can use the Deinterlace filter on the footage whilst in FCE.
    How are you exporting ..... QT Movie or QT Conversion ....... and what are you exporting to - the internet or DVD?

  • Video titling behind moving objects

    I recently saw a titling effect in a video in which a motorcycle drove across the screen, and as it passed, a title appeared behind it. How does one create video titling behind moving objects? I have no experience in animation, so I'm guessing that the video layer is duplicated and the top layer masked, frame by frame? Would an 8-point garbage matte work?

    You won't have an easy time doing this in FCP, but Motion, After Effects, or ultimately Shake, can do this a lot easier. You have to rotoscope the motorcycle, accounting for the rims, and space between the engine parts, or anywhere else where you would logically see through to the background, then duplicate that layer on top of the original, with the text in between the two layers. Depending on how long the clip is, you're in for a long haul. But if the footage is anything but DV25, clip analysis and automated keyframing might serve you well as a starting point after the initial mask is made.

  • Explanation of Edge Detection of Digital Images

    Can anyone suggest the best links for a complete explanation of doing edge detection using Java?

    http://en.wikipedia.org/wiki/Edge_detection
    http://forum.java.sun.com/post!reply.jspa?messageID=4371954
    If you have specific questions regarding implementing what you learn we will try to help you with those.
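    To give a flavor of what an implementation involves, here is a minimal Sobel gradient-magnitude sketch in plain Java (an illustrative starting point only, not code taken from either link):
    import java.awt.image.BufferedImage;

    public class Sobel {
        // Gradient magnitude via the Sobel operator: convolve with the
        // horizontal and vertical 3x3 kernels and combine the responses.
        static BufferedImage edges(BufferedImage src) {
            int w = src.getWidth(), h = src.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int y = 1; y < h - 1; y++) {
                for (int x = 1; x < w - 1; x++) {
                    int gx = 0, gy = 0;
                    for (int ky = -1; ky <= 1; ky++) {
                        for (int kx = -1; kx <= 1; kx++) {
                            int p = src.getRGB(x + kx, y + ky) & 0xFF;  // gray value
                            gx += p * kx * (2 - Math.abs(ky));          // Sobel x kernel weights
                            gy += p * ky * (2 - Math.abs(kx));          // Sobel y kernel weights
                        }
                    }
                    int mag = Math.min(255, (int) Math.hypot(gx, gy));
                    out.setRGB(x, y, (mag << 16) | (mag << 8) | mag);   // gray output pixel
                }
            }
            return out;
        }
    }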
    regards

  • Highlight a moving object

    Using AE CS5, how do I highlight a moving object in a scene?  I want an ellipse shape of normal brightness highlighting the moving object.  I want the rest of the scene to be around 60% brightness.
    Thanks

    These videos show some approaches to that kind of result:
    http://tv.adobe.com/watch/learn-by-video/adobe-after-effects-cs5-motion-tracking-and-rotoscoping/
    http://www.video2brain.com/en/videos-1458.htm
    Start here to learn After Effects: http://adobe.ly/bjBT3P

  • Edge detection using IMAQ Find Edge/IMAQ Edge Tool 3

    Hi,
    I have several images with useless background around a rectangular ROI (coordinates unknown!). So I tried using the two VIs mentioned above in order to detect these edges so that I can remove them. Regretfully, this does not work as planned.
    IMAQ Find Edge usually finds an edge, but not where it should be: the edge is detected earlier than I want it to be.
    IMAQ Edge Tool 3 sometimes does not find an edge at all, sometimes it finds the edge perfectly. Here I use the 'get best edge' option, which delivers the best results with all the images I tested it with.
    All the other options are also not changed while running the VI with the images I have.
    Does anyone have intimate knowledge of these VIs' algorithms, how they work, how they can be manipulated, ... ?

    Hi,
    Can you upload an example image?
    That would clarify what you're trying to do.
    Most of the time a change of mindset solves the problem.
    Kind regards,
    - Bjorn -
    Have fun using LabVIEW... and if you like my answer, please pay me back in Kudo's
    LabVIEW 5.1 - LabVIEW 2012

  • Edge Detection

    I'm about to start working on an application that receives images from a wireless webcam (attached to a roving robot) and needs to process them to decide where the robot should travel to next. The processing needs to identify obstacles, walls, and doorways and guide the robot through a door into another room.
    I know I'm going to need to use edge detection on the images, but don't know the best place to start. Any ideas? Also, if anyone has any experience with this, any idea what kind of performance I should expect? Given the limitations of the robot's movement, I will not need to process more than 10 frames per second. Assuming decent processing power, is 10 fps doable? 5?
    Thanks in advance...

    Edge detection is basically a convolution operation. An image is simply a matrix of pixels, and this matrix may be convolved with another small matrix of values called a kernel.
    // Imports needed at the top of the file:
    import java.awt.Canvas;
    import java.awt.Graphics2D;
    import java.awt.Image;
    import java.awt.MediaTracker;
    import java.awt.image.BufferedImage;
    import java.awt.image.ConvolveOp;
    import java.awt.image.Kernel;
    // Loading Image
    String imageName = "image.jpg";
    Canvas c = new Canvas();
    Image image = c.getToolkit().getImage(imageName);
    MediaTracker waitForIt = new MediaTracker(c);
    waitForIt.addImage(image, 1);
    try { waitForIt.waitForID(1); }
    catch (InterruptedException ie) { }
    // Buffering Image
    BufferedImage src = new BufferedImage(
        image.getWidth(c), image.getHeight(c),
        BufferedImage.TYPE_INT_RGB);
    Graphics2D big = src.createGraphics();  // draw into src ('bi' was undefined in the original)
    big.drawImage(image, 0, 0, c);
    big.dispose();
    // Edge Detection: a 3x3 Laplacian kernel, which responds strongly
    // where intensity changes sharply, i.e. at edges
    float[] values = {
        0f, -1f, 0f,
       -1f, 4f, -1f,
        0f, -1f, 0f
    };
    Kernel k = new Kernel(3, 3, values);
    ConvolveOp cop = new ConvolveOp(k);     // 'kernel' was undefined; the Kernel object is 'k'
    BufferedImage dest = cop.filter(src, null);
    Play around with the values in the kernel to get a better idea of how this all works.
