Edge Detection & Edge Coordinates

I need to do edge detection and get the edge coordinates from that operation. It's been a while since I've done this kind of thing, so I'm wondering whether JAI already has this? If not, what should I look for in Java 2D (although at that point I'm writing it from scratch, right?)
I'm not asking for homework here, but I am very busy and would like to be pointed in the right direction quickly. Thus, the 10 dukes as compensation. :-)

anybody?
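For what it's worth, plain Java 2D is enough for a basic gradient edge detector before reaching for JAI. Below is a minimal sketch (the class name, the threshold value, and the synthetic test image are all illustrative, not from any library): it computes a Sobel gradient magnitude per pixel and returns the coordinates that exceed the threshold.

```java
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class SobelEdges {

    /** Returns the coordinates of pixels whose Sobel gradient magnitude exceeds the threshold. */
    public static List<Point> edgeCoordinates(BufferedImage img, int threshold) {
        int w = img.getWidth(), h = img.getHeight();
        int[][] grey = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                // quick luminance approximation: average of R, G, B
                grey[y][x] = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
            }
        List<Point> edges = new ArrayList<>();
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                // Sobel kernels for the horizontal and vertical gradients
                int gx = grey[y-1][x+1] + 2*grey[y][x+1] + grey[y+1][x+1]
                       - grey[y-1][x-1] - 2*grey[y][x-1] - grey[y+1][x-1];
                int gy = grey[y+1][x-1] + 2*grey[y+1][x] + grey[y+1][x+1]
                       - grey[y-1][x-1] - 2*grey[y-1][x] - grey[y-1][x+1];
                if (Math.abs(gx) + Math.abs(gy) > threshold) edges.add(new Point(x, y));
            }
        return edges;
    }

    public static void main(String[] args) {
        // synthetic test image: left half black, right half white
        BufferedImage img = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 8; y++)
            for (int x = 4; x < 8; x++) img.setRGB(x, y, 0xFFFFFF);
        System.out.println(edgeCoordinates(img, 200).size() + " edge pixels found");
    }
}
```

JAI does ship convolution operators that can apply the same kernels, but a hand-rolled loop like this keeps the coordinate collection explicit, which is what the question asks for.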

Similar Messages

  • I need to manipulate only X values of Edge coordinates (IMAQ Find Vertical Edge VI)

    In IMAQ Find Vertical Edge VI, Edge Coordinates is an array of clusters, but I need to manipulate only the X values. How can I extract those values so I can use them afterwards?

    Thanks for your answer... I began working on it yesterday afternoon, but I haven't finished it yet! Isn't there another way of doing it? My problem is that I don't know the exact number of points used by the edge coordinates, and before unbundling the cluster I have to transform my array of clusters into a cluster, so I have to give it a fixed size... it's not difficult work, but it's quite long...

  • How to keep the first edge coordinate as reference?

    Hi..
    My program is to detect movement of an object using a webcam and Labview 8.5.1.
    I used an IMAQ RAKE with 'top to bottom' scan direction to find the first edge of my object and I want to keep the coordinate of this first edge (X1,Y1). Then when the object moves, the IMAQ RAKE will find the first edge with same scan direction and the coordinates (Xi,Yi) are needed to calculate the distance of the movement (X1-Xi,Y1-Yi). And this calculation is running in real-time.
    My problem is: when the IMAQ RAKE detects the very first edge (X1,Y1), how can I store this as a reference coordinate to use in the calculation? Because in my current program the coordinate always follows the current first-edge detection, so the distance is always 0.
    Many thanks.
    Message Edited by Amila on 02-25-2009 05:43 PM

    Hi Amila,
    I assume that you are using the output of the function 'first edges'.  Depending on your architecture there are different ways to go about doing this.  From your problem I assume that you are using a loop with the function on the inside, meaning that on the second iteration the result of the first is being overwritten.  My suggestion would be to use a shift register so that the result from the first iteration can be fed back around into the second.
    If you need any help in implementing this please don't hesitate to ask.
    Regards,
    Thomas Clark
    Applications Engineer
    National Instruments UK & Ireland

  • Find object's edge coordinates in After Effects

    Hello,
    first of all, I'm sorry if the answer to my question is already posted somewhere. I'm a bit confused that I couldn't find one on the web already; I don't know whether I'm searching for the wrong terms or whether my question is really that unusual.
    Anyway, my question is: how can I get the coordinates of all four edges from objects (images) in a three dimensional composing in After Effects? And also, is there a way to see the actual coordinates of the camera that I have moved by several parented objects?
    In case this might be of interest, here's a more detailed description of my situation: I have made a little three-dimensional composition in After Effects with scaled and rotated images and a moving camera. I want to show the composition within a tablet app. My first plan was to render it and put the movie into the app, but now my programmer and I think it might be better to generate the composition directly within the app. So my programmer needs the said coordinates to rebuild the composition in his code (as far as I know, he uses Xcode and writes the app in Objective-C).
    Hope you guys can help me …
    Cheers!
    Theo
    (I use After Effects CC on Mac OS X 10.9.)

    Edges/lines are determined by the coordinates of their end points: basic planar geometry. You can get those by using the toWorld() method and filling in the corner-point coordinates relative to the layer anchor point, e.g.:
    TopRight2D = [50, -50];
    TopRight3D = thisLayer.toWorld(TopRight2D);
    Easy as Pi (or pie if you prefer).
    Mylenium

  • Edge detection between the finiteelement mesh and a bounding box

    I have a finite element mesh, and I constructed a bounding box which passes through the mesh. Now I want to detect the edges of the rectangle that intersect the mesh and construct a new mesh which resides inside the rectangular box. How do I do this? Any kind of help is appreciated.
    P /------------------\ Q
    A |---- |/---|-----|----|-----|--\--| B
    |----/|----|-----|----|-----|---\-|
    |---/-|----|-----|----|-----|----\|
    |--/--|----|-----|----|-----|---- |\
    D |-/---|----|-----|----|-----|---- | \ C
    S /-------------------------------------\ R
    Suppose ABCD is the FE mesh and PQRS is a bounding box. Consider the edge PS: where it intersects the elements of the mesh, I will get a pentagonal element when I form the new mesh containing the elements that are inside the box. The graphical representation is not clear; if you draw it on paper you can easily understand my problem.
    Now, how do I create a new mesh from the elements that reside inside the box, in which everything left of PS is discarded and everything right of the QR edge is discarded? The resulting mesh will have a somewhat zigzag shape, I think. Please help me out with how to solve this problem.

    I think the figure is not clear, so I want to simplify the problem by considering only one element of the mesh and only one edge of the box. Suppose ABCD is a 3D quadrilateral element and PQ is the edge passing through the element. I will pass a function the coordinates of ABCD, the coordinate of P (the start point of the edge), and its normal vector; if there is an intersection, the function should return two elements according to the intersection. So we have element ABCD and an edge through P with its normal vector, and the edge intersects the element at X and Y. The algorithm should return AXYD as one element and XBCY as the second element. Please help me out in solving this task. Thanks in advance.

    I'll assume you mean a 2D, planar, linear quad. I think the general 3D problem would entail surfaces and be much more difficult. If you use higher-order shape functions, the 2D problem isn't trivial either, because then you might have more than two intersections.
    It COULD return two points of intersection, but you'll have to be careful about special cases:
    (1) The edge could intersect just one of the four corners,
    (2) The edge could coincide with an edge of the element,
    (3) The edge could miss the element entirely and not intersect.
    You'll have to decide what to do in the degenerate cases. The sensible thing would be to do nothing and leave the element alone.
    You'll also be concerned about mesh quality when you're done. Near degenerate elements won't perform well when you analyze them. Will you try to correct these after the fact?
    What if you end up with triangular elements? An edge might slice off a triangle and leave you with a five-noded element. What will you do with those? Create two triangles to get back to a quad? It's not so easy. You really should look into quadtree meshing. There are more general ways to attack a problem like this.
    What kind of physics are you ultimately trying to capture? What are you really doing here?
    You've described the algorithm fairly well, even if you haven't thought it all the way through.
    What's the problem? You'll have to write the code. No one will do that for you. Write some and come back with specific questions if you have difficulties. This is a Java forum, after all.
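    Since this is a Java forum: the core edge-vs-element test the reply describes boils down to segment-segment intersection. A toy sketch follows (class and method names are made up for illustration; of the degenerate cases listed above, only parallel/collinear segments are rejected, the rest are left to the reader):

    ```java
    import java.util.Optional;

    public class SegmentIntersect {

        /** A 2D point with double coordinates. */
        public static class Pt {
            public final double x, y;
            public Pt(double x, double y) { this.x = x; this.y = y; }
        }

        /**
         * Parametric intersection of segments p1-p2 and p3-p4.
         * Solves p1 + t*(p2-p1) = p3 + u*(p4-p3); returns empty for parallel
         * (or collinear) segments and for crossings outside either segment.
         */
        public static Optional<Pt> intersect(Pt p1, Pt p2, Pt p3, Pt p4) {
            double d1x = p2.x - p1.x, d1y = p2.y - p1.y;
            double d2x = p4.x - p3.x, d2y = p4.y - p3.y;
            double denom = d1x * d2y - d1y * d2x; // cross product of the two directions
            if (Math.abs(denom) < 1e-12) return Optional.empty(); // parallel or collinear
            double t = ((p3.x - p1.x) * d2y - (p3.y - p1.y) * d2x) / denom; // position along segment 1
            double u = ((p3.x - p1.x) * d1y - (p3.y - p1.y) * d1x) / denom; // position along segment 2
            if (t < 0 || t > 1 || u < 0 || u > 1) return Optional.empty();
            return Optional.of(new Pt(p1.x + t * d1x, p1.y + t * d1y));
        }

        public static void main(String[] args) {
            // the two diagonals of a square cross in the middle
            Pt hit = intersect(new Pt(0, 0), new Pt(2, 2), new Pt(0, 2), new Pt(2, 0)).get();
            System.out.println("crossing at " + hit.x + "," + hit.y);
        }
    }
    ```

    Run this against each element edge in turn; splitting AXYD/XBCY out of the two hit points is then bookkeeping over the element's corner order.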

  • How to Find the coordinates of blister

    hi
      I want to find the edge coordinates of the blister in the attached image.
      Please tell me whether it can be calculated.
    Regards
    Abhishek Verma
    Attachments:
    13.jpg ‏2305 KB

    Hi Abhishek Verma,
    please save your picture as a real jpg next time. Do you have Vision? Have you already started with something?
    Mike

  • Finding shape segments

    Hello
    I am currently working on a project called Jigsaw Solver, where I load a JPEG picture of all the pieces of a jigsaw puzzle in an unmade state, laid out on a plain white background. The image is analysed to isolate the image of each piece from the whole image.
    At the moment I have made a simple shape as my test image, saved as a JPEG. I will only be dealing with black and white for now, so my image is black on a white background. I have written an edge-detection algorithm, posted below. It detects the edge of a shape iteratively and saves the edge coordinates in an ArrayList called boundary.
    I am successful at finding the edge of a shape.
    My current problem is that I need to find the coordinates within my shape's edge boundary. I need to find the inner coordinates as segments and store the segments in another ArrayList. I'm not sure how to retrieve the coordinates as segments.
    Any help would be greatly appreciated :) many many thanks
    I have posted my code below:
    * FindEdge.java
    package jigsaw;
    import java.awt.image.*;
    import java.util.*;
    import java.io.*;
    import javax.swing.*;
    import javax.imageio.*;
    import java.awt.*;
    public class FindEdge {

        public void detectEdge(JigSawPanel jp) { // does the walking around the edge
            BufferedImage img = jp.getImage();
            int width = img.getWidth();
            int height = img.getHeight();
            int fromDir = 0;
            ArrayList<Point> boundary = new ArrayList<Point>();
            for (int x = 0; x < width; x++) {
                for (int y = 0; y < height; y++) {
                    int pixColour = img.getRGB(x, y);
                    Color rgbColour = new Color(pixColour);
                    int val = getRGBValue(rgbColour);
                    // 80 is the threshold separating black from white pixels: the
                    // summed RGB value is 765 for white and 0 for black, and the
                    // margin filters out greys too.
                    if (val < 80) {
                        // found a black pixel: walk around the shape from here
                        Point current = new Point(x, y);
                        Point startPoint = new Point(current); // to check when the walk returns to its start
                        System.out.println("start point : " + startPoint);
                        Point nextPos = new Point();
                        do {
                            int toDir;
                            for (toDir = 0; toDir < 6; toDir++) {
                                step(fromDir, current, toDir, nextPos);
                                int nxtPix = img.getRGB(nextPos.x, nextPos.y); // RGB value of the neighbouring pixel
                                Color nxtCol = new Color(nxtPix);
                                int value = getRGBValue(nxtCol);
                                if (value < 80) { // the neighbouring pixel is black
                                    break;
                                }
                            }
                            if (toDir == 6) { // no black neighbour found: give up
                                System.exit(1);
                            }
                            System.out.println("from: " + fromDir + " current: " + current
                                    + " to dir: " + toDir + " nextPos: " + nextPos);
                            boundary.add(new Point(current)); // record this edge pixel's coordinates
                            fromDir = (fromDir + toDir + 6) % 8;
                            current.x = nextPos.x;
                            current.y = nextPos.y;
                        } while (!startPoint.equals(current));
                        System.out.println("end of scan: found " + boundary.size() + " points on the boundary");
                        for (Point pt : boundary) {
                            // debugging: turn each boundary pixel red to check the detected edge
                            img.setRGB(pt.x, pt.y, Color.red.getRGB());
                        }
                        jp.repaint();
                        return; // the original break only left the inner loop; return stops the whole scan
                    }
                }
            }
        }
        // The original posting spelled this method out as nested switch statements,
        // one case per (fromDirection, toDirection) pair.  All of those cases reduce
        // to the same rule: move to neighbour (fromDirection + toDirection) % 8 in
        // the 8-neighbourhood below (0 = up, proceeding clockwise in screen coords).
        private static final int[] DX = { 0, 1, 1, 1, 0, -1, -1, -1 };
        private static final int[] DY = { -1, -1, 0, 1, 1, 1, 0, -1 };

        public void step(int fromDirection, Point currentPoint, int toDirection, Point nextPoint) {
            int dir = (fromDirection + toDirection) % 8;
            nextPoint.x = currentPoint.x + DX[dir];
            nextPoint.y = currentPoint.y + DY[dir];
        }
        public boolean checkEnd(int x, int y, Point startPoint) {
            return x == startPoint.x && y + 1 == startPoint.y;
        }
        public int getRGBValue(Color colPix) { // sum of the red, green and blue values of a pixel
            return colPix.getRed() + colPix.getGreen() + colPix.getBlue();
        }
    }
    import java.awt.image.*;
    import java.util.*;
    import java.io.*;
    import javax.swing.*;
    import javax.imageio.*;
    import java.awt.*;
    public class JigSawPanel extends JPanel {

        private BufferedImage image;

        /** Creates a new instance of JigSawPanel */
        public JigSawPanel() {
            // file location
            String imgFile = "/home/aps/projectStudents/Nikita/sample.jpg";
            // try to load the file into a buffered image
            try {
                image = ImageIO.read(new File(imgFile));
                setPreferredSize(new Dimension(image.getWidth(), image.getHeight()));
            } catch (IllegalArgumentException ie) { // file does not exist or is corrupt
                System.exit(1);
            } catch (IOException ie) {
                System.exit(1);
            }
        }

        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.drawImage(image, 0, 0, this);
        }

        public BufferedImage getImage() {
            return image;
        }
    }
    import java.util.*;
    import java.io.*;
    import javax.swing.*;
    import javax.imageio.*;
    import java.awt.*;
    public class JigSawFrame extends JFrame {

        /** Creates a new instance of JigSawFrame */
        public JigSawFrame() {
            JigSawPanel jp = new JigSawPanel();
            FindEdge fe = new FindEdge();
            int x = jp.getImage().getWidth(this);  // image width from the jigsaw panel
            int y = jp.getImage().getHeight(this); // image height from the jigsaw panel
            jp.setPreferredSize(new Dimension(x, y)); // set the panel size
            Container content = getContentPane();
            content.add(jp, BorderLayout.CENTER);
            pack();
            setVisible(true); // also replaces the deprecated show() call
            fe.detectEdge(jp);
        }

        public static void main(String[] args) {
            JFrame frame = new JigSawFrame();
        }
    }

    hey,
    The `package jigsaw;` declaration doesn't make sense here on its own; nothing in the code you posted uses it. What is the jigsaw package supposed to do for your code or programme?
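    On the actual question of retrieving the inner coordinates as segments: one common approach is a row-by-row scan that records each horizontal run of shape pixels. A sketch only; the Segment class and names here are made up for illustration, and it assumes you can build a boolean mask with the same threshold test used in detectEdge (or by filling inside the boundary first):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class RowSegments {

        /** One horizontal run of interior pixels on a single row. */
        public static class Segment {
            public final int y, xStart, xEnd; // inclusive bounds
            Segment(int y, int xStart, int xEnd) { this.y = y; this.xStart = xStart; this.xEnd = xEnd; }
            @Override public String toString() { return "y=" + y + " [" + xStart + ".." + xEnd + "]"; }
        }

        /** Scans a binary mask row by row and collects each run of true pixels as a segment. */
        public static List<Segment> findSegments(boolean[][] mask) {
            List<Segment> segments = new ArrayList<>();
            for (int y = 0; y < mask.length; y++) {
                int runStart = -1;
                for (int x = 0; x <= mask[y].length; x++) {   // one past the end flushes the last run
                    boolean inside = x < mask[y].length && mask[y][x];
                    if (inside && runStart < 0) {
                        runStart = x;                          // a run begins
                    } else if (!inside && runStart >= 0) {
                        segments.add(new Segment(y, runStart, x - 1)); // a run ends
                        runStart = -1;
                    }
                }
            }
            return segments;
        }

        public static void main(String[] args) {
            boolean[][] mask = {
                { false, true, true, false },
                { true, true, false, true },
            };
            System.out.println(findSegments(mask)); // two rows, three runs in total
        }
    }
    ```

    Each Segment is one "inner segment" of the shape on that scanline; storing them in another ArrayList matches what the post asks for.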

  • Max Table Height/Width

    Working with a CMS that pulls an image into the site, specifically into a nested table. Unless the image is sized perfectly when added, it distorts the site layout by pushing the table height or width out of its intended size. Is there a line of code that will either resize the image being brought in, or that can lock a specific height and width to a table?
    Thanks!

    Hi GG,
    Thanks for that info, it helps :)
    What I'm trying to do is make a toolbar for the top of the screen. I'm using Zinc to make the projector window transparent, which should clear up some questions you might already have.
    To do this, I would prefer to simply alter Stage.width to be the resolution of the primary display. Flash doesn't support this. Zinc, however, supports resizing Flash by force, and it does what it says, but the coordinate system is a little gimpy when Zinc does it.
    In Zinc, the left and top edge coordinates of a resized document go into the negatives. For example, a 100x100 document in Flash has an X/Y range of 0-100 on both axes. If I resize it to 200x200 in Zinc, the coordinates go from -50 to 150 instead of 0 to 200.
    Instead of this, I wanted to use Flash's nature against itself: it auto-crops a document to screen resolution. So if I make a header bar larger than or equal to the largest typical standard width (1920), dual monitors aside, Flash will auto-crop it for me, as long as I have noScale mode on.
    I have it working this way: I detect how much I've been cropped and then adjust the horizontal coordinates of things. It lines up perfectly. However, I don't really like this approach; at some point in the future Flash may work differently, and I'd like to account for that now.
    Anyone (especially Zinc users) have any idea how to achieve this otherwise? Zinc lets me keep my header bar "Always on Top" and such, so it works and feels exactly like a real toolbar.
    Fear not, this is for a specific purpose for a specific friend who wants something to help him quickly launch some programs. I'm not making one more toolbar for IE or something, haha. It's a good usage ;)
    Thanks!

  • How to measure on depth image

    I am trying to measure the area of an object that has been captured by a ToF camera. 
    The object is the round, "sausage"-like object in the image. I am wondering how I should measure it on the depth image?
    I also have an RGB image and originally I considered to detect the object there, and then detect the coordinates for the ROI, and then go and read the values on the depth image. 
    However due to poor lighting conditions that is a little difficult. 
    I added the VI that loads and displays this image and the file for this image (the Depth_info vi is a sub vi).
    Does anyone have a lead that could nudge me in the right direction?
    Thanks. 
    Attachments:
    Load info.vi ‏1053 KB
    3.txt ‏113 KB
    Depth_info.vi ‏24 KB

    Hello,
    the simplest way would be to detect the object on the texture image and extract the corresponding depth values (if your texture and depth images are aligned/calibrated). But you've got the problem with lighting conditions. Can you improve this to illuminate the object more? Can the contours of the object be extracted from the illuminated texture image?
    On the other hand, you could perform some sort of segmentation on the depth image to separate the measured object from the background and then calculate the area. Can you attach the X,Y,Z information of your scene?
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Edge detection on a moving object

    Hi
    I have a problem with edge detection. I have pulleys of different types:
    pulleys where the diameter is the same but the height differs, and pulleys of different diameters where the number of teeth (ridges) varies.
    I need to identify one type of pulley from another. I am trying to use the following logic:
    1. Locate the base of the pulley (which is distinct) using pattern matching.
    2. Define a coordinate system based on this pattern match.
    3. Define an edge-detection tool using the coordinate system (this is where I am running into a wall).
    I have used extracts of the examples: battery inspection, gauge, and fuse inspection.
    I am not able to define the edge tool (Edge Detector under Vision Assistant 7.1).
    I am trying to use the coordinate system because, if the pulley moves a bit, the edge detector ends up away from it (in Vision Assistant).
    THE CATCH IS:
    I have to do this in VB, since the machine vision has to be integrated into an existing VB application.
    NOTE: attached image of pulley
    Now can someone help me, please?
    Thanks in advance
    Suresh
    Attachments:
    pulley.png ‏13 KB

    Hi Suresh -
    I took your image and expanded the background region to make three versions with the pulley in different positions.  Then I loaded the three images into Vision Assistant and built a script that finds the teeth of the pulley.  Although Vision Assistant can't generate coordinate systems, I used edge detection algorithms to define a placeholder where the coordinate system code should be added.
    I chose to use a clamp and midpoint instead of the Pattern Matching tool because of the nature of the image.  Silhouetted images are difficult to pattern match, and the vertical line symmetry of the pulley makes it even more difficult to find a unique area of the object that can be recognized repeatedly.  Instead, I generally find more success using algorithms that are designed around edge detection.  I assumed that the "notch" at the bottom of the pulley will always be present in your images, so I performed a Clamp in the Horizontal Min Caliper to find this notch.  The midpoint of this clamp section can be used as the origin of a coordinate system around which the following Edge Detection steps are performed.  This allows you to find the teeth of the pulley no matter where it is in the image.  (Note that the VA script was built using pulley with BG 3.png.)
    The Builder File that was generated from the script gives you all the code you need except for the Caliper function and the Coordinate System functions.  I added comments to these sections to show what type of code you will have to add there.
    It may not be exactly the application you need to perform, but it should give you a solid starting point.  I hope it helps.
    David Staab, CLA
    Staff Systems Engineer
    National Instruments
    Attachments:
    Pulley Teeth.zip ‏18 KB

  • Edge detection using IMAQ Find Edge/IMAQ Edge Tool 3

    Hi,
    I have several images with a useless background around a rectangular ROI (coordinates unknown!). So I tried using the two VIs mentioned above to detect these edges so that I can remove the background. Regretfully, this does not work as planned.
    IMAQ Find Edge usually finds an edge, but not where it should be; the edge is detected earlier than I want it to be.
    IMAQ Edge Tool 3 sometimes does not find an edge at all and sometimes finds the edge perfectly. Here I use the 'get best edge' option, which delivers the best results with all the images I tested.
    None of the other options are changed while running the VI on my images.
    Does anyone have intimate knowledge of these VIs' algorithms, how they work, how they can be manipulated, ... ?

    Hi,
    Can you upload an example image?
    That would clarify what you're trying to do.
    Most of the time a change of mindset solves the problem.
    Kind regards,
    - Bjorn -
    Have fun using LabVIEW... and if you like my answer, please pay me back in Kudo's
    LabVIEW 5.1 - LabVIEW 2012

  • Crude and fast edge detection

    Hi, I need a way to do fast edge detection. I've already got something using getPixel, but getPixel is really, really slow. Does anyone have a faster method? It needs to work on a 300 MHz processor with 64 MB of RAM. Accuracy doesn't really concern me. Thanks in advance.

    Hi! I don't have a solution for your query. I'm doing a project in image processing and need to know the pixel value at a given coordinate. Can you post the code for getPixel? I don't know how to make getPixel work. Please post the whole code.
    Thank you.
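    One standard speed-up for the original question, for what it's worth: avoid per-pixel getPixel/getRGB calls and copy the whole frame into an int[] with a single bulk call, then work on the array. A small sketch (the class name and threshold are illustrative):

    ```java
    import java.awt.image.BufferedImage;

    public class FastPixels {

        /**
         * Grabs every pixel in one call instead of per-pixel lookups,
         * then thresholds the summed RGB into an edge-ready boolean mask.
         */
        public static boolean[] darkMask(BufferedImage img, int threshold) {
            int w = img.getWidth(), h = img.getHeight();
            int[] pixels = img.getRGB(0, 0, w, h, null, 0, w); // one bulk copy of the frame
            boolean[] dark = new boolean[pixels.length];
            for (int i = 0; i < pixels.length; i++) {
                int rgb = pixels[i];
                int sum = ((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF);
                dark[i] = sum < threshold; // pixel (x, y) lives at index y * w + x
            }
            return dark;
        }

        public static void main(String[] args) {
            BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
            img.setRGB(1, 1, 0xFFFFFF); // one white pixel, three black
            boolean[] dark = darkMask(img, 80);
            int count = 0;
            for (boolean d : dark) if (d) count++;
            System.out.println(count + " dark pixels");
        }
    }
    ```

    Neighbour lookups for an edge pass then become plain index arithmetic on the array, which is far cheaper than repeated image-object calls on slow hardware.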

  • Use of edge detection in pattern matching algorithm?

    Hello all,
    I work for a group at Texas A&M University researching two-phase flow in reactors.  We have been using IMAQ Vision and had a question regarding the use of edge detection in the pattern matching algorithm.  I had seen the webcast entitled "Algorithms that Learn: The Sum and Substance of Pattern Matching and OCR" (http://zone.ni.com/wv/app/doc/p/id/wv-705), and in it it was mentioned that the pattern matching algorithm uses edge detection to (as best I can tell) reduce the candidate list further and to perform subpixel location calculations.  However, I was wondering whether this edge detection step is still performed if we do not use the subpixel location calculation (i.e. if we uncheck the "Subpixel Accuracy" check box)?  Also, if edge detection is performed in the pattern matching algorithm, is it consistent with the method described in Chapter 13 of the Vision Concepts Manual ("Geometric Matching")?  Finally, if it is, how does the geometric matching correlation number affect the correlation calculation performed in the previous steps?  Are they simply multiplied together?
    Many thanks!
      -Aaron

    Jeff,
    We are using Imaq Vision Builder 4, with the included pattern matching that can be accessed via the menus (i.e. we haven't created a custom VI or anything.)  We are using the software to locate bubbles during boiling experiments and want a deeper understanding of what is going on "behind the scenes" of the algorithm, as we may have to explain how it works later.  We have been able to determine most of what we need from the webcast I had previously mentioned, except for the use of edge detection in the pattern matching algorithm.
    At the scales involved in our experiments, subpixel accuracy is really not needed and therefore we do not use it.  If edge detection is used in the pattern matching algorithm only to determine location with subpixel accuracy, then we do not really need to know how it works because we do not use that calculation.  Inversely, of course, if edge detection is used during pattern matching even without enabling subpixel accuracy, then we would like to have a fairly good understanding of the process.
    I've read most of the section on geometric matching in the Vision Concepts Manual and wondered if the process described there for edge detection (or feature matching) was also used in the basic pattern matching algorithm?
    To summarize, if edge detection is not used in the basic pattern matching algorithm without subpixel accuracy, then that is all I need to know.  If edge detection is used for pattern matching even without using the subpixel accuracy calculation, then we would like to learn more about how exactly it is used in the pattern matching algorithm.
    We would really appreciate any help you could give us... we've been digging around on the NI website for a couple of weeks now trying to fit together all the pieces of the pattern matching puzzle.
    Many thanks!
        Aaron

  • Explanation of Edge Detection of Digital Images

    Can anyone suggest good links for a complete explanation of doing edge detection in Java?

    http://en.wikipedia.org/wiki/Edge_detection
    http://forum.java.sun.com/post!reply.jspa?messageID=4371954
    If you have specific questions regarding implementing what you learn we will try to help you with those.
    regards

  • Using IMAQ Find Edge and/or IMAQ Find Straight Edges 3 In order to return tip coordinates of a needle

    I know that there are functionalities in the two aforementioned functions that allow start and end points to be found and output after a straight line is found in an ROI. My problem is that these start and end points are with respect to the ROI, not the actual object. What I mean is: if I have a needle (whose tip is in the ROI) and try to find the corresponding edge, the edge is easy enough to find. However, the red line superimposed on the image to represent the found edge extends past the tip of the needle and is carried to the boundary of the rectangular ROI I am using. Thus, when I return the end point, it gives me the end point on the ROI boundary, not the tip of the needle.
    If anyone knows how to obtain the coordinates of the actual needle tip using the edge functions in the post's subject line, that would be very helpful and much appreciated!

    If the needle is always the same height, you can adjust the ROI accordingly. Also, by "tip" do you mean the tip point?
