Sobel Edge detectors

Hi Everyone! I have a JPEG image and want to do edge detection on it. I plan to use a Sobel edge detector to help me. I have
just got the JAI API and noticed that the javax.media.jai package provides KernelJAI.GRADIENT_MASK_SOBEL_HORIZONTAL and KernelJAI.GRADIENT_MASK_SOBEL_VERTICAL.
I believe that using these will make implementing the edge detection easier.
The problem is that I'm a newbie at this and not sure how to use them. I hope someone can guide me on how I would be
able to implement this. Any help would be greatly appreciated.
Thanks in advance
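
As a rough, untested sketch of how those two kernels are usually handed to JAI's built-in "gradientmagnitude" operation (the file names are placeholders and error handling is omitted), the typical pattern looks something like this:

    import java.awt.image.renderable.ParameterBlock;
    import javax.media.jai.JAI;
    import javax.media.jai.KernelJAI;
    import javax.media.jai.PlanarImage;

    public class JaiSobelExample {
        public static void main(String[] args) {
            // Load the source JPEG ("input.jpg" is a placeholder path).
            PlanarImage src = JAI.create("fileload", "input.jpg");

            // "gradientmagnitude" convolves the image with both masks and
            // combines the two responses into a single edge-magnitude image.
            ParameterBlock pb = new ParameterBlock();
            pb.addSource(src);
            pb.add(KernelJAI.GRADIENT_MASK_SOBEL_HORIZONTAL);
            pb.add(KernelJAI.GRADIENT_MASK_SOBEL_VERTICAL);
            PlanarImage edges = JAI.create("gradientmagnitude", pb);

            // Store the result ("edges.png" is also a placeholder).
            JAI.create("filestore", edges, "edges.png", "PNG");
        }
    }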


Similar Messages

  • Does anyone have a Canny edge detector VI that can be opened in LabVIEW 7.0?

    Hi,
    Does anyone have a Canny edge detector VI that can be opened in LabVIEW 7.0?
    Thanks very much!

    Hi aaz,
    If you are using RoboLab I would suggest posting to the Lego Mindstorms Forums as
    they offer RoboLab support.  The screenshot I had posted was for
    LabVIEW using the Vision Development Module (VDM).  The VIs that you
    are missing are part of the NI-Vision software which comes packaged
    with VDM. Best of luck with your RoboLab application.
    Vu

  • Edge detector in Labview (6i)

    I'm trying to build an edge detector SubVI. It has an input which may
    go active (high) for some time. Upon the input going high, I would
    like the output to go high until the next time through. I would like
    to re-use the SubVI many times within the one VI.
    I've tried a SubVI containing a local variable remembering the old
    state of the input and comparing this to the present state, and this
    works well with only one instance. There is, however, a major problem
    when two instances of this SubVI exist in the same VI. Either the
    entire SubVI or the local variable are shared between the two
    instances of the SubVI and there are major interactions.
    Can anyone throw some light on this? How do people normally
    remember state information in SubVIs that are instantiated many
    times?
    Thanks,
    [email protected]

    Never mind, I found a solution. Tick the "Reentrant Execution" box on
    the "VI Properties" of the Sub VI.
    Regards,
    Alf Katz
    [email protected]
    On Thu, 22 Feb 2001 05:16:24 GMT, [email protected] (Alf
    Katz) wrote:
    >I'm trying to build an edge detector SubVI. It has an input which may
    >go active (high) for some time. Upon the input going high, I would
    >like the output to go high until the next time through. I would like
    >to re-use the SubVI many times within the one VI.
    >
    >I've tried a SubVI containing a local variable remembering the old
    >state of the input and comparing this to the present state. and this
    >works well with only one instance. There is, however, a major problem
    >when two instances of this SubVI exist in the same VI. Either the
    >entire SubVI or the local variable are shared between the two
    >instances of the SubVI and there are major interactions.
    >
    >Can anyone throw some light on this? How do people normally
    >remember state information in SubVI's that are instantiated many
    >times?
    >
    >Thanks,
    >[email protected]
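
    For what it's worth, here is the text-language analogue of the fix, sketched in Java (nothing LabVIEW-specific): the "previous state" has to live in per-instance storage, which is exactly what "Reentrant Execution" gives each SubVI call site. The class and method names below are only illustrative.

        public class RisingEdgeDetector {
            private boolean previous = false;   // per-instance memory of the last input

            /** Returns true only on the call where the input goes from low to high. */
            public boolean update(boolean input) {
                boolean edge = input && !previous;
                previous = input;
                return edge;
            }
        }

        // Two independent detectors keep separate state and do not interfere,
        // just as two reentrant SubVI instances do:
        // RisingEdgeDetector a = new RisingEdgeDetector();
        // RisingEdgeDetector b = new RisingEdgeDetector();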

  • Sobel edge detection

    Hi everybody;
    I want to learn Sobel edge detection. How can I do this in Java?

    Sobel edge detection is nothing special. It's just a convolution with two specific kernels:
    float[] hx = new float[]{-1,0,1,
                              -2,0,2,
                              -1,0,1};
    float[] hy = new float[]{-1,-2,-1,
                               0, 0, 0,
                               1, 2, 1};
    You then combine the two responses per pixel. Ideally that is the Euclidean norm sqrt(Gx^2 + Gy^2); the code below approximates it with |Gx| + |Gy| to avoid the square root. The following algorithm is a Sobel edge detection, although technically speaking it's just a general convolution algorithm.
    There are also a lot of optimizations that can be made (the algorithm is slow), but I leave that for you to explore.
    public static BufferedImage sobelEdgeDetection(BufferedImage img) {
        BufferedImage edged = new BufferedImage(img.getWidth(), img.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        float[] hx = new float[]{-1, 0, 1,
                                 -2, 0, 2,
                                 -1, 0, 1};
        float[] hy = new float[]{-1,-2,-1,
                                  0, 0, 0,
                                  1, 2, 1};
        int[] rgbX = new int[3];
        int[] rgbY = new int[3];
        //ignore border pixels strategy
        for (int x = 1; x < img.getWidth() - 1; x++) {
            for (int y = 1; y < img.getHeight() - 1; y++) {
                convolvePixel(hx, 3, 3, img, x, y, rgbX);
                convolvePixel(hy, 3, 3, img, x, y, rgbY);
                //instead of using the sqrt function for the Euclidean distance,
                //just estimate the magnitude as |Gx| + |Gy|
                int r = Math.abs(rgbX[0]) + Math.abs(rgbY[0]);
                int g = Math.abs(rgbX[1]) + Math.abs(rgbY[1]);
                int b = Math.abs(rgbX[2]) + Math.abs(rgbY[2]);
                //range check
                if (r > 255) r = 255;
                if (g > 255) g = 255;
                if (b > 255) b = 255;
                edged.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
        return edged;
    }

    private static int[] convolvePixel(float[] kernel, int kernWidth, int kernHeight,
            BufferedImage src, int x, int y, int[] rgb) {
        if (rgb == null) rgb = new int[3];
        int halfWidth = kernWidth / 2;
        int halfHeight = kernHeight / 2;
        /*This treats the kernel as if it were indexed from -halfWidth to halfWidth
         *horizontally and -halfHeight to halfHeight vertically, which puts the
         *center element at row 0, column 0.*/
        for (int component = 0; component < 3; component++) {
            float sum = 0;
            for (int i = 0; i < kernel.length; i++) {
                int row = (i / kernWidth) - halfHeight;    //current row in kernel
                int column = (i % kernWidth) - halfWidth;  //current column in kernel
                //range check: skip samples that fall outside the image
                if (x - column < 0 || x - column >= src.getWidth()) continue;
                if (y - row < 0 || y - row >= src.getHeight()) continue;
                int srcRGB = src.getRGB(x - column, y - row);
                sum += kernel[i] * ((srcRGB >> (16 - 8 * component)) & 0xff);
            }
            rgb[component] = (int) sum;
        }
        return rgb;
    }
    Using this algorithm on the image to the left, I get the image to the right.
    http://img368.imageshack.us/my.php?image=pooluk3.png
    Notice that because I ignored the border pixels there is an unsightly green line at the top of the edged image.
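
    For completeness, here is a minimal driver (hypothetical class and file names) showing how the method above could be wired up with javax.imageio:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class SobelDemo {
            // sobelEdgeDetection(...) and convolvePixel(...) from the post above go here

            public static void main(String[] args) throws Exception {
                BufferedImage input = ImageIO.read(new File("pool.jpg"));  // placeholder input path
                BufferedImage edges = sobelEdgeDetection(input);
                ImageIO.write(edges, "png", new File("pool-edges.png"));   // placeholder output path
            }
        }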

  • Import images into Visual Edge Detector

    I am new to this tool, in fact very new. I am working on image analysis of cancer CT images. I would like to know if it is possible to import DICOM (or any other image format) images into this tool and get the edges detected?
    If not, what other NI tool can I use to apply edge detection algorithms?
    Thanks in advance.

    Hi, Elaine. Thanks for your inquiry. Here's my basic workflow:
    Output JPG from Photoshop at intended size, say 300 x 400 px @ 72ppi.
    Import to stage in Edge Animate - no options here. Image seems considerably less sharp, right on the stage - I have multiple identical monitors, so I can compare.
    Use Edge Animate's Preview in Browser (in this case, Firefox for general testing purposes...) the images are not sharp.
    If I DOUBLE the resolution when outputting from PS (600 x 800px @72 ppi), saving with the identical filename, as resdesign suggested, and change nothing in Edge, the outputted images look great. I have tested with many bitmaps and they all exhibit the SAME behavior.
    Extremely inconvenient to have to manually re-calculate image output sizes to accommodate this problem in Edge.

  • Edge detection on a moving object

    Hi
    I have a problem with edge detection. I have pulleys of different types:
    pulleys where the diameter is the same but the height differs, and pulleys of different diameters where the number of teeth (ridges) varies.
    I need to identify one type of pulley from the other. I am trying to use the following logic:
    1. Locate the base of the pulley (which is distinct) using pattern matching.
    2. Define a coordinate system based on this pattern match.
    3. Define an edge detection tool using the coordinate system (this is where I am running into a wall).
    I have used extracts of the examples: battery inspection, gage and fuse inspection.
    I am not able to define the edge tool (Edge Detector under Vision Assistant 7.1).
    I am trying to use the coordinate system because, if the pulley moves a bit, the edge detector ends up away from the part (in Vision Assistant).
    THE CATCH IS:
    I have to do this in VB, since the machine vision has to be integrated into an existing VB application.
    NOTE: attached image of pulley
    Now can someone help me, please?
    Thanks in advance
    Suresh
    Attachments:
    pulley.png ‏13 KB

    Hi Suresh -
    I took your image and expanded the background region to make three versions with the pulley in different positions.  Then I loaded the three images into Vision Assistant and built a script that finds the teeth of the pulley.  Although Vision Assistant can't generate coordinate systems, I used edge detection algorithms to define a placeholder where the coordinate system code should be added.
    I chose to use a clamp and midpoint instead of the Pattern Matching tool because of the nature of the image.  Silhouetted images are difficult to pattern match, and the vertical line symmetry of the pulley makes it even more difficult to find a unique area of the object that can be recognized repeatedly.  Instead, I generally find more success using algorithms that are designed around edge detection.  I assumed that the "notch" at the bottom of the pulley will always be present in your images, so I performed a Clamp in the Horizontal Min Caliper to find this notch.  The midpoint of this clamp section can be used as the origin of a coordinate system around which the following Edge Detection steps are performed.  This allows you to find the teeth of the pulley no matter where it is in the image.  (Note that the VA script was built using pulley with BG 3.png.)
    The Builder File that was generated from the script gives you all the code you need except for the Caliper function and the Coordinate System functions.  I added comments to these sections to show what type of code you will have to add there.
    It may not be exactly the application you need to perform, but it should give you a solid starting point.  I hope it helps.
    David Staab, CLA
    Staff Systems Engineer
    National Instruments
    Attachments:
    Pulley Teeth.zip ‏18 KB

  • USB Camera Flicker, and Showing Edge Finder Results

    Attached are 3 files.  The vi (simple.vi), the variable library and the project file.  Hopefully it reassembles easily.
    One issue I am having is that the image 'flickers', and it seems to be creating havoc with the vision tools.  What is also strange is that every once and a while, it seems a 'raw' image (meaning color) slips in, which may actually be the flicker problem.  I am using a USB camera to capture the video.
    I thought of using a shift register, which helps a little, but the problem persists.  I have a feeling it is something dumb I am missing.
    Also, I am trying to figure out how to get the 'Imaq Simple Edge' to show up on the screen, like the 'Imaq Find Edge' does.  I am assuming there is a Boolean somewhere I am missing.  I would like to use this to see if the edges are where they are supposed to be, as well as if the lines are drawn in the right place.  This is mainly for debugging purposes.  What I have been using in the interim is the Vision Assistant to see if the calculated values line up with where they should be.  It would be much nicer to have it in the VI.
    In case you are curious, what I am trying to do is find 4 discs on any of 3 pins.  The camera and target are not locked together, so I am using the top, bottom and left edges to locate the base, then calculate where the pins should be, then draw edge find lines to see if they are there.  Ultimately the logic (not included yet) will allow me to determine which disc is where.
    I tried pattern matching, but this setup is too 'random' for it in testing.  The basic vision setup in this vi seems to work, and is similar to what I have used in industrial applications in the past.
    Thanks for any help.
    Rich
    Attachments:
    Simple.vi ‏216 KB
    Variables.lvlib ‏17 KB
    Tower.lvproj ‏8 KB

    Thank you for the feedback.
    I tried the 'Snapshot' option, and the graphic overlays disappeared.  Ultimately, this doesn't matter, but while I am still tinkering, it is useful to have it displayed.
    What seems to have fixed it is that I removed the shift register and got rid of the IMAQ Cast. Instead, I am using 'IMAQ Create' to make the grayscale image. This has gotten rid of the twitch, as far as I can tell, and allows the overlay, but the image displayed is color (not a big deal). Attached is the updated VI.
    I still can't figure out how to get the overlay for the Edge Detector lines.  Attached is a picture of what I would like to display with the edge finding lines (the little green squares).
    I am assuming the Edge Detector is less processing intensive than the Find Straight Edge.  This is the reason I am using Edge Detectors to find the discs.  If this is incorrect, I can probably use the find Straight Edge to find the edge I am looking for.
    Attachments:
    overlay.JPG ‏112 KB
    Simple.vi ‏216 KB

  • Firefox keeps crashing even in safemode and with new profile.

    Firefox started crashing every minute or so after a recent Windows update (coincidence). After trying the following procedures (see below), creating a new profile solves the problem until I select "Automatically add" on this page, so the problem must be related to running scripts. (it also means I can't automatically add my troubleshooting information)
    These are the procedures I attempted after the problem started which did NOT fix the problem:
    Uninstalled most recent Windows updates.
    Disabled Networxs and Spybot.
    Ran Firefox in safe mode.
    Run full virus scan (Malwarebytes).
    Reinstalled Firefox.
    System restore to before most recent Windows updates.
    System restore to last week (problem only started yesterday).
    Re-reinstalled Firefox.
    (note: my drivers are all up to date)
    Created a new profile, which allows me to browse without crashing until (as far as I can tell) Firefox attempts to run certain scripts. In the old profile Firefox crashes within two minutes even if I do nothing except open the browser and leave it alone.
    Given that the problem persisted even in safe mode, the problem must not be add-on related. And since Firefox still crashes with a new profile it must not be a configuration conflict. But it also persists after a system restore and being reinstalled, so it must be something Firefox uses outside of the profile files that isn't an add-on and isn't overwritten.
    I don't want to do a complete clean install because that would mean losing my history, passwords, customized add-ons (user dictionary, AdBlock scripts, NoScript allow/block list, etc.), bookmarks (I realise I can recover those), and custom about:config lines. It was working fine two days ago so it has to be something that auto-updates in the background because I have all updates (Firefox, Windows, etc.) set to manual exactly so I can avoid nasty surprises like this one.
    Any help in resolving this issue so I don't have to start again from scratch and lose literally years of customization would be greatly appreciated.

    Note that safe mode does not disable plugins (only add-ons). The issue here is most likely a corrupt plugin. It looks like you have a lot of plugins installed, including some known to be very problematic. Specifically these ones:
    NPRuntime Script Plug-in Library for Java(TM) Deploy
    Questrade IQ Edge detector plugin
    Citrix Online App Detector Plugin
    Coupons, Inc. Coupon Printer Plugin
    NPWLPG
    In order to assist you better, please follow the steps below to provide us crash IDs to help us learn more about your crash.
    #Enter ''about:crashes'' in the Firefox address bar and press Enter. A Submitted Crash Reports list will appear, similar to the one shown below.
    #Copy the '''5''' most recent Report IDs that start with '''bp-''' and then go back to your forum question and paste those IDs into the "Post a Reply" box.
    ''' Note:''' If a recent Report ID does not start with '''bp-''' click on it to submit the report.
    (Please don't take a screenshot of your crashes, just copy and paste the IDs. The below image is just an example of what your Firefox screen should look like.)
    ;[[Image:aboutcrashesFx29|width=500]]
    Thank you for your help!
    More information and further troubleshooting steps can be found in the [[Firefox crashes - Troubleshoot, prevent and get help fixing crashes]] article.

  • How to calculate the phase difference between two square wave (acquired from two channel in one DAQ)

    Hello everyone ,
     I need some quick help, as described below:
    I am trying to use a PCI-6220 to acquire six signals from one rotary encoder (channel A, channel B, channel Z, and their complementary signals). The encoder output signals are square waves at 4000 pulses per revolution, and I set it rotating at 300 rpm. I need to show the square wave of each of the six output channels in a waveform and measure the A-B phase difference to check whether the value is correct (the designed value should be 90 degrees).
    I have no idea how to measure or calculate the phase difference between two square waves based on synchronized acquisition of the two channels on the PCI-6220...
    Can anyone give me an idea how to calculate the phase difference between two square waves?
    Thanks a lot, and thanks again...
    Tim

    Tim,
    Here is a simple rising edge detector for one channel.
    Lynn
    Attachments:
    Rising edge.vi ‏15 KB
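
    Lynn's VI isn't reproduced here, but as a rough, non-NI-specific illustration of the arithmetic: once both channels have been acquired synchronously (at 4000 pulses per revolution and 300 rpm the square waves run at about 20 kHz, so sample well above that), the phase can be estimated from the offset between corresponding rising edges. The class, method, and parameter names below are hypothetical.

        public class PhaseDifference {
            /** Index of the first low-to-high transition, or -1 if none is found. */
            static int firstRisingEdge(boolean[] samples) {
                for (int i = 1; i < samples.length; i++) {
                    if (samples[i] && !samples[i - 1]) return i;
                }
                return -1;
            }

            /** Phase of channel B relative to channel A, in degrees. */
            static double phaseDegrees(boolean[] chanA, boolean[] chanB,
                                       double sampleRateHz, double signalFreqHz) {
                int edgeA = firstRisingEdge(chanA);
                int edgeB = firstRisingEdge(chanB);
                // assumes an edge was actually found in both channels
                double deltaSeconds = (edgeB - edgeA) / sampleRateHz;  // offset between edges
                double period = 1.0 / signalFreqHz;                    // one full cycle
                return (deltaSeconds / period) * 360.0;                // fraction of a cycle
            }
        }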

  • Modifying VI - Vision Assistant

    I have images that consist of an area of particles and an area of no particles. I am trying to fit a circle to the edge between the regions where there are and are not particles. I want to use the find edge tool, and I want to find the pixel where this transition takes place for every row of pixels. For example, I want to draw a horizontal line that will give me the location of the edge, then move down one row and repeat. I have tried using the find circle edge, but since I am trying to fit a circle to an edge that isn't well defined, I need a lot more data points to average over. I figure there is a way to modify the VI to perform the process I described above. Any help would be much appreciated. I have attached the images to give you a better idea of what I'm trying to do.
    Attachments:
    half circle.JPG ‏554 KB

    Hi Windom,
    If the find circle edge is not working for you, I would suggest thresholding the image. Then you could use the morphology functions (such as Close, Fill Holes, and Erode) to further manipulate the image to get a stronger edge between the areas of particles and not particles.
    You can use a For Loop (initialized to start looking at the top of picture) and have it iterate vertically down the picture with the Edge Detector. You can do that by changing the ROI Descriptor for the line you are detecting edges with, and then you can read the Edge Information out of the VI. These all need to be checked in the "Select Controls" menu, which is found at the bottom right of the Vision Assistant window.
    I hope this helps, let me know if you need any further clarification.
    Best Regards,
    Nathan B
    Applications Engineer
    National Instruments
    Attachments:
    SelectROI.JPG ‏12 KB
    Edge Info.JPG ‏10 KB
    SelectControls.JPG ‏14 KB

  • Filter not working in After Effects

    Hi there,
    I've improved my Sobel Edge Detection Filter a little bit and tried to use it in After Effects on some video material. But to no avail. It simply outputs a black frame. In Photoshop and Pixel Bender itself it works like a charm. But not in AE. Could somebody please tell me why?
    Here's the code: http://pastebin.com/TUbiGFND
    Thanks a lot!
    Greetings,
    Hendrik

    Pixel Bender and After Effects act totally oddly. I really don't get it.
    I have been trying the following lately:
    define two parameters float3x3 to replace the constant matrices in the edgeX and edgeY functions by two dynamic ones. They are not editable in after effects but behave differently from the constantly defined matrices although they have the same values. With two (uneditable) parameters I can see some effect in AE that is different from the one in Pixel Bender IDE but at least there is something happening. I get completely white frames which I can turn into something visible by turning down the parameters Bias (to zero) and Factor (to something less than five or so). With constantly defined matrices inside the functions edgeX and edgeY nothing happens. The image simply stays black.
    Where is the difference between a parameter float3x3 and a local variable float3x3 with absolutely and definitely the same values? And why does AE act differently from Pixel Bender IDE?
    I have also been playing around with the region reasoning functions needed() and changed(), but I can't for the life of me get any results by altering them. Not even any negative results. They seemingly do nothing.

  • Detecting corners in an image

    I am currently trying to detect the corners in an image using a technique called the Harris corner detector. I am using some code in Matlab to help me do this, but it is not working.
    Below I have the code in Java and Matlab.
    Below is my code in Java:
    private int[] pixels;
    private int[] grey_scale_image;
    private double[][] greylevels;
    private BufferedImage image;
    private Image result_image;
    private int[] convolvedImage;
    Point[] corners = new Point[100];
    Pixel[][] pixelObjects = new Pixel[256][256];
    int[][][] data;// = new int[][][];
    double[] result223;
    // Constructor for objects of class HarrisCorner
    public HarrisCorner(BufferedImage image) {
    this.image = image;
    int width = image.getWidth();
    int height = image.getHeight();
    pixels = new int[width * height];
    grey_scale_image = new int[pixels.length];
    handlepixels(image, 0, 0, width, height);
    greylevels = new double[width][height];
    for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
    greylevels[i][j] = grey_scale_image[j * width + i];
    double[][] kernel = {{-1, 0, 1} , {-1, 0, 1}, {-1, 0, 1}}; // -1 0 1; -1 0 1; -1 0 1 / 3 (look into that)
    // print(kernel);
    System.out.println("\n\n");
    double[][] kernel1 = {{-1, -1, -1} , { 0, 0, 0}, {1, 1, 1}};
    double[][] smooth_kernel = {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}};
    Matrix Ix = new Matrix(convolution2D(greylevels, width, height, kernel,
    kernel.length, kernel[0].length));
    Matrix Iy = new Matrix(convolution2D(greylevels, width, height, kernel1,
    kernel.length, kernel[0].length));
    System.out.println(Ix.getArray().length + " " + Ix.getArray()[0].length);
    System.out.println(Iy.getArray().length + " " + Iy.getArray()[0].length);
    Matrix Ixy = Ix.times(Iy);
    Matrix Ix2 = Ix.times(Ix);
    Matrix Iy2 = Iy.times(Iy);
    Matrix Ixy2 = Ixy.times(Ixy);
    //compute kernel using guassain smoothing operator guassain_kernel()
    Matrix GIx2 = new Matrix(convolution2D(Ix2.getArray(),
    Ix2.getArray().length,
    Ix2.getArray()[0].length,
    guassain_kernel(), smooth_kernel.length,
    smooth_kernel[0].length));
    Matrix GIy2 = new Matrix(convolution2D(Iy2.getArray(),
    Iy2.getArray().length,
    Iy2.getArray()[0].length,
    guassain_kernel(), smooth_kernel.length,
    smooth_kernel[0].length));
    Matrix GIxy2 = new Matrix(convolution2D(Ixy2.getArray(),
    Ixy2.getArray().length,
    Ixy2.getArray()[0].length,
    smooth_kernel, smooth_kernel.length,
    smooth_kernel[0].length));
    //c = (GIx2 + GIy2) - 0.04 * (GIx2 .* GIy2 - GIxy2.^2);
    Matrix M = (Ix2.plus(Iy2)).minus( ( (Ix2.times(Iy2)).minus(Ixy2.times(Ixy2))).times(0.04));
    //Matrix detM = (Ix2.times(Iy2)).minus(Ixy2.times(Ixy2));
    //Matrix traceM = Ix2.plus(Iy2);
    //Matrix M = detM.minus((traceM.times(traceM)).times(0.04));
    // double[] row2 = M.getRowPackedCopy();
    // Arrays.sort(row2);
    // double theCorner = row2[row2.length-100];
    // System.out.println("theCorner" +theCorner);
    // double[][] responses = M.getArray();
    // for(int i = 0; i < responses.length; i++) {
    // for(int j = 0; j < responses[0].length; j++) {
    // if(responses[i][j] <= theCorner)
    // responses[i][j] = 0;
    //double[][] ww = M.getArray();
    double[][] cr22 = wrapBorder(Iy.getArray());
    //double[][] corner_response = wrapBorder(responses);
    result223 = new double[pixels.length];
    for (int j = 0; j < greylevels[0].length; ++j) {
    for (int i = 0; i < greylevels.length; ++i) {
    pixelObjects[i][j].setValues(i,j,cr22[i][j]);
    if(corner_response[i][j] != 0) {
    //cr22[i][j] =
    result223[j * (greylevels.length) + i] = -1;//((Color.red).getRGB());//cr22[i][j];
    else
    result223[j * (greylevels.length) + i] = cr22[i][j];//greylevels[i][j];
    //System.out.println((Color.red).getRGB());
    //scaleAllGray(result223);
    convolvedImage = doublesToValidPixels(result223);
    for (int i = 0; i < corners.length; i++) {
    int index = row2.length - corners.length + i;
    double pixel = row2[index];
    for(int h = 0; h < 256; h++) {
    for(int w = 0; w < 256; w++) {
    Pixel p = pixelObjects[w][h];
    if(p.getPixel() == pixel) {
    corners[i] = new Point(p.getX(),p.getY());
    //if(i==99) break;
    This is the code in Matlab :
    % % HISTORY
    % 2001 Philip Torr ([email protected], [email protected]) at Microsoft
    % Created.
    % Copyright (c) Microsoft Corp. 2002
    % REF:     "A combined corner and edge detector", C.G. Harris and M.J. Stephens
    %     Proc. Fourth Alvey Vision Conf., Manchester, pp 147-151, 1988.
    %%to do: we might want to make this so it can either take a threshold or a fixed number of corners...
    % c_coord is the n x 2 x,y position of the corners
    % im is the image as a matrix
    % width is the width of the smoothing function
    % sigma is the smoothing sigma
    % subpixel = 1 for subpixel results (not implemented yet)
    %%%%%bugs fixed Jan 2003
    function [c_coord] = torr_charris(im, ncorners, width, sigma, subpixel)
    if (nargin < 2)
    error('not enough input in charris');
    elseif (nargin ==2)
    width = 3; %default
    sigma = 1;
    end
    if (nargin < 5)
    subpixel = 0;
    end
    mask = [-1 0 1; -1 0 1; -1 0 1] / 3;
    % compute horizontal and vertical gradients
    %%note because of the way Matlab does this Ix and Iy will be 2 rows and columns smaller than im
    Ix = conv2(im, mask, 'valid');
    Iy = conv2(im, mask', 'valid');
    % compute squares amd product
    Ixy = Ix .* Iy;
    Ix2 = Ix.^2;
    Iy2 = Iy.^2;
    Ixy2 = Ixy .^2;
    % smooth them
    gmask = torr_gauss_mask(width, sigma);
    %gim = conv2(im, gmask, 'valid');
    %%note because of the way Matlab does this Ix and Iy will be width*2 rows and columns smaller than Ix2,
    % for a total of (1 + width)*2 smaller than im.
    GIx2 = conv2(Ix2, gmask, 'valid');
    GIy2 = conv2(Iy2, gmask, 'valid');
    GIxy2 = conv2(Ixy2, gmask, 'valid');
    % computer cornerness
    % c = (GIx2 + GIy2) ./ (GIx2 .* GIy2 - GIxy2 + 1.0);
    %%%one problem is that this could be negative for certain images.
    c = (GIx2 + GIy2) - 0.04 * (GIx2 .* GIy2 - GIxy2.^2);
    %figure
    %imagesc(c);
    %figure
    %c is smaller than before got border of 2 taken off all round
    %size(c)
    %compute max value around each pixel
    %cmin = imorph(c, ones(3,3), 'min');
    %assuming that the values in c are all positive,
    %this returns the max value at that pixel if it is a local maximum,
    %otherwise we return an arbitrary negative value
    cmax = torr_max3x3(double(c));
    % if pixel equals max, it is a local max, find index,
    ci3 = find(c == cmax);
    cs3 = c(ci3);
    [cs2,ci2] = sort(cs3); %ci2 2 is an index into ci3 which is an index into c
    %put strongest ncorners corners in a list cs together with indices ci
    l = length(cs2)
    lb = max(1,l-ncorners+1);
    cs = cs2(lb:l);
    ci2s = ci2(lb:l);
    ci = ci3(ci2s);
    corn_thresh = cs(1);
    disp(corn_thresh);
    %row and column of each corner
    [nrows, ncols] = size(c);
    %plus four for border
    % c_row = rem(ci,nrows) +4;
    % c_col = ( ci - c_row )/nrows + 1 +4;
    border = 1 + width;
    c_row = rem(ci,nrows) + border;
    c_col = ( ci - c_row +2)/nrows + 1 + border;
    % %to convert to x,y we need to convert from rows to y
    c_coord = [c_col c_row];
    %see Nister's thesis page 19.
    if subpixel
    disp('subpixel not done yet')
    end
    %display corners....
    %hold on
    %          plot(c_col, c_row, '+');
    %          plot(c_coord(:,1), c_coord(:,2), '+');
    % hold off
    %index runs
    % 1 4
    % 2 5\
    % 3 6
    %ci = ci + 4 * nrows + 4;
    %ci = (nrows + 4) * 4 + 4 + ci;
    %c_patches = [gim(ci - nrows) gim(ci - nrows-1) gim(ci - nrows+1) gim(ci-1) gim(ci) gim(ci+1) gim(ci+nrows) gim(ci+nrows+1) gim(ci+nrows-1)];
    % hold on
    %      imagesc(im);
    %          plot(c_col, c_row, 'sw')
    % hold off
    %      size(im)
    %           size(cp)
    %          imr = im2col(im, [5,5])';
    %          im_corr = imr(ci);
    %      im(c_row(:)-1, c_col(:)-1)
    %          each row is a 3x3 matrix
    %      c_patches = [ im(c_row-1, c_col-1) im(c_row-1, c_col) im(c_row-1, c_col+1) ...
    % im(c_row, c_col-1) im(c_row, c_col) im(c_row, c_col+1) ...
    % im(c_row+1, c_col-1) im(c_row+1, c_col) im(c_row+1, c_col+1) ];
    %c_patches = [im(ci-1) im(ci) im(ci+1)];
    %c_patches = [im(ci)];
    Any help would be greatly appreciated.

    Did you see the button labelled "code" between the subject textfield and the message textarea in the submission form? It allows you to post legible code samples...

  • Including Line profile in an Imaq Script for a grey scale image.

    We are trying to include a line profile in a script so that we can assess
    the distance between the lines printed (as per the attachment) in a grey scale
    image.
    Once the line profile is in the script we would then like to run a batch
    file on a series of images.
    Attachments:
    Test_1.JPG ‏67 KB

    It sounds like you are using Vision Builder or Vision Assistant (new name, same program).
    What you want to use is the Edge Detector. Use it to draw a line across your image, then adjust the parameters until all the edges are found. I reduced the contrast to 25 and it found all the edges.
    You can View Measurements under Tools to see the X and Y coordinates of each edge. You can also use batch processing and export the results. I didn't test it, but I assume if you export the results for the edge detection you will get the same list of points.
    Bruce
    Bruce Ammons
    Ammons Engineering

  • Non deterministic FPGA - MCU interface (data bus)?

    Hello,
       I am working on a learning project...  I am using a Digilent Spartan 3A dev board where I have implemented counters for rotary encoder signals.  The counters seem correct, as they are accurate for measuring displacement (distance/rotations).  The counters are presented to my MCU (LPC1769 Cortex-M3) via GPIO: a 16-bit "data bus" and a 4-bit "address bus".
       On a 1 kHz interrupt, the MCU will write the address lines, delay 1 us, and read the 16 data bits (GPIO) connected to the FPGA.  For testing, a function generator is providing consistent signals to the FPGA with no physical encoders, so I know what the counter data should be.  However, the counter data being read from the FPGA has too much inconsistency and smells like a classic determinism problem.
       I'm thinking the problem is in my Verilog implementation of the bus, shown here:
    module FPGA_Interface(
    input [3:0] Signals,
    input CLK_50M,
    input [3:0] Addr,
    output Valid,
    output wire [15:0] Data
    //Use of 'Valid' for handshaking is TBD
    assign Valid = 1'b0;
    wire [15:0] CountData0;
    wire [15:0] CountData1;
    //create two counters for the rotary encoders
    QuadratureCounter counter0(.EncSig(Signals[1:0]),.CLK_50M(CLK_50M),.CountData(CountData0) );
    QuadratureCounter
    counter1(.EncSig(Signals[3:2]),.CLK_50M(CLK_50M),.CountData(CountData1) );
    assign Data = (Addr == 4'b0000)? ~CountData0:
    (Addr == 4'b0001)? ~CountData1:
    (Addr == 4'b0010)? ~16'b0000000000000010:
    (Addr == 4'b0011)? ~16'b0000000000000011:
    16'b0000000000000111;
    endmodule
          Perhaps I should "latch" the data to eliminate the potential of the data changing at read time? Any feedback on the design of the "data bus" would be greatly appreciated. Thanks,

    aszeghy wrote:
    Thank you bassman59 for the explanation.  Understood; now, at least, this is a design problem. Although I originally wanted to gate or "latch" the counter data so that it could be reliably read by the MCU, I thought I was getting that by latching the count data when the address lines changed state...  Apparently not.  This further confuses me.
    Looking at this academic mux example I see:
    always @ (posedge clk )
    if (reset == 0) begin
    y <= 0;
    end else if (sel == 0) begin
    y <= a;
    end else begin
    y <= b;
    end
    Now in this synthesised mux module y is updated on the positive edge of clk. Is y updated on a change to a or b?   If y is not updated then the sensitivity list has not been ignored.
    Is this because my code's sensitivity list suggested a change of state (combinational) and the text book example suggests a rising edge (sequential)?
    Thanks again for the education,
    Andrei
    In the code above you used the template for a register (flip-flop). 
    To answer your question, y is updated only on the event in the sensitivity list. Now, that statement is true for all always blocks. It was true for your initial address-decoder example, too, because as you discovered in simulation, the Data output did not change when the counter value changed, it changed only when there was an event on Addr (when the address changed).
    Now, please note my use of the word "template" above. At its core, synthesis is all about template matching, meaning that the synthesizer looks for specific patterns in your code when deciding what sort of logic to infer. And for the case where you would like to infer registers, the template demands that you put only the clock edge detector on the sensitivity list. Why? Because in a real flip-flop, what happens on the D input isn't interesting until a clock edge occurs.
    And that is what happens here. In simuation, everything on the right-hand-side of the assignments are ignored until there is an event which matches what is on the sensitivity list. In this case, the event of interest is the rising edge of the clock. At that moment, all right-hand-sides (which include the conditionals like if, case, etc) are evaluated and then the y output is assigned.
    In the real flipflop inferred by the synthesizer, that too is what happens. This is why for a synchronous block where you are inferring a register there is no simulation/synthesis mismatch when only the clock is on the sensitivity list.

  • Run case once if value X then "only" reset when value X

    Hi, 
    I have a case structure whose "true" section is triggered by a comparison of a value against a constant, so if value > constant it runs the case. The case calls a SubVI (gmailv86), but it sends loads of emails because the condition is met lots of times before the value drops below the constant. How can I make the case send only one email, then wait until the value drops below the constant value?
    Hope this makes sense and sorry for being a newbie!
    Steve

    Other than the built in edge detector, you can also find some others online, such as in the OpenG VIs, but it also helps to understand the basic concept behind them:
    In this example, the bottom node is a feedback node, which outputs the value that was fed to it the previous time it was called. This allows you to compare the current value of the boolean to the previous one.
    Try to take over the world!
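
    As a rough, non-LabVIEW sketch of the same idea applied to Steve's threshold case (the class and method names are placeholders for the gmail SubVI call): remember whether the value was already above the constant on the previous iteration and fire only on the transition.

        public class OneShotAlert {
            private boolean previouslyAbove = false;        // the feedback-node equivalent

            /** Returns true exactly once each time value rises above limit. */
            public boolean shouldSendEmail(double value, double limit) {
                boolean above = value > limit;
                boolean fire = above && !previouslyAbove;   // rising edge of the comparison
                previouslyAbove = above;
                return fire;
            }
        }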
