Crude and fast edge detection.

Hi, I need a way to do fast edge detection. I've already got something using GetPixel, but GetPixel is really, really slow. Does anyone have a faster method? It needs to work on a 300 MHz processor with 64 MB of RAM. Accuracy doesn't really concern me. Thanks in advance.

Hi! I don't have a solution for your query. I'm doing a project in image processing and need to know the pixel value at a given coordinate. Can you post the code for getPixel? I don't know how to make getPixel work. Please post the whole code.
Thank you.
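The usual fix for slow per-pixel GetPixel loops is to copy the whole frame into a flat array with one bulk call, then do all per-pixel work on that array. A minimal sketch in Java (an assumption; the original poster's language isn't stated — the same bulk-copy idea applies to LockBits/GetDIBits in GDI). The gradient here is deliberately crude, matching the "accuracy doesn't really concern me" requirement.

```java
import java.awt.image.BufferedImage;

public class FastEdge {

    // One bulk copy of the whole image into a flat ARGB array.
    // This single call replaces width*height individual getRGB calls.
    public static int[] grab(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        return img.getRGB(0, 0, w, h, null, 0, w);
    }

    // Crude horizontal gradient: absolute difference of neighbouring
    // grey values, thresholded. Fast, cache-friendly, not accurate.
    public static boolean[] edges(int[] px, int w, int h, int thresh) {
        boolean[] out = new boolean[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 1; x < w; x++) {
                int g1 = grey(px[y * w + x]);
                int g0 = grey(px[y * w + x - 1]);
                out[y * w + x] = Math.abs(g1 - g0) > thresh;
            }
        }
        return out;
    }

    // Cheap grey conversion: average of the three channels.
    static int grey(int argb) {
        int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
        return (r + g + b) / 3;
    }
}
```

On a 300 MHz machine the bulk copy plus one pass over an int[] should be orders of magnitude faster than calling GetPixel per pixel.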

Similar Messages

  • Edge detection or gauging?

    Hi,
I am working on measuring the difference in diameter of millimeter-thick tubes when they are dry versus when they are wet. I wish to use LabVIEW's image analysis and edge detection to measure the change in diameter from the original state to the moist state. Can anyone please help me out by naming useful tools I can use to do this? I know of a tool called gauging, but I do not know how it works. I was thinking of using pixels to measure the difference, as these tubes are 1-5 mm thick in their dry (original) state, and when they are wet their diameter only increases on the scale of 10-100 micrometers. I have enough optical resolution in the images, but I am lost on how to use LabVIEW. Any advice would be greatly appreciated.
    Thank you

    Hi Ognee,
    You could definitely use some functions from the Vision Development Module to find the edges of the tubes, and then use a Caliper function to determine the distance between those edges. I'd recommend taking a look at this tutorial about edge detection using the Vision Development Module. I hope this helps!
    David S.

  • Edge detection not so great?

I have been using PS since v1.0. I have CS4 but have been trying out CS5. With CS4 I only occasionally used Refine Edge; I just found it too slow. I saw some videos on the new Refine Edge tools and it looked promising, but when I tried it on a high-res image I found it didn't really work all that well. Here is a sample. I masked the car with the pen tool, then applied a layer mask. Then I ran Refine Edge, started to move the edge detection slider, and found that it started to make my edge choppy. Now I know that you also need to use the Adjust Edge tools to get the full effect, but it seems like edge detection does more harm than good. Any thoughts?
    Thanks,
    Steve

That's why you have an edge detection brush and an edge detection eraser, and several other parameters and controls that interact with each other and give you a lot of control, or a lot of headaches if you don't know how to use them. To get the best out of your tools you need to practice with them, and, as I already said, the web is full of precious information if you search for it.
    Try some video tutorials; you wouldn't imagine how many small great things are under the hood in a piece of software like PS.
    Tommaso

  • Edge detection between the finite element mesh and a bounding box

I have a finite element mesh, and I constructed a bounding box which passes through the mesh. Now I want to detect the edges of the rectangle that intersect the mesh and construct a new mesh which resides in the rectangular box. How do I do this? Any kind of help is appreciated.
    P /------------------\ Q
    A |---- |/---|-----|----|-----|--\--| B
    |----/|----|-----|----|-----|---\-|
    |---/-|----|-----|----|-----|----\|
    |--/--|----|-----|----|-----|---- |\
    D |-/---|----|-----|----|-----|---- | \ C
    S /-------------------------------------\ R
Suppose ABCD is the FE mesh and PQRS is the bounding box. Consider the edge PS: where it intersects elements of the mesh, I will get a pentagon element when I form the new mesh containing the elements which are inside the box. The graphical representation is not clear; if you draw it on paper you can easily understand my problem.
    Now, how do I create a new mesh from the elements which reside inside the box, in which everything to the left of PS is discarded and everything to the right of the QR edge is discarded? The mesh will then have some zigzag shape, I think. Please help me out with how to solve this problem.

I think the figure is not clear, so I want to simplify the problem by considering only one element of the mesh and only one edge of the box. Suppose ABCD is a 3D quadrilateral element and PQ is the edge passing through the element. I will pass a function the coordinates of ABCD and of P (the start point of the edge) together with its normal vector; if there is an intersection, the function should return two elements according to the intersection. So we have element ABCD and an edge from P with its normal vector, and the edge intersects the element at X and Y; the algorithm should return AXYD as one element and XBCY as the second element. Please help me out in solving this task. Thanks in advance.

    I'll assume you mean a 2D, planar, linear quad. I think the general 3D problem would entail surfaces and be much more difficult. If you use higher-order shape functions the 2D problem isn't trivial, because then you might have more than two intersections.
    It COULD return two points of intersection, but you'll have to be careful about special cases:
    (1) The edge could intersect just one of the four corners,
    (2) The edge could coincide with an edge of the element,
    (3) The edge could miss the element entirely and not intersect.
    You'll have to decide what to do in the degenerate cases. The sensible thing would be to do nothing and leave the element alone.
    You'll also be concerned about mesh quality when you're done. Near degenerate elements won't perform well when you analyze them. Will you try to correct these after the fact?
    What if you end up with triangular elements? An edge might slice off a triangle and leave you with a five-noded element. What will you do with those? Create two triangles to get back to a quad? It's not so easy. You really should look into quadtree meshing. There are more general ways to attack a problem like this.
    What kind of physics are you ultimately trying to capture? What are you really doing here?
    You've described the algorithm fairly well, even if you haven't thought it all the way through.
    What's the problem? You'll have to write the code. No one will do that for you. Write some and come back with specific questions if you have difficulties. This is a Java forum, after all.
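Since this is a Java forum: the splitting step described above reduces to intersecting the cutting segment with each side of the quad and collecting the hit points X and Y. A minimal sketch of the core segment-segment test (class and method names invented for illustration); the degenerate cases the reply warns about show up here as the parallel/miss returns, and corner hits appear as t or u exactly 0 or 1.

```java
public class QuadCut {

    // Parametric segment-segment intersection:
    //   P(t) = A + t*(B-A),  Q(u) = C + u*(D-C),  t,u in [0,1].
    // Returns {x, y} at the crossing, or null when the segments
    // are parallel/collinear or do not cross (the degenerate cases
    // the caller must decide how to handle).
    public static double[] intersect(double ax, double ay, double bx, double by,
                                     double cx, double cy, double dx, double dy) {
        double rX = bx - ax, rY = by - ay;   // direction of AB
        double sX = dx - cx, sY = dy - cy;   // direction of CD
        double denom = rX * sY - rY * sX;    // 2D cross product
        if (denom == 0) return null;         // parallel or collinear
        double t = ((cx - ax) * sY - (cy - ay) * sX) / denom;
        double u = ((cx - ax) * rY - (cy - ay) * rX) / denom;
        if (t < 0 || t > 1 || u < 0 || u > 1) return null;  // miss
        return new double[] { ax + t * rX, ay + t * rY };
    }
}
```

Run this against all four sides of ABCD; exactly two non-null results give X and Y, and the quad can then be split into AXYD and XBCY. Zero, one, or collinear results are the special cases to leave alone, as suggested above.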

  • Serious issues with Refine Edge and edge detection: getting blurring and lots of contamination

    Hi guys :)
A few months back I bought a grey background after it was recommended as being easier for compositing. I was really pleased, but now I have run into some big issues and was wondering if you could spare a few moments to help me out. I've taken an image of my model on the grey background; the model has white-blonde hair, similar to this image: http://ophelias-overdose.deviantart.com/gallery/?offset=192#/d393ia7.
    What I have been doing is using the Quick Select tool, then Refine Edge, and when I get to Refine Edge that's when the issues start for me. When I'm trying to pick up the stray hairs around the model to bring them back into the picture, I'm finding that the hair, when it does come back, is quite washed out and faded, almost as if it has been repainted by Photoshop CS5 rather than brought back into the picture, when I paint over the areas with the edge detection tool. Also, even if I check the Decontaminate Colors box, it doesn't make a blind bit of difference!
    I'm on a bit of a deadline and am alarmed that I'm getting these issues with the grey background. How are these problems occurring? Can you please give me some idea? I would be really grateful. I have been on YouTube, going over tutorials, and reading books to NO avail :(
    I keep getting this issue even when I tried editing a test picture, from a Google search, of a model with brown hair.
    This tool is supposed to be amazing. I'm not expecting amazing, but I'm at least expecting it to not look like a bad, blurred paint job with contamination.
    Really grateful, thanks :)
    M

    Hi Noel,
Thank you for the reply. I have attached some screenshots for you.
    I'm working with high-resolution photos in NEF. I am trying to put the model onto a darker background, but I haven't even got that far, as I can't even cut her out.
    Decontaminate doesn't seem to be working at all, to be honest; it makes little difference when I check that box.
    I'm getting nothing close to the image of the lion; that's brilliant!
    This is the original image, taken on a Calumet grey backdrop.
    And this is what I keep getting, results-wise: see, around the hair there is still rather a lot of grey contamination, and the hair literally looks painted and smudged.

  • Use of edge detection in pattern matching algorithm?

    Hello all,
I work for a group at Texas A&M University researching two-phase flow in reactors.  We have been using IMAQ Vision and had a question regarding the use of edge detection in the pattern matching algorithm.  I had seen the webcast entitled “Algorithms that Learn: The Sum and Substance of Pattern Matching and OCR” (http://zone.ni.com/wv/app/doc/p/id/wv-705), and in the webcast it was mentioned that the pattern matching algorithm uses edge detection to (as best I can tell) reduce the candidate list further and to perform subpixel location calculations.  However, I was wondering if this edge detection process is still performed if we do not use the subpixel location calculation (i.e. if we uncheck the “Subpixel Accuracy” check box)?  Also, if edge detection is performed in the pattern matching algorithm, is it consistent with the method described in Chapter 13 of the Vision Concepts Manual (“Geometric Matching”)?  Finally, if edge detection is performed in a manner consistent with Chapter 13 of the manual, how does the geometric matching correlation number affect the correlation calculation that was performed in the previous steps?  Are they simply multiplied together?
    Many thanks!
      -Aaron

    Jeff,
    We are using Imaq Vision Builder 4, with the included pattern matching that can be accessed via the menus (i.e. we haven't created a custom VI or anything.)  We are using the software to locate bubbles during boiling experiments and want a deeper understanding of what is going on "behind the scenes" of the algorithm, as we may have to explain how it works later.  We have been able to determine most of what we need from the webcast I had previously mentioned, except for the use of edge detection in the pattern matching algorithm.
    At the scales involved in our experiments, subpixel accuracy is really not needed and therefore we do not use it.  If edge detection is used in the pattern matching algorithm only to determine location with subpixel accuracy, then we do not really need to know how it works because we do not use that calculation.  Inversely, of course, if edge detection is used during pattern matching even without enabling subpixel accuracy, then we would like to have a fairly good understanding of the process.
    I've read most of the section on geometric matching in the Vision Concepts Manual and wondered if the process described there for edge detection (or feature matching) was also used in the basic pattern matching algorithm?
    To summarize, if edge detection is not used in the basic pattern matching algorithm without subpixel accuracy, then that is all I need to know.  If edge detection is used for pattern matching even without using the subpixel accuracy calculation, then we would like to learn more about how exactly it is used in the pattern matching algorithm.
    We would really appreciate any help you could give us... we've been digging around on the NI website for a couple of weeks now trying to fit together all the pieces of the pattern matching puzzle.
    Many thanks!
        Aaron

  • Edge detection on a moving object

    Hi
I have a problem with edge detection. I have pulleys of different types:
    pulleys where the diameter is the same but the height differs, and pulleys of different diameters where the number of teeth (ridges) varies.
    I need to identify one type of pulley from another. I am trying to use the following logic:
    1. Locate the base of the pulley (which is distinct) using pattern matching.
    2. Define a coordinate system based on this pattern match.
    3. Define an edge detection tool using the coordinate system (this is where I am running into a wall).
    I have used extracts of the examples: battery inspection, gauge and fuse inspection.
    I am not able to define the edge tool (Edge Detector under Vision Assistant 7.1).
    I am trying to use the coordinates because, if the pulley moves a bit, the edge detector ends up away from it (in Vision Assistant).
    THE CATCH IS: I have to do this in VB, since the machine vision has to be integrated into an existing VB application.
    NOTE: attached image of pulley.
    Now can someone help me, please?
    Thanks in advance
    Suresh
    Attachments:
    pulley.png ‏13 KB

    Hi Suresh -
    I took your image and expanded the background region to make three versions with the pulley in different positions.  Then I loaded the three images into Vision Assistant and built a script that finds the teeth of the pulley.  Although Vision Assistant can't generate coordinate systems, I used edge detection algorithms to define a placeholder where the coordinate system code should be added.
    I chose to use a clamp and midpoint instead of the Pattern Matching tool because of the nature of the image.  Silhouetted images are difficult to pattern match, and the vertical line symmetry of the pulley makes it even more difficult to find a unique area of the object that can be recognized repeatedly.  Instead, I generally find more success using algorithms that are designed around edge detection.  I assumed that the "notch" at the bottom of the pulley will always be present in your images, so I performed a Clamp in the Horizontal Min Caliper to find this notch.  The midpoint of this clamp section can be used as the origin of a coordinate system around which the following Edge Detection steps are performed.  This allows you to find the teeth of the pulley no matter where it is in the image.  (Note that the VA script was built using pulley with BG 3.png.)
    The Builder File that was generated from the script gives you all the code you need except for the Caliper function and the Coordinate System functions.  I added comments to these sections to show what type of code you will have to add there.
    It may not be exactly the application you need to perform, but it should give you a solid starting point.  I hope it helps.
    David Staab, CLA
    Staff Systems Engineer
    National Instruments
    Attachments:
    Pulley Teeth.zip ‏18 KB

  • Edge detection using IMAQ Find Edge/IMAQ Edge Tool 3

    Hi,
    I have several images with useless background around a rectangular ROI (coordinates unknown!). So I tried using the two VIs mentioned above in order to detect these edges so that I can remove them. Regretfully, this does not work as planned.
IMAQ Find Edge usually finds an edge, but not where it should be: the edge is detected earlier than I want it to be.
    IMAQ Edge Tool 3 sometimes does not find an edge at all, sometimes it finds the edge perfectly. Here I use the 'get best edge' option, which delivers the best results with all the images I tested it with.
    All the other options are also not changed while running the VI with the images I have.
    Does anyone have intimate knowledge of these VIs' algorithms, how they work, how they can be manipulated, ... ?

    Hi,
    Can you upload an example image?
That would clarify what you're trying to do.
    Most of the time a change of mindset solves the problem.
    Kind regards,
    - Bjorn -
Have fun using LabVIEW... and if you like my answer, please pay me back in Kudos
    LabVIEW 5.1 - LabVIEW 2012

  • Edge Detection

    I'm about to start working on an application that receives images from a wireless webcam (attached to a roving robot) and needs to process them to decide where the robot should travel to next. The processing needs to identify obstacles, walls, and doorways and guide the robot through a door into another room.
    I know I'm going to need to use edge detection on the images, but don't know the best place to start. Any ideas? Also, if anyone has any experience with this, any idea what kind of performance I should expect? Given the limitations of the robot's movement, I will not need to process more than 10 frames per second. Assuming decent processing power, is 10 fps doable? 5?
    Thanks in advance...

Edge detection is basically a convolution operation. An image is simply a matrix of pixels, and this matrix may be convolved with a small matrix of values called a kernel.
    import java.awt.*;
    import java.awt.image.*;
    // Loading the image
    String imageName = "image.jpg";
    Canvas c = new Canvas();
    Image image = c.getToolkit().getImage(imageName);
    MediaTracker waitForIt = new MediaTracker(c);
    waitForIt.addImage(image, 1);
    try { waitForIt.waitForID(1); }
    catch (InterruptedException ie) { }
    // Buffering the image
    BufferedImage src = new BufferedImage(
        image.getWidth(c), image.getHeight(c),
        BufferedImage.TYPE_INT_RGB);
    Graphics2D big = src.createGraphics();
    big.drawImage(image, 0, 0, c);
    // Edge detection: convolve with a 3x3 Laplacian kernel
    float[] values = {
        0f, -1f,  0f,
       -1f,  4f, -1f,
        0f, -1f,  0f
    };
    Kernel k = new Kernel(3, 3, values);
    ConvolveOp cop = new ConvolveOp(k);
    BufferedImage dest = cop.filter(src, null);
    Play around with the values in the kernel to get a better idea of how this all works.

  • How to manage slow and fast moving goods in Demand Planning

    Hi All,
Kindly let me know how to manage slow- and fast-moving goods in Demand Planning:
    1. First, how to detect slow- and fast-moving goods
    2. Which DP model to use
    3. Any best practices when dealing with slow- and fast-moving goods
    Thanks
    Arun

Arun,
    There are two main concerns with forecasting slow-moving goods. For fast-moving goods, yes, trend, level, and seasonality combined with a decent demand level get you through.
    For slow movers, either you don't have demand in the market for that product, in which case, if the product is not critical (like go/no-go spare parts), I recommend just averaging out the demand over a period and going with that; any errors incurred there won't have a significant business impact. If, on the other hand, it is critical, safety stock and/or contractual obligations come into play.
    Or there is market demand but your market share is too low for whatever reason. Then it gets tricky, because you need one or more predictor variables of which you have good knowledge (such as the CPI index, demand for related products, etc.). The forecast is then done on the predictor variable(s), and the low market share is applied to 'translate' the expected demand to the product in question. Unfortunately, for the case of low demand/low market share I have no idea how to implement it in SAP. Since forecasting is just the tip of the iceberg, it may pay to do it outside of SAP (say, in Excel) and input it manually (unless the products are many and varied).
    Hope this helps. Please let me know if this makes sense and/or applies to your question.
    Rodrigo
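The "just average out the demand over a period" suggestion above can be sketched as a plain moving average (Java here purely for illustration; the class name and the period length n are assumptions — pick n from your planning cycle):

```java
public class SlowMoverForecast {

    // Forecast next period as the mean of the last n observed periods.
    // If fewer than n periods exist, average whatever history there is.
    public static double movingAverage(double[] demand, int n) {
        if (n > demand.length) n = demand.length;
        double sum = 0;
        for (int i = demand.length - n; i < demand.length; i++) {
            sum += demand[i];
        }
        return sum / n;
    }
}
```

For a non-critical slow mover, errors from this kind of flat forecast are usually tolerable, which is the point being made above.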

  • Negative Edge Detection DI

    Hello,
I want to read in several counter inputs on my USB-6008, but it has only one counter input.
    The frequency of the pulses to read isn't high, so I thought maybe I could use a digital input: when the input is high, a certain value is incremented. The only problem is that the high level of the pulse lasts too long, so the value is incremented more than once. To solve this, I think I need edge detection that generates a pulse when a negative edge has occurred. Can somebody help me solve this problem?
    Greetings,
    Kevin Duerloo
    KaHo Sint-Lieven, Belgium

    Hi,
There is no change detection available on the digital lines of these USB devices, so you will not be able to trigger on a rising or falling edge using the digital lines. The only thing you could do is use the digital trigger line: create a program that starts an acquisition on a digital trigger (PFI0), set it to Rising or Falling, and put this task inside a loop. Every time an edge occurs, the acquisition will start, and each time the data flow arrives at DaqmxBaseRead.vi/DaqmxRead.vi you can add some code that increments a value. Put a sequence around the Read.vi and add the code there. This is just a workaround; you can test whether it is good enough for you.
    Regards.
    JV
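If the pulse rate is low enough to poll the line in software, the negative-edge detection asked for above reduces to "previous sample high, current sample low", so a long high pulse increments the count exactly once. A sketch of that logic (in Java just to show the state machine; the real version would live inside the acquisition loop, and the class name is invented):

```java
public class FallingEdgeCounter {

    private boolean prev = false;  // level seen on the previous poll
    private int count = 0;         // number of negative edges so far

    // Feed one digital sample per poll; returns the running count.
    // Only a high -> low transition increments, so a pulse that stays
    // high across many polls is counted once.
    public int sample(boolean level) {
        if (prev && !level) {
            count++;               // negative (falling) edge detected
        }
        prev = level;
        return count;
    }
}
```

The same idea works for rising edges by testing `!prev && level` instead.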

  • Automating Edge Detection?

I have a question about the IMAQ edge detection VI. I am designing a VI that will be counting edges in an image many times in succession. My question is: since the area in question will not change as the image changes, is it possible to just measure along the same line every time? Can I select start and end pixel locations for the edge detection? At this point I have to click and drag a line, and it is very tedious for the number of images that will be measured.
    Thanks,
    Mack

    Hi Mack24, 
Which tool are you using for edge detection? The shipping example "Edge Detection.vi" allows you to specify a line ROI along which to detect edges.
    The example can be found under Toolkits and Modules->Vision->Caliper->Edge Detection.vi
    -N
    National Instruments
    Applications Engineer
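Outside of the NI example, the "same line every time" idea is easy to sketch generically: fix the two endpoints in code, sample the pixels along the line, and count grey-level transitions. A minimal illustration in Java (names invented; a horizontal line and a flat grey-level array are assumed for brevity):

```java
public class LineEdgeCount {

    // Count edges along the fixed horizontal line y, from x0 to x1,
    // in a row-major grey-level image of width w. An "edge" is any
    // step between adjacent pixels larger than thresh. Because the
    // endpoints are parameters, no interactive click-and-drag is
    // needed; every image is measured along the identical ROI.
    public static int countEdges(int[] grey, int w, int y,
                                 int x0, int x1, int thresh) {
        int edges = 0;
        for (int x = x0 + 1; x <= x1; x++) {
            if (Math.abs(grey[y * w + x] - grey[y * w + x - 1]) > thresh) {
                edges++;
            }
        }
        return edges;
    }
}
```

An arbitrary (non-horizontal) line would need Bresenham-style stepping between the endpoints, but the counting logic stays the same.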

  • Edge Detection & Edge Coordinates

I need to do edge detection and get the edge coordinates from that operation. It's been a while since I've done stuff like this, so I'm wondering if maybe JAI has this? If not, what should I look for in Java 2D? (Although at that point I'm writing it from scratch, right?)
    I'm not asking for homework here, but I am very busy and would like to be pointed in the right direction quickly. Thus, the 10 dukes as compensation. :-)

    anybody?
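Whether or not JAI is used, one plain-Java route to "edge coordinates" is: run an edge filter first (for example the ConvolveOp approach shown in the Edge Detection thread above), then scan the filtered image and keep the coordinates of every pixel above a threshold. A sketch, with the class name invented for illustration:

```java
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

public class EdgeCoords {

    // Scan a row-major grey-level edge-response image and collect the
    // (x, y) coordinates of every pixel whose response exceeds thresh.
    public static List<Point> collect(int[] grey, int w, int h, int thresh) {
        List<Point> pts = new ArrayList<>();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (grey[y * w + x] > thresh) {
                    pts.add(new Point(x, y));
                }
            }
        }
        return pts;
    }
}
```

The threshold is the only tuning knob here; a fancier version would thin the edges (keep only local maxima) before collecting coordinates.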

  • Is Java ME capable of edge-detecting images taken from a phone camera?

I need help with creating a working edge detection method. At the moment I have the edge detection algorithm and have transformed it into code, but I cannot see why it keeps returning a blue screen. Can anyone tell me what I am doing wrong and how to fix it:
    public Image edgeDetect(Image colourImg){
              int width = colourImg.getWidth();
             int height = colourImg.getHeight();
             int[] imageDataX = new int[width* height];
             colourImg.getRGB(imageDataX, 0, width, 0, 0, width,height);
             Image greyImg = Image.createImage(width,height);
             Graphics g = greyImg.getGraphics();
             int[] soblex = {-1, 0, 1, -2, 0, 2, -1, 0, 1};
             int dooplex = 0;
             int dooplex1 = 0;
             int dooplex2 = 0;
             int doople = 0;
             int doople1 = 0;
             int doople2 = 0;
             int count = 0;
             int[] sobley = {1, 2, 1, 0, 0, 0, -1, -2, -1};
             int doopley = 0;
             int doopley1 = 0;
             int doopley2 = 0;
             int count2 = 0;
             for(int y=1; y<height-1;y++){
                 for(int x=1;x<width-1;x++){
                      if(doopley == 0 || doopley ==height-1){doople = 0;}
                      else if (dooplex == 0 || dooplex == height-1){doople = 0;}
                      for(int q = -1; q<=1; q++){
                           for(int r= -1; r<=1; r++){
                                int c = imageDataX[((q+y)*width+(x+r))];
                                int redx = (c&0x00FF0000)>>>16;
                                int d = imageDataX[((q+y)*width+(x+r))];
                                int bluex = (d&0x0000FF00)>>>8;
                                int e = imageDataX[((q+y)*width+(x+r))];
                                int greenx = e&0x000000FF;
                                dooplex +=  redx * soblex[count];
                                dooplex1+=  bluex * soblex[count];
                                dooplex2+=  greenx * soblex[count];
                                int f = imageDataX[((q+y)*width+(x+r))];
                                int redy = (f&0x00FF0000)>>>16;
                                int G = imageDataX[((q+y)*width+(x+r))];
                                int bluey = (G&0x0000FF00)>>>8;
                                int h = imageDataX[((q+y)*width+(x+r))];
                                int greeny = h&0x000000FF;
                                doopley += redy * sobley[count2];
                                doopley1+= greeny * sobley[count2];
                                doopley2+= bluey * sobley[count2];
                                count++;
                                count2++;
                      count= 0;
                      count2=0;
                      //MainMenu.append(" / " + doople);
                      //MainMenu.append(" / " + doople1);
                      //MainMenu.append(" / " + doople2 + "\n/*********/\n/********/");
                      doople = Math.abs(doopley)+ Math.abs(dooplex);
                      doople1 = Math.abs(doopley1)+ Math.abs(dooplex1);
                      doople2 =Math.abs(doopley2)+ Math.abs(dooplex2);
                      doople = (int)Math.sqrt(doople);
                      doople1= (int)Math.sqrt(doople1);
                      doople2 = (int)Math.sqrt(doople2);
                      //MainMenu.append(" / " + doople);
                      //MainMenu.append(" / " + doople1);
                      //MainMenu.append(" / " + doople2 + "\n/$$$$$$$$$/\n/$$$$$$$$/");
                      if(doople>=255)
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
                      else if (doople<=0)
                      {g.setColor(0&0x00ff0000<<16, 0&0x0000ff00<<8, 0&0x0000ff);
                      else
                      {g.setColor(doople, doople1, doople2);
                      if(doople1>=255)
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
                      else if (doople1<=0)
                      {g.setColor(0&0x00ff0000<<16, 0&0x0000ff00<<8, 0&0x0000ff);
                      else
                      {g.setColor(doople, doople1, doople2);
                      if(doople2>=255)
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
                      else if (doople2<=0)
                      {g.setColor(0&0x00ff0000<<16, 0&0x0000ff00<<8, 0&0x0000ff);
                      else
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
                      dooplex = 0;
                      dooplex1 = 0;
                      dooplex2 = 0;
                      doopley = 0;
                      doopley1 = 0;
                      doopley2 = 0;
                      doople = 0;
                      doople1 = 0;
                      doople2 = 0;
                      imageDataX[y*width+x] = g.getColor() & 0xFFFFFFFF;
             greyImg= Image.createRGBImage(imageDataX,width,height,false);
             return greyImg;
         }

    I think the problem lies in the two areas where I take the RGB information from the pixel and try to put it back into a Graphics that later gets turned into an Image:
g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
    and:
    for(int q = -1; q<=1; q++){
                           for(int r= -1; r<=1; r++){
                                int c = imageDataX[((q+y)*width+(x+r))];
                                int redx = (c&0x00FF0000)>>>16;
                                int d = imageDataX[((q+y)*width+(x+r))];
                                int bluex = (d&0x0000FF00)>>>8;
                                int e = imageDataX[((q+y)*width+(x+r))];
                                int greenx = e&0x000000FF;
                                dooplex +=  redx * soblex[count];
                                dooplex1+=  bluex * soblex[count];
                                dooplex2+=  greenx * soblex[count];
                                int f = imageDataX[((q+y)*width+(x+r))];
                                int redy = (f&0x00FF0000)>>>16;
                                int G = imageDataX[((q+y)*width+(x+r))];
                                int bluey = (G&0x0000FF00)>>>8;
                                int h = imageDataX[((q+y)*width+(x+r))];
                                int greeny = h&0x000000FF;
                                doopley += redy * sobley[count2];
                                doopley1+= greeny * sobley[count2];
                                doopley2+= bluey * sobley[count2];
                                count++;
                                count2++;
                         }

    Thank you in advance.

    ...it keeps returning a blue screen...
                      if(doople>=255)
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
                      else if (doople<=0)
                      {g.setColor(0&0x00ff0000<<16, 0&0x0000ff00<<8, 0&0x0000ff);
                      else
                      {g.setColor(doople, doople1, doople2);
                      if(doople1>=255)
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
                      else if (doople1<=0)
                      {g.setColor(0&0x00ff0000<<16, 0&0x0000ff00<<8, 0&0x0000ff);
                      else
                      {g.setColor(doople, doople1, doople2);
                      if(doople2>=255)
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
                      else if (doople2<=0)
                      {g.setColor(0&0x00ff0000<<16, 0&0x0000ff00<<8, 0&0x0000ff);
                      else
                      {g.setColor(255&0x00ff0000<<16, 255&0x0000ff00<<8, 255&0x0000ff);
As far as I understand, the above code is equivalent to:
                      if(doople>=255)
                      {g.setColor(0, 0, 0xff); // blue
                      else if (doople<=0)
                      {g.setColor(0, 0, 0xff); // blue
                      else
                      {g.setColor(doople, doople1, doople2);
                      // no matter what was above, color will be defined below:
                      if(doople1>=255)
                      {g.setColor(0, 0, 0xff); // blue
                      else if (doople1<=0)
                      {g.setColor(0, 0, 0xff); // blue
                      else
                      {g.setColor(doople, doople1, doople2);
                      // no matter what was above, color will be defined below:
                      if(doople2>=255)
                      {g.setColor(0, 0, 0xff); // blue
                      else if (doople2<=0)
                      {g.setColor(0, 0, 0xff); // blue
                      else
                      {g.setColor(0, 0, 0xff); // blue
    //......which in turn is equivalent to:
                            // no matter what was set above, color will be defined below...
                      if(doople2>=255)
                      {g.setColor(0, 0, 0xff); // blue
                      else if (doople2<=0)
                      {g.setColor(0, 0, 0xff); // blue
                      else
                      {g.setColor(0, 0, 0xff); // blue
    //......which finally becomes g.setColor(0, 0, 0xff); // blue

  • Analyze video with edge detection

Hello,
    I'm new to LabVIEW 7.1 and Vision Assistant 7.1. For my assignment, I need to analyze a traffic video using edge detection, but I have no idea at all how to do it, so please help me. I need to hand in this assignment before the beginning of September. How should I start this analysis? What tutorial do I need to study? What steps are needed to analyze the video with edge detection? Is it necessary to use Vision Assistant as well?
    Thanks for the help!

    Hi Chin,
    Thank you for posting on the NI forums!  I noticed you created two forum posts with the same question.  The link below points to the thread in which I will help you. Please refer to it for future posts and replies.
    http://forums.ni.com/ni/board/message?board.id=170&message.id=266560
    Thank you,
    Maclean G.
    Applications Engineering
    National Instruments
