Clever particle filter

Hello,
I am trying to solve a task where I need to exclude blobs in a binary image based on blob size,
and the size threshold changes with the blob position.
Since this is not doable with the standard particle filter, I built a mask out of the results.
The algorithm is not fast enough, and it keeps bad blobs when later bounding boxes overlap them.
Please suggest another method.
Bublina

Hi Bublina!
Your method is a really good start; I would like to add a few ideas to it. First of all, you can add many objects (rectangles, ovals, points, etc.) to a single ROI descriptor, so it is easier to list all removed objects in a single ROI and then do the masking all at once. You can also speed up execution by using Particle Analysis.vi, which performs only the measurements you select, not all of them.
Here's some example code showing these changes (sorry for the picture instead of a snippet; my LV is acting up a bit).
Just replace the code at the red arrow with your decision-making code and it should work.
As you can see, it is still quite a simple filter (it masks bounding rectangles), so if you have strangely shaped particles or particles that are too close together, some useful data might be erased. If you need a more precise approach, you could label your particles (assigning a different color to each one) and then delete colors from the image based on the criteria you specify.
I hope this helps.
Best regards,
Andrew Valko
NI Hungary
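
To make the labeling idea above concrete, here is a minimal sketch in Python/OpenCV rather than LabVIEW (the position-dependent threshold function is a made-up example; adapt it to your own rule):

    import cv2
    import numpy as np

    def size_threshold(cx, cy):
        # Hypothetical position-dependent size limit: blobs further to the
        # right must be larger. Replace with your own rule.
        return 50 + 0.1 * cx

    def filter_blobs(binary):
        # binary: uint8 image, blobs = 255, background = 0
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
        keep = np.zeros(n, dtype=bool)
        for i in range(1, n):                      # label 0 is the background
            area = stats[i, cv2.CC_STAT_AREA]
            cx, cy = centroids[i]
            keep[i] = area >= size_threshold(cx, cy)
        # Erase rejected blobs by their labels, not their bounding rectangles,
        # so overlapping neighbours are left untouched.
        return np.where(keep[labels], 255, 0).astype(np.uint8)

Because each blob is removed by its own label rather than by masking a bounding rectangle, good blobs that overlap a rejected blob's rectangle are preserved, which is exactly the failure mode described in the original question.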

Similar Messages

  • Particle filter makes no difference? Trying to ignore particles of 2 pixels or less

    I have an image, and I use Vision Builder to identify particles.
    Attached are the VIs generated by Vision Builder and image.jpg. I run the particle analysis with and without the particle filter, and the number of particles is 17 either way. Why?
    Attachments:
    image10.jpg (97 KB)
    Vision_Builder_white_filter2.vi (92 KB)

    Your parameters for the particle filter are reversed. If you swap the 0 and 2, everything works fine.
    One interesting observation is that the upper limit is not included, so these settings only eliminate single pixel particles. To eliminate 2 pixel particles, you need the upper limit to be 3 (or it could be 2.5, etc.).
    Bruce
    Bruce Ammons
    Ammons Engineering
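
    For reference, the removal range Bruce describes behaves like a half-open interval; a tiny, hypothetical sketch (not an NI VI) of the same logic:

        def particle_is_removed(area, lower=0, upper=2):
            # Half-open range [lower, upper): with upper = 2 only 1-pixel
            # particles are removed; raise upper to 3 (or 2.5) to remove
            # 2-pixel particles as well.
            return lower <= area < upper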

  • Particle filter 2

    I've got LV 7 and Vision 7 installed. When I try to create a LabVIEW VI from a script in Vision Assistant, the builder cannot find the VI "Particle Filter 2" and asks for help. Where can I find this VI?

    Fixed the problem: the installation had some problems removing the old Vision edition, but after removing it manually and installing the new version it worked fine.

  • How to extract X,Y-coordinates after Particle Analysis

    Hi there,
    I'd like to use Particle Analysis for a kind of motion detection. My problem is I can't get the information out...
    How do I get the X and Y coordinates (center of mass) of a particle (of a certain area, in case there is more than one particle)?
    For example:
    Area between 100 and 150... what are X and Y?
    Thanks for your help

    Hello seasoo,
    Have a look at the attached vi.
    For your analysis, the IMAQ Particle Analysis Report VI will be appropriate. Use a FOR loop to get all the values, unless you know in advance how many particles you will get. You can then use the values as needed and compare them to your range.
    Use the IMAQ Particle Filter 3 to isolate particles based on your criteria.
    Hope that helps.
    Attachments:
    oval 1.jpg (25 KB)
    Untitled 1.vi (110 KB)
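
    An equivalent flow sketched in Python/OpenCV rather than the attached LabVIEW VI (the area range 100 to 150 is the example from the question):

        import cv2

        def centers_in_area_range(binary, lo=100, hi=150):
            # Measure every particle, then return the centres of mass of those
            # whose area falls inside the requested range, similar to looping
            # over the IMAQ Particle Analysis Report output.
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
            return [tuple(centroids[i])
                    for i in range(1, n)                      # skip the background
                    if lo <= stats[i, cv2.CC_STAT_AREA] <= hi]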

  • IMAQ - particle recognition

    I've been working on this problem for a while, and am having trouble finding an answer. I'm working with Vision Assistant 8.0 and LabVIEW 8.0.
    I have an image (I16 greyscale, exported to file as a JPEG 2000 file). I'm trying to apply a threshold filter to isolate a single particle (highest intensity), then find the centroid of that particle. In order to do this, I adjust the threshold until there is only one particle left. For some reason, LabVIEW doesn't like the combination or the sequence of filters that I chose to use. One of two things is happening...
    - The palette setting for the output image (binary) conflicts with the image input.
    - Or my particle filter measures 4 particles when there is only one visible particle.
    I put some notes on the VI to describe how I want my program to operate. Also, as a side note, you need to specify the location of the attached image (grey.jpeg) before you run the program.
    Attachments:
    grey.jpg (3 KB)
    centroid processing16bit.vi (77 KB)

    Hello System_Eng,
    Thank you for using the National Instruments discussion forums. After looking at your Vision code, I believe I understand what is happening. Whenever your while loop iterated, you probably assumed that both the image src and image dst inputs to the Threshold VI were being reinitialized each time. When you use IMAQ Create, you are actually allocating memory to hold your image information, so whenever you change that image information, it changes in memory. When you use the Threshold VI, the image src and the image dst must be of the same type. So when you cast your image into an 8-bit format later in the while loop, it stayed an 8-bit image when the loop iterated. What you need to do is cast it into a 16-bit image at the beginning of the loop each time. I have attached a screenshot of how I modified your code. I hope this helps. Let me know if it does not make sense or you are still unclear. Thanks and have a great day.
    Regards,
    Mark T
    Applications Engineer
    National Instruments
    Attachments:
    Isolate Particle.jpg (109 KB)
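
    The same pitfall exists in any environment that reuses image buffers across iterations. A loose NumPy/OpenCV analogue of the fix Mark describes (names and the stopping rule are illustrative; this is not the attached VI):

        import cv2
        import numpy as np

        def isolate_brightest_particle(src16, start_level=60000, step=500):
            # src16: uint16 image. Re-create the 16-bit working copy on every
            # pass so the source and destination stay the same type, instead of
            # letting an 8-bit result from a previous pass leak into the next.
            level = start_level
            while True:
                work = src16.astype(np.uint16, copy=True)     # fresh 16-bit buffer
                binary = (work >= level).astype(np.uint8) * 255
                n_particles = cv2.connectedComponents(binary)[0] - 1
                if n_particles <= 1:
                    return binary, level                      # one particle left
                level += step                                 # raise the threshold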

  • How to select the particles after using the Particle Analysis

    Can I select the particles directly after the particle analysis, like choosing #1, #2, ..., or should I run the particle filter using the measured parameter?

    Hello Ishi,
    I now have a much better idea of what you are trying to do. If you run the IMAQ Particle Analysis Report VI on the thresholded image that contains particles, it returns an array of clusters containing specific information for each particle, including the area, number of holes, bounding rectangle coordinates, center of mass, orientation, and dimensions.
    To find out which particle is which in the image, index the Particle Reports (Pixels) output of this VI, unbundle the cluster to extract the Bounding Rectangle cluster, and then pass it to the Rectangle input of an IMAQ Overlay Rectangle VI.  This VI will overlay a bounding rectangle onto the image to show where the particle is located.  You can specify which particle you want highlighted by changing the array index of the Index Array VI that is responsible for extracting a specific cluster of particle information from the Particle Reports output of the IMAQ Particle Analysis Report VI.
    Regards,
    Mike T
    National Instruments
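
    A rough Python/OpenCV equivalent of the overlay Mike describes (the particle index and colour are only examples):

        import cv2

        def highlight_particle(binary, display_bgr, index=1):
            # Index the per-particle statistics (the analogue of indexing the
            # Particle Reports array) and draw that particle's bounding rectangle.
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
            x, y, w, h = (int(v) for v in stats[index, :4])   # left, top, width, height
            cv2.rectangle(display_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
            return display_bgr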

  • How do image borders affect calculation of "center of mass" of particles?

    Hi!
    I have a question about how borders in the images affect calculation of "center of mass".
    My original image (0) is a 640x480 grayscale image with a border of 3.
    I make a binarized image (1) of the original image (0) to search for particles. For each particle I find, I use the ROI for the particle to extract a third image (2) from the original image (0).
    All of the images have borders of 3.
    When using the ROI from particles in image (1), are the bounding-box coordinates related to the outer edges of the image (which would make the image actually 646x486), or are they related to the original origin of the image (making the image "logically" 640x480)?
    The same question goes for the calculation of the center of mass of the particle when using image (2). After running a particle filter on image (2), I use imaqCalcCoeff() to calculate IMAQ_CENTER_MASS_X and IMAQ_CENTER_MASS_Y, stored in local_x and local_y. To get global_x and global_y, which is the center point of the particle relative to the origin of image (0), I say that:
    global_x = local_x + boundingBox.left;
    global_y = local_y + boundingBox.top;
    Do I need to make further adjustments since I am using borders, e.g. adding/subtracting 3 from global_x and global_y to get a correct answer, or does IMAQ Vision take care of this for me?
    Do I actually have to define:
    global_x = local_x + boundingBox.left - borderSize;
    or
    global_x = local_x + boundingBox.left - 2*borderSize;
    I guess (and hope) that the answer is no, and that I can use:
    global_x = local_x + boundingBox.left;
    ...but I need it confirmed.
    Thanks,
    Torbjørn

    The image border is only used for processing (filters, etc.). It is not included in any coordinates or measurements. Therefore, don't include it in any calculations.
    Bruce
    Bruce Ammons
    Ammons Engineering
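
    Bruce's answer in code form: the border never shows up in any coordinates, so mapping a centroid measured inside an extracted ROI back to the full image only needs the bounding-box offset. A hypothetical Python/OpenCV sketch:

        import cv2

        def global_center_of_mass(binary_full, bounding_box):
            # bounding_box = (left, top, width, height), measured on the full image.
            left, top, w, h = bounding_box
            roi = binary_full[top:top + h, left:left + w]     # image (2) in the question
            m = cv2.moments(roi, binaryImage=True)            # assumes a non-empty particle
            local_x, local_y = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # No border term: global = local + bounding-box offset.
            return local_x + left, local_y + top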

  • Image processing on matched patterns

    Hi all
    I am using LabVIEW 7.1 for image processing. I have used the pattern matching example to suit my requirements.
    Now I want whatever image processing operations I perform after pattern matching (threshold, circle detection, particle filter, etc.) to be
    performed only on the patterns that have been recognised, not on the entire image. How can this be done?
    Thanks all in advance

    Hi James,
    I suggest you extract that region from the main image using the ROI tools,
    so that you are left with a cropped image for your analysis.
    Paste it onto a new blank image for your processing.
    Regards,
    Sundar.
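
    A minimal sketch of Sundar's suggestion in Python/NumPy (the match rectangle is assumed to come from your pattern-matching step):

        import numpy as np

        def isolate_match(image, match_rect):
            # Copy only the matched region onto an otherwise blank image, so the
            # following steps (threshold, circle detection, particle filter, ...)
            # operate on nothing but the recognised pattern.
            left, top, w, h = match_rect
            blank = np.zeros_like(image)
            blank[top:top + h, left:left + w] = image[top:top + h, left:left + w]
            return blank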

  • How to detect pulsating light (e.g. torch light)

    Hi all,
    I am trying an application where I need to do intensity-based object detection. The inputs provided for the algorithm are the threshold intensity, the object size (minimum 3 to 100 pixels), and pulsation (that is, the torch light is on in one frame and off in the next).
    I have already tried IMAQ threshold, particle filter and particle analysis to perform detection and draw a bounding box for the detected object. But my problem is how to extend this algorithm so it marks a bounding box only for the pulsating (blinking) light object.
    I have thought of frame differencing, but I don't know how to do it in LabVIEW and I am worried about lag in the output video.
    So kindly help me out with a better idea to achieve this.
    with regards,
    Sri

    Hello,
    in this case you could try the following:
    1. take a reference image when all stationary lights are ON and the blinking light is OFF,
    2. in every frame, subtract the reference frame from the current frame,
    3. threshold the image,
    4. (do some morphological operations to filter out the noisy pixels, e.g. erosion),
    5. if the blinking light is on, the thresholded image should show a distinct region that is the result of image subtraction.
    Anyway, if your blinking light is stationary, why not threshold only in the region of the blinking light? You could select the region of interest once at the beginning, or is this not a possibility?
    Best regards,

    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Can anyone help me find the diameter of the holes using IMAQ Vision?

    I have to measure the diameter of the holes and generate a report in the format given below (I have also enclosed the image to be processed).
    example:
    HOLE SPECIFICATIONS, PLATE NO. 1
    Hole   X      Y      Diameter   X      Y
    C      8.4    3.79   3.122      8.472  3.675
    D      8.51   3.54   3.117      8.579  3.647
    F      0      0      1.089      0      0
    H      4.08   9.474  1.083      4.045  9.731
    J      3.78   9.86   0.845      3.698  9.957
    Can anyone help me solve it? Thanks in advance!
    Attachments:
    03260005.JPG (103 KB)

    Hello.
    I have some suggestions for the image processing component of your application. I have attached a Vision Assistant script to this post (Vision Assistant 7.1) which performs essentially the following steps. These steps can be followed in any programming environment and should yield decent results:
    1. Extract the luminance plane from the RGB image (simply to get it into 8-bit mono format for the thresholding; this could also be done with a color threshold VI).
    2. Threshold the image so that the bright components are kept and the dark ones are set to 0.
    3. Perform a particle filter to remove the particles that are too large (the surrounding area).
    4. Perform a particle analysis on the remaining particles, which should be the holes in the plate only. Set the particle analysis to return 'Center of Mass X' and 'Center of Mass Y' for the location, and the perimeter (from which the diameter follows).
    This type of processing allows the plate to be rotated, have different patterns, etc., and you should still be able to get the information about the holes.
    Hope this helps.
    Regards,
    Colin C.
    Applications Engineering
    Colin Christofferson
    Community Web Marketing
    Blog
    Attachments:
    ExampleHoleScript.scr (2 KB)
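
    The same steps sketched in Python/OpenCV rather than a Vision Assistant script (the cut-off for 'too large' particles is a placeholder, and the diameter here is derived from the area instead of the perimeter):

        import math
        import cv2

        def measure_holes(bgr, max_hole_area=5000):
            # 1. luminance-like grayscale plane, 2. threshold so bright regions remain,
            # 3. drop particles that are too large (the surrounding area),
            # 4. report centre of mass and an equivalent diameter for each hole.
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
            holes = []
            for i in range(1, n):
                area = stats[i, cv2.CC_STAT_AREA]
                if area > max_hole_area:
                    continue                                  # too large: not a hole
                cx, cy = centroids[i]
                diameter = 2.0 * math.sqrt(area / math.pi)    # equivalent circular diameter
                holes.append((cx, cy, diameter))
            return holes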

  • Image processing with LabVIEW Vision Assistant: a free NI Vision Assistant tutorial

    The attachment is more complete, because I cannot post this much text here.
    http://shixinhua.com/  (Shi Xinhua's homepage)
    http://shixinhua.com/imganalyse/2013/09/762.html  (full text)
    "A Practical Tutorial on Image Processing Based on Vision Assistant"
    Chapter 2: Interface and Menus
    Section 1: Startup welcome screen
    http://shixinhua.com/imganalyse/2013/01/278.html
    Section 2: Function interfaces (Acquire Images, Browse Images, Process Images)
    http://shixinhua.com/imganalyse/2013/01/280.html
    Section 3: File menu (Open Image, Open AVI File, Save Image, New Script, Open Script, Save Script, Save Script As, Acquire Image, Browse Images, Process Images, Print Image, Preferences, Exit)
    http://shixinhua.com/imganalyse/2013/01/283.html
    Section 4: Edit menu (Edit Step, Cut, Copy, Paste, Delete)
    http://shixinhua.com/imganalyse/2013/01/284.html
    Section 5: View menu (Zoom In, Zoom Out, Zoom 1:1, Zoom to Fit)
    http://shixinhua.com/imganalyse/2013/01/286.html
    Section 6: Image menu (Histogram, Line Profile, Measure, 3D View, Brightness, Set Coordinate System, Image Mask, Geometry, Image Buffer, Get Image, Image Calibration, Image Correction, Overlay, Run LabVIEW VI)
    http://shixinhua.com/imganalyse/2013/01/288.html
    Section 7: Color menu (Color Operators, Color Plane Extraction, Color Threshold, Color Classification, Color Segmentation, Color Matching, Color Location, Color Pattern Matching)
    http://shixinhua.com/imganalyse/2013/01/289.html
    Section 8: Grayscale menu (Lookup Table, Filters, Gray Morphology, Gray Morphological Reconstruction, FFT Filter, Threshold, Watershed Segmentation, Operators, Conversion, Quantify, Centroid, Detect Texture Defects)
    http://shixinhua.com/imganalyse/2013/01/291.html
    Section 9: Binary menu (Basic Morphology, Adv. Morphology, Binary Morphological Reconstruction, Particle Filter, Binary Image Inversion, Particle Analysis, Shape Matching, Circle Detection)
    http://shixinhua.com/imganalyse/2013/01/293.html
    Section 10: Machine Vision menu (Edge Detector, Find Straight Edge, Adv. Straight Edge, Find Circular Edge, Max Clamp, Clamp (Rake), Pattern Matching, Geometric Matching, Contour Analysis, Shape Detection, Golden Template Comparison, Caliper)
    http://shixinhua.com/imganalyse/2013/01/295.html
    Section 11: Identification menu (OCR/OCV, Particle Classification, Barcode Reader, 2D Barcode Reader)
    http://shixinhua.com/imganalyse/2013/01/296.html
    Section 12: Tools menu (Batch Processing, Performance Meter, View Measurements, Create LabVIEW VI, Create C Code, Create .NET Code, Activate Vision Assistant)
    http://shixinhua.com/imganalyse/2013/02/297.html
    Section 13: Help menu (Show Context Help, Online Help, Solution Wizard, Patents, About Vision Assistant)
    http://shixinhua.com/imganalyse/2013/02/298.html
    Chapter 3: Acquiring Images
    Section 1: Acquire Image
    http://shixinhua.com/imganalyse/2013/02/299.html
    Section 2: Acquire Image (1394, GigE, or USB), with Main and Attributes tabs
    http://shixinhua.com/imganalyse/2013/02/301.html
    Section 3: Acquire Image (Smart Camera)
    http://shixinhua.com/imganalyse/2013/02/302.html
    Section 4: Simulate Acquisition
    http://shixinhua.com/imganalyse/2013/02/304.html
    Chapter 4: Browsing Images

    Wetzer wrote:
    You can think of the purple "Image" Wire as a Pointer to the Buffer containing the actual Image data.
    That's what I thought too, but why would it behave differently during execution highlighting? (Sorry, I don't have IMAQ, just curious.)
    LabVIEW Champion. Do more with less code and in less time.

  • Looping particles in Motion 4 don't loop in 5

    This is a little hard to explain so bear with me.
    I've long been creating looping particle emitters in Motion 4 using the trick where you set a constant birth rate that suddenly drops to 0 at the final keyframe, duplicate the element, slide the duplicate so that its final keyframe is at frame 0, and extend the length of the element so that the dying particles of the duplicate create the overlap for the loop. In other words, the end of the original element is exactly the same as frame 0 of the copy. This works great as long as you don't add any randomness like gravity or wiggles.
    HOWEVER, this method doesn't work in Motion 5! If I bring in a project that I've gotten to loop nicely in 4, it's goofed up in 5. When the element is duplicated, their endings are exactly the same. As soon as I slide the copy backwards on the timeline, its ending changes. The emitter shouldn't be different depending on where it falls on the timeline, but in Motion 5 it is. I don't know how to fix this! Looping backgrounds are my bread and butter, so I may not be able to move to Motion 5. Before you suggest the FCP overlapping-movies trick, I've done that in the past and it's usually not as good-looking and fast as the method I've been using.
    Here's a link to the project in 4 so you can compare them in 4 and 5
    http://www.muellermultimedia.com/images/example.motn
    Any ideas on why 5 is different now?

    Hey Lauri, I've downloaded your project and I see what you're doing - very clever. And I have to say that it's the first project I've had that actually ran faster in Mo5 than Mo4. I was only getting 7fps in Mo4, but 30 in Mo5.
    What I did in Motion 5 to get it to loop properly is to extend the length of the project by 300 frames, slide everything downstream so the initial period doesn't take place before the project starts, and then set my play range to start 300 frames later.
    So it loops, but the working part is within the range of the timeline.
    Here's the modified example:
    http://howtocheatinmotion.com/viewing/example5.zip
    best,
    Patrick

  • In iTunes 10, I could type "Sinatra" in the search file, and would get a list of all tracks with "Sinatra" in any field.   In iTunes 11 I get these clever little windows, with nice arrows, but no lists to view.   What am I missing?


    Thanks for chipping in. I discovered something after trying what you suggested. I have quite a few collections of hits by year from Time Life and Billboard. I've eliminated duplicate tracks that appear in both collections (or other CDs, for that matter), but cross-referenced the CD where I deleted the track and placed it in the comments section of the CD track I retained. If I search by song name, only the remaining track appears. But if I want to hear, for example, Classic Rock 1964, only the tracks remaining would be there when I pull up that CD.
    So I typed "Classic Rock 1964" in the search field. First the boxes on the right of the screen open up showing album icons: four tracks by album with a button to view 10 more, then four songs with an option to view 18 more. I finally noticed that at the top of the boxes is a blue band that reads "Show Classic Rock 1964 in Music." When I double-click on this blue band, all 24 tracks from the original CD appear in the song-list format, even though I had deleted two of them because they appeared on a Beach Boys CD. On those tracks, I had referenced Classic Rock 1964 in the comments field.
    So, bottom line: Search will also look in the comments field if you click "filter by all" in the magnifying glass to the left of the search field. And you can move all tracks that it finds into a song list by double-clicking on the blue band.

  • Why is the grain filter so slow?

    The "Add Grain" filter performs so poorly that I'm wondering if it was written in the 90s and never touched again. On a very short composition with a few other adjustment layers, doing a RAM preview takes about 20 seconds. If I enable the adjustment layer that has the grain filter, each frame takes like 3 or 4 seconds and the whole composition takes about 3 minutes to render. This is on an i7 3930k with 32 GB of RAM which flies on anything else. Looking at the task manager, when it's rendering it doesn't even get to 10% CPU, so I'm guessing it's a very poorly coded filter, unless it's using only the GPU, but when anything uses the GPU I can feel its fan revving up, and it's not the case here. This is with CS6 by the way.

    How hard can it be to optimize the same filter to utilize all the CPU power, or at least most of it?
    It would take a complete rewriting of the underlying operation of AE. AE processes a pixel at a time and performs mathematical calculations on each pixel in the frame, then combines those calculations with each pixel in the frame of the next layer. It does this a frame at a time.
    If you have a 2K frame with 10 layers, then that's 20K of data that needs to be processed. If temporal effects are looking a frame ahead and a frame back, then you're stuck with 20K of data calculations a frame at a time, added to the additional 20K of data from the next or last frame. That's not very much work for 16 CPUs to do. It's why AE doesn't run at 100% when some 3D apps can. Totally different math threads.
    The third-party grain solutions that work better still have these limitations; they have just come up with faster, more efficient ways to make the same calculations.
    Asking AE to use 100% of the processor power and 100% of the memory is like asking a 10 inch water main to stay completely full when you've only got a gallon of water in the system and you can only access it through a 1/2 inch pipe.
    I'm not making excuses, just trying to explain how AE works. Can Adobe make it better? Sure; for the most part they have with every release that I've used in the last 20 years. If you have a good knowledge of how AE processes pixels, it will help you make workflow and production decisions that make your work more efficient and profitable. It's all part of being a pro.
    BTW, Flame, Inferno, all of the other big compositing apps have the same limitation. Only when rendering something more than pixel math does the system start to max out... Try AE with a particle system like Particular and 20,000,000 particles and you'll see CPU and Memory use go up.

  • CURRENT_DATE-1 in filter

    Hi All,
    I want to pull out 1 day old data from OBIEE.
    I can use filter SELECT CURRENT_DATE FROM "Transaction Data" and this gives me output for today.
    But when I use SELECT CURRENT_DATE-1 FROM "Transaction Data", this gives me an error.
    Any pointers on how I can achieve this?
    Regards

    Change the filter to SQL and add:
    SELECT TIMESTAMPADD(SQL_TSI_DAY, -1 ,CURRENT_DATE) FROM "Transaction Data"
    This should work fine. However, if you wanted to be really clever, you could create a new initialization block in the RPD, assign this statement to a variable called "YESTERDAY", and then your users would just need to remember the variable name rather than the SQL.
    Thanks,
    Chris R
