Defining Centroid with Vision AI in VB

I'm using NI Vision Builder for Automated Inspection.  Any centroid-related help that I've found references the centroid function in the development software.
Can anyone point me in the right direction for using NI Vision AI to write VB code to accomplish this?
I've already gotten the acquisition up and running. 
Thanks, 
Bryan 

It sounds like there is some confusion between Vision Builder AI and the Vision Development Module (where Centroid2 can be used). Vision Builder AI is a configurable environment for designing vision applications. It has a step called Detect Objects that lets you specify threshold parameters and particle-filtering criteria to remove unwanted objects, and it returns data for all of the resulting objects found (including the center of mass). You can control a Vision Builder AI engine running on your local machine or on a remote target using the Vision Builder AI API. We recently released C support for this API in Vision Builder AI 2011, which includes a .NET example for C# and VB. This would allow you to retrieve the results of the Detect Objects step, but the algorithm would run in the Vision Builder AI engine. Using this method typically means that all of your image processing and acquisition happens within Vision Builder AI, and you just use Visual Basic to control when the Vision Builder AI engine acquires an image and processes it; you couldn't control which algorithms it runs, since that can only be defined from within the Vision Builder AI environment when you create an inspection.
The Vision Development Module is a set of vision algorithms meant to be called from programming languages like LabVIEW, CVI, VB, MSVC, etc. It sounds like you are using this library in Visual Basic, and if that is the case, you don't need Vision Builder AI to do the Detect Objects step: going through Vision Builder AI adds several extra layers of software, whereas calling the Vision Development Module algorithms directly is much more direct. Is there a reason you want to use Vision Builder AI instead of the Vision Development Module algorithms included in the ActiveX control it sounds like you are using? To get help on the ActiveX Vision analysis library, go to Start >> Programs >> National Instruments >> Vision >> Documentation >> NI Vision and open cwimaq.chm.
If all you're doing is finding the center of mass, using the Vision Development Module is much more straightforward than creating a Vision Builder AI inspection, controlling it from Visual Basic, and pulling the center of mass out of the results of one of the inspection steps.
Hope this helps,
Brad
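
For reference, the "center of mass" that Centroid2 and the Detect Objects step report is essentially an intensity-weighted mean of the pixel coordinates (for a binary particle, every pixel gets equal weight). The VB sketch below shows only that calculation on a plain 2D array; it does not call the NI ActiveX objects, and every name in it is illustrative, so check cwimaq.chm for the actual Centroid2 signature.

Imports System

Module CentroidDemo
    ' Intensity-weighted center of mass of a grayscale image held in a 2D array.
    ' This illustrates the math only -- a real application would call the NI Vision
    ' function (e.g. Centroid2, documented in cwimaq.chm) on the acquired image.
    Sub Centroid(ByVal pixels(,) As Double, ByRef cx As Double, ByRef cy As Double)
        Dim total As Double = 0
        Dim sumX As Double = 0
        Dim sumY As Double = 0
        For y As Integer = 0 To pixels.GetUpperBound(0)
            For x As Integer = 0 To pixels.GetUpperBound(1)
                Dim v As Double = pixels(y, x)
                total += v
                sumX += v * x
                sumY += v * y
            Next
        Next
        If total > 0 Then
            cx = sumX / total
            cy = sumY / total
        End If
    End Sub

    Sub Main()
        ' Tiny 3x3 test image with a single bright pixel at (x=2, y=1).
        Dim img As Double(,) = New Double(,) {{0, 0, 0}, {0, 0, 255}, {0, 0, 0}}
        Dim cx, cy As Double
        Centroid(img, cx, cy)
        Console.WriteLine("Centroid: (" & cx & ", " & cy & ")")  ' prints (2, 1)
    End Sub
End Module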

Similar Messages

  • Problem with Vision Builder and LabView: Error -1074395995, File not found

    I have created an inspection with Vision Builder AI 2009 SP1. In it I load some pictures from the HDD and run some checks on them, and it works fine. Then I migrated it to LabVIEW 2010; I get a message that the migration is successful, but when I run the VI in LabVIEW I get a fail status. I have been checking my program and I found an error when I try to load the picture from file: Error -1074395995 occurred at IMAQ ReadImageAndVisionInfo
    Possible reason(s): IMAQ Vision:  File not found.
    But I am not sure which file it refers to. At first I thought it was the picture I wanted to load, but I have checked the path and it is correct, so maybe it refers to something else. What puzzles me most is that it works perfectly in Vision Builder and I have changed nothing in LabVIEW, so maybe it is a problem with the migration. Any clues?

    Here's a screenshot of what I'm talking about. Put a breakpoint here, see what the path is, and compare that with what Vision Builder AI is using (where to find this is described in the previous post). You could also find where this global is used to make debugging easier.
    Hope this helps,
    Brad
    Attachments:
    Generated Code.png ‏43 KB

  • Calling a DLL created with Vision 7.0 from Visual Basic doesn't work

    I created a DLL using some Vision 7.0 functions in the LabVIEW 7.0 environment. I call this DLL from Visual Basic. The DLL works fine as long as I don't stop the Visual Basic program execution. As soon as I stop the program execution, the DLL no longer works; I must reload the Visual Basic environment and then the call to the DLL starts working again. I created the exact same DLL with Vision 6.0 in the LabVIEW 6i environment, and it works great with no issues.
    Attachments:
    Test.vi ‏36 KB

    Roberto N. wrote:
    Thank you Jordan, I'm using Labview 7.1.
    Anyway, I've resolved the problem by adding the "lvanlys.dll" file (located at "..\Labview 7.1\resource\lvanlys.dll") as a support file in the build process. Now the DLL containing the analysis functions works correctly.
    Natalino Roberto
    OK, you probably got lucky, since lvanlys.dll seems to implement that function directly. However, most Advanced Analysis functions are just redirected by lvanlys.dll to the Intel Math Kernel Library that gets installed with LabVIEW 7.1 and higher. The only way to get that properly installed with your LabVIEW executable or DLL is to create a LabVIEW installer in the Application Builder and make sure to select, under "Installer Settings -> Advanced", the "LabVIEW Run-Time Engine" and the "Analyze VIs Support". Then use that installer to install your DLL on another computer.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
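
    For reference, calling a LabVIEW-built DLL from Visual Basic generally looks like the sketch below. The DLL name, the exported function, and its parameters are hypothetical placeholders; the real export names and the calling convention are whatever was defined in the LabVIEW DLL build specification, so check the header file LabVIEW generates alongside the DLL.

    Imports System
    Imports System.Runtime.InteropServices

    Module CallLabViewDll
        ' Hypothetical export -- replace the DLL name, function name, parameters,
        ' and calling convention with the ones from your LabVIEW build specification.
        <DllImport("VisionTest.dll", CallingConvention:=CallingConvention.Cdecl)> _
        Private Function MeasurePart(ByVal imagePath As String, ByRef score As Double) As Integer
        End Function

        Sub Main()
            Dim score As Double
            Dim status As Integer = MeasurePart("C:\images\part.png", score)
            Console.WriteLine("status=" & status & ", score=" & score)
        End Sub
    End Module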

  • Extract red image with Vision Builder Automated Inspection

    Hello
    I have 2 cameras connected to a CVS and I want to process the images so that only the red part of the images remains.
    For that I tried to use VBAI, but I didn't succeed. I think it should be in the Vision Assistant step, but I already tried "extract a color plane" and "threshold color", and when I test with an image it doesn't work.
    Does anybody know how to do that?
    Thank you for your help

    Hello
    I may have found a way to delete everything that is not red by subtracting green and blue with Vision Assistant; it seems to work.
    Now I am trying to extract only small parts of the picture, only the parts that are red. For that I created ROIs for some red parts of the images, and I want to record only these ROIs, to reduce the size of the file while keeping the same resolution.
    Does anybody know how to do that?
    I attach my Vision builder code to the message
    Thanks for your help
    Attachments:
    Color Example red-21-7-12h30.vbai ‏108 KB
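
    As a plain illustration of the arithmetic behind "subtract green and blue" (this is not the VBAI or Vision Assistant API; the names below are made up for the example), the per-pixel operation looks like this in VB:

    Imports System

    Module RedExtractDemo
        ' Keep only the "red" part of a pixel by subtracting the green and blue
        ' plane values from the red plane value, clamping at zero -- the same idea
        ' as chaining two subtract operators on the extracted color planes
        ' (on 8-bit planes the subtract step normally saturates at 0, but check
        ' the operator's settings).
        Function RedScore(ByVal r As Integer, ByVal g As Integer, ByVal b As Integer) As Integer
            Return Math.Max(0, r - g - b)
        End Function

        Sub Main()
            Console.WriteLine(RedScore(220, 30, 25))    ' 165 -> clearly red
            Console.WriteLine(RedScore(200, 190, 180))  ' 0   -> white/gray, not red
        End Sub
    End Module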

  • How to define  parameters with radio button

    HI
    How do I define parameters with a radio button, but with the radio button displayed in front of the variable name rather than after it? Under that radio button variable I have to define parameters, select-options, and some other radio button variables.
    I don't know how to paste figures here, otherwise I would provide a figure for more details.
    Regards.

    PARAMETERS : r1 RADIOBUTTON GROUP radi.
    Go to --> text elements --> selection text
    R1    <your text>
    You can change the program selection screen layout in the Screen Painter (SE51).
    The screen number for your selection screen is 1000.
    Regards,
    Santosh reddy
    Edited by: Santosh Reddy on Dec 9, 2008 11:21 AM

  • How to Insert a User Defined Column with CheckBox In ADF Swings

    I have created a JTable by dragging a view object from the Data Control Palette. But apart from the columns listed in the JTable, I need to insert a user-defined column with a checkbox selection option in each row.
    Can anyone help me sort out the problem?
    Regards
    Bhanu Prakash
    Message was edited by:
    user579125

    I retrieve 10 columns from the database and build the JTable. But apart from the 10 columns displayed, I need to display one more (user-defined) column at run time, with a checkbox option for each row, because on selection of the corresponding checkbox I need to do some mathematical calculation. Also, how do I know whether the checkbox is selected or not?
    Regards
    Bhanu Prakash

  • Must I also log fault with Vision if Internet is d...

    Just a quick question: if our internet is down again, should I log a fault with Vision as well?
    The internet has now gone down 3 times since the 24th of July, each time being off for a considerable amount of time. After the outage today I've been looking at the T&Cs for Vision and noticed that they state they refund if loss of service affects VOD via the Vision box, which it currently does, and that you have 30 days to claim.

    Just an update - I had a call about this and got multiple different answers, one minute getting a refund (an email even states it will be passed to CS for a refund), the next minute I'm not. As I hadn't logged it at the time, they had no record of the fault, yet their T&Cs state you can report an issue within 30 days, which I did.
    Surely when someone reports a fault with their internet, and they have other BT services that rely on it, it would be marked on the system.
    So if anyone has BT Vision and your internet is down for a period of time, thus affecting your service, report both issues to BT, as it seems one department doesn't talk to the other.

  • User-defined rules with SPIN (SPARQL CONSTRUCT)

    Hi,
    We are looking at SPIN as an alternative for defining and executing user-defined rules. It is very expressive and in that respect looks superior to Jena, SWRL, and Oracle-style user-defined rules with their IF (filter) -> THEN type of syntax. Although SPIN is TopQuadrant's, it is entirely SPARQL, and Oracle supports SPARQL CONSTRUCT via the Jena Adapter. TopBraid Composer provides excellent tool support and a rule editor for SPIN rules as well.
    There is no problem executing SPIN rules via the Jena Adapter, and I believe even via TopQuadrant's SPIN API, which is the API of TopQuadrant's SPARQL-based inference engine.
    My question is whether Oracle has looked into supporting SPARQL CONSTRUCT based user-defined rules in its native inference engine.
    Do you have a recommendation for how to use SPIN based user rules in combination with Oracle inference today?
    Thanks
    Jürgen

    Hi Jürgen,
    We are actually looking into a general mechanism that allows users to plug in their own queries/logic during inference. Thanks very much for bringing SPIN up. This general mechanism is very likely going to cover CONSTRUCT queries.
    To extend the existing inference engine using the existing Jena Adapter release, one possible way is as follows.
    1) Assume you have an ontology (model) A.
    2) Create an empty model B.
    3) run performInference on both A and B using OWLPrime.
    4) run SPARQL CONSTRUCT queries against A, B and inferred data
    5) store the query results (in the form of Jena models) back into model B.
    6) If the size of model B does not change, then stop. Otherwise, repeat 3)
    Note that model B is created to separate your original asserted data from inferred data.
    If you don't need such a separation, then don't create it.
    Thanks,
    Zhe Wu

  • No Image - 1473R With Vision Assistant

    Hopefully this is a simple fix and I'm missing something very obvious, so here is what's up. I'm originally a text programmer, but for this project I'm stuck using LabVIEW, which is completely unfamiliar; I've been working on this for days with no progress, so I thought I'd see if anyone had some pointers. The goal of the project is to use the PCIe-1473R FPGA to do live gain control, overlay, and maybe some recognition.
    I started with the "Image Processing with Vision Assistant on FPGA" example and made a few simple changes to just attempt to get a video feed through. The camera we are using is a Pulnix TM 1325 CL, which outputs a 1 tap/10 bit Camera Link signal. Since this example VI is originally configured for 1 tap/8 bit, I changed the incoming pixel to be read as 1 tap/10 bit, then compiled and tested. When I try to start video acquisition I get no errors, but no frames are grabbed. The acquired-frame count does not increase and nothing is displayed. If I switch to line scan I get a scanning static image, but this is not a line scan camera, and my other NI frame grabber card shows an image from the camera fine.
    I wasn't all that surprised by this result, as the input is 10 bit and the acquisition FIFO and DMA FIFO are both 8 bit originally. So I changed them to U16 and also changed the IMAQ FPGA FIFO to Pixel Bus and IMAQ FPGA Pixel Bus to FIFO blocks on either side of the Vision Assistant to U16. With this configuration I again get no image at all; same results. I suspect this is because the incoming image is signed, so the types should be I16 instead. However, there is no setting for I16 on the Pixel Bus conversion methods. Am I not understanding the types involved here, or is there an alternative method for using Vision Assistant with signed images? I'd think it'd be odd not to have support for signed input.
    Anyway, I've tried all the different combos of settings I can think of. Does anyone have any input? I feel like it must be either a buffer size problem or a signing problem, but I don't really know. Any and all input is welcome!
    Thanks for helping out a new guy,
    Kidron

    I ended up resolving this issue by switching cameras. The end goal was to use a FLIR SC6100, so I switched to it and was able to get things working shortly afterwards. The FLIR does use unsigned ints, so I believe that's what resolved the issue, for anyone running into this in the future.

  • How to raise a self-defined event with parameters in program

    Hi,all,
    I want to raise a self-defined event with parameters in one of my methods. How do I do that? Thanks.

    Hi Ray,
    If your event name is MYEVENT and the parameter name is MYPARA and the value you want to pass to this parameter is L_PARAVALUE, use the following -
    wd_this->fire_myevent_evt( mypara = l_paravalue ).
    If the event is to be raised in a view, then you need to get the component controller instance first and use it instead of wd_this.
    Also, this code can be generated directly from the WD Code Wizard: choose the radio button for raising an event and select your event.
    Regards,
    Neha
    PS: Reward if helpful

  • Designing Software-Defined Storage with Windows Server – we've got a calculator

    Designing Software-Defined Storage with Windows Server – we've got a calculator and a doc for that. You can use this guide and the Software-Defined Storage Design Calculator spreadsheet to design a storage solution that uses the Storage Spaces and Scale-Out File Server functionality of Windows Server 2012 R2 along with cost-effective servers and shared serial-attached SCSI (SAS) storage enclosures. Storage Spaces is a software-defined storage technology that enables you to virtualize storage by grouping SSDs and hard disks into storage pools and then creating high-performance and resilient virtual disks, called storage spaces, from available capacity in the pools. You can then place Cluster Shared Volumes (CSVs) and file shares on these virtual disks, which in turn host data for your workloads. Developed in response to direct customer...
    This topic first appeared in the Spiceworks Community

    Hello Marc,
    Do you mean that you want everyone to have their own personal share folder? And you need to limit the folder size?
    I think you can use the FTP server function in Windows Server.
    About FTP, it is recommended to ask in the Windows Server forum; the professionals there will be glad to help you.
    https://social.technet.microsoft.com/Forums/en-US/home?forum=winserverfiles
    Thanks for your understanding.
    Best regards,
    Fangzhou CHEN
    TechNet Community Support

  • Must load a Matrix or array with 90 individual 16 bit numbers, then do matches with Vision inspection result.

    What is the best way to load 90 16 bit numbers into an array or matrix, and then compare those 90 values to a Vision system result?
    The operator manually enters the value into a display, assigns the value to a memory location (Bin #) and clicks "Enter".
    Each of the 90 locations will have a stored value in it.
    A Vision system inspection will transmit a result that matches one of the 90 entries, so we then have to try to match the value and determine which bin to sort the result to.
    We are learning a lot about LabView, but not fast enough.
    Thank you in advance for your help.
    Sincerely,
    Rich

    Rich
    It seems like you just want to create a histogram of the image. There is an IMAQ Histogram VI (Vision and Motion >> Image Processing >> Analysis). It allows you to define the max and min values along with the number of classes (or bins). I believe this may accomplish what you are looking to do. As a note, there is also an IMAQ Histograph VI in the same palette.
    Cameron T
    Applications Engineer
    National Instruments
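
    For the matching step Rich describes (deciding which of the 90 stored values a Vision result corresponds to), the logic itself is just an array search. It is sketched below in VB rather than LabVIEW purely to show the logic in text form; the names and the tolerance are illustrative, not from the original post.

    Imports System

    Module BinMatchDemo
        ' Return the index of the first stored bin value that matches the Vision
        ' result within a tolerance, or -1 if nothing matches.
        Function FindBin(ByVal binValues() As UShort, ByVal result As UShort, ByVal tolerance As Integer) As Integer
            For i As Integer = 0 To binValues.Length - 1
                If Math.Abs(CInt(binValues(i)) - CInt(result)) <= tolerance Then
                    Return i
                End If
            Next
            Return -1
        End Function

        Sub Main()
            Dim bins(89) As UShort                  ' the 90 operator-entered 16-bit values
            For i As Integer = 0 To 89
                bins(i) = CUShort(1000 + i * 10)    ' placeholder values
            Next
            Console.WriteLine(FindBin(bins, 1342, 2))  ' prints 34
        End Sub
    End Module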

  • How to define namespace with one XSD using in two different DB-Locations ?

    I'm not clear on how to define namespace, targetNamespace, schemaLocation or noNamespaceSchemaLocation
    when using one common schema.xsd definition
    in two different database locations for exchanging XML documents.
    When I insert the XML document I get error ORA-30937.
    Do you have a good example?
    thanks
    Norbert
    schema :
    <xs:schema
    xmlns="http://sourcehost.com/namespace"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:oraxdb="http://xmlns.oracle.com/xdb"
    targetNamespace="http://sourcehost.com/namespace"
    xml-document :
    xmlns="http://Sourcehost.com/namespace"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:SchemaLocation="http://sourcehost.com/namespace http://desthost:8080/sys/schemas/PUBLIC/ADRESSEN.XSD">
    SQL> insert into ADRESSEN values
    (XMLType
    (bfilename ( 'XMLIMPORT_DIR', 'adressen.xml')
    , 0 ));
    --> error:
    ORA-30937: No schema definition for "SchemaLocation" (namespace "http://www.w3.org/2001/XMLSchema-instance") in parent node "/ADRESSEN"

    Norbert
    The schema location used to register the XML schema with XML DB does not need to be a valid URL for accessing the XML schema.
    I can use the same URL, for instance
    http://xmlns.example.scom/xsd/myXMLSchema.xsd
    on machines dbserver1 and dbserver2. The only reason some people choose to use the hostname / port number convention is that it makes it possible to validate the XML instance documents against the copy of the XML schema stored in the XML DB repository using an external tool like XMLSpy without having to add entries to XMLSpy's OASIS catalog files.
    I am concerned about the message you are getting. That's not correct AFAICT, but I'd need to see the complete XML schema and instance document to be sure.

  • More than one TV with Vision box using house ring ...

    I wonder if anyone can help
    I have a BTvision box which is connected to one TV using HDMI and want to be able to watch the BTvision output on a different TV from time to time.
    I have tried streaming to another room using a sender but the signal breaks up/ is lost in the wall density.
    Following advice from BT Vision operator I have purchased a Powerline Netricity 85M twin pack adapter to link the proposed system via the ring main but the connection cables supplied are Ethernet. The only connection on these units is for Ethernet.
    The BTvision box has an available USB port. The Hitachi Plasma TV to which I wish to stream the BTvision has an available Scart terminal and HDMI terminal.
    Will the system work if I connect the BTvision box to the Powerline Netricity adapter using a USB to Ethernet cable (item 1) and the Hitachi TV to the second Powerline Netricity adapter using an Ethernet to Scart cable (item 2) or Ethernet to HDMI cable (item 3)?
    If so, who can supply items 1 and 2, or items 1 and 3, as appropriate?
    Or does anyone know of a different way to connect up the system?

    sgbuckingham wrote:
    Yes I live in digital only area.
    So for the avoidance of doubt I need
    1. to buy a universal modulator
    2. connect scart output from BLACK BTVision box to the modulator
    3. connect aerial output from BLACK BTVision box to the modulator
    4. connect, using coaxial cable, the modulator to a splitter (receiving both BT Vision output via modulator and digital coaxial aerial input)*
    The modulator will pass all signals, so there is no need for a splitter; it has a loop-through connection.
    see http://www.maplin.co.uk/programmable-universal-modulator-33050
    5. connect splitter to the second TV coaxial aerial input socket using coaxial cable*
    *If 4 and 5 are not possible, would I have to connect using SCART? And if so, I presume I could just hardwire by SCART direct from the black BTVision box to the second TV using a 15m SCART cable, negating the need for a modulator?
    15m of SCART is probably going to be better than using a modulator, which will degrade the signal a bit as it has to go via the analogue part of the TV; however, I have never tried a SCART lead that long.
    There are some useful help pages here, for BT Broadband customers only, on my personal website.
    BT Broadband customers - help with broadband, WiFi, networking, e-mail and phones.

  • Is there an App or a way to allow audio for those with vision and hearing disabilities?

    My child has disabilities with both vision and hearing.  Is there a way or an App that reads iBooks aloud to her?

    Do you know about 'VoiceOver' on the iPad? I don't like it, but it may help.
    http://support.apple.com/kb/HT4064
