Vision Assistant Code Generator to maintain Calibration File Reference

In Vision Assistant 2014 SP1, the "Create LabVIEW VI" function does not include a calibration file path in the code it generates for the Image Calibration step. Instead, it reads in all of the calibration information and turns it into constants (or controls, if that option is selected in the wizard) in the generated code. It would be good if the generated code maintained a reference to the calibration template.

There is no reference to the calibration file, and the generated code calibrates "on the fly", when the calibration method is Point Distance Calibration (Simple Calibration): no learning is associated with this method, so calibrating the image directly is essentially instantaneous.
If you choose any of the other calibration methods, the generated code points to the saved calibration file and applies the calibration from that template file to the current image without relearning it, for performance reasons.
This was a design decision, and unfortunately there is no workaround at this time for the simple calibration case.
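
For the non-simple methods, the behavior described above amounts to loading the calibration stored in the template file and attaching it to the image being processed, rather than inlining calibration constants. The following sketch shows that idea using the NI Vision C API (the same library targeted by "Create C Code"); the file names are placeholders and the signatures are quoted from memory, so check them against nivision.h before relying on them.

#include <nivision.h>   /* NI Vision C API (Vision Development Module) */

/* Sketch: apply the calibration stored in a saved template file to a
   newly acquired image, instead of hard-coding calibration constants.
   "calibration_template.png" and "part.png" are placeholder names. */
int applyTemplateCalibration(void)
{
    Image *calTemplate = imaqCreateImage(IMAQ_IMAGE_U8, 0);
    Image *image       = imaqCreateImage(IMAQ_IMAGE_U8, 0);

    /* Template saved with its calibration information embedded. */
    imaqReadVisionFile(calTemplate, "calibration_template.png", NULL, NULL);

    /* Image to inspect. */
    imaqReadFile(image, "part.png", NULL, NULL);

    /* Attach the learned calibration to the inspection image without
       relearning it; real-world measurements then use the template's
       calibration. */
    imaqCopyCalibrationInfo(image, calTemplate);

    /* ... run the rest of the generated inspection on "image" ... */

    imaqDispose(calTemplate);
    imaqDispose(image);
    return 0;
}

Keeping a file reference like this for the simple calibration case as well would let users update the calibration without regenerating the code.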

Similar Messages

  • Problem with advanced threshold in OCR - Vision Assistant 2013

    I'm facing a problem with Vision Assistant 2013.
    In the OCR character set file, the advanced threshold data is always fixed at an upper value of 255 and "optimize for speed" is checked.
    I edit them and reopen the file, but nothing changes.
    Is anyone facing the same problem?
    Attachments:
    Untitled.png ‏7 KB

    Hi Paolo,
    Thanks for your answer. Yes, I have seen the examples and I'm familiar with the use of the OCR VI. I have used it a couple of times already with good results.
    My problem came last week, while I was trying to run OCR on my image in my LabVIEW code. The algorithm did not detect what I was expecting.
    I decided to run the test in Vision Assistant and it worked perfectly. I assumed my code had different parameters, so I decided to generate a VI from Vision Assistant and run it in LabVIEW on the same image to verify.
    I did not change anything in the VI (all parameters are the same) and used the same image. Surprisingly, the results are different between the assistant and the VI. That strikes me a lot!
    I'll start a new thread as you recommended. Hope to find the solution soon. Thanks
    Regards,
    Esteban

  • NI Vision Builder vs. LabVIEW RT + NI Vision Assistant

    Hello
    I'm designing a vision application which should be able to run stand-alone (EVS-1463RT). The app will have up to roughly 30 inspection states (read a gauge value, check for the presence of specific objects, etc.). The other requirements are the ability to communicate with non-NI devices via TCP/IP and to log pictures to FTP.
    Now I'm considering two possible solutions:
           Create an inspection with NI Vision Builder AI
           Create a LabVIEW RT app and build the inspection states using NI Vision Assistant
    I have to say that the first solution is not my favorite, because I tried to implement the TCP/IP communication in NI Vision Builder and it worked, but use of the server is limited. On the other hand, building the inspection is "easy".
    The second solution has the following advantages for me: better control of the app, perhaps better possibilities to optimize the code, and implementation of my own TCP/IP server. My biggest concern is using NI Vision Assistant to generate the inspection states.
    In conclusion, I have to say that I'm not experienced in vision applications, but LV RT is no problem for me.
    Thanks for any opinions
    Jan

    Hi Jan,
    > I have to say that the first solution is not my favorite, because I tried to implement the TCP/IP communication in NI Vision Builder and it worked, but use of the server is limited.
    Could you give more feedback on this point? What do you mean by "use of the server is limited"? More precise feedback and suggestions would help us improve the functionality.
    What I would recommend you look into is the VBAI API. You can find examples in <Program Files>\National Instruments\<Vision Builder AI>\API Examples\LabVIEW Examples
    This feature allows you to run a VBAI inspection within your LabVIEW application and retrieve results that you can send using a TCP implementation of your own in LabVIEW, without having to use the VBAI TCP functionality.
    You retain the configuration features of the Vision part of the application, and can add extra code in LabVIEW.
    The API functions basically allow you to open a VBAI inspection file, run it synchronously or asynchronously, and retrieve images and results.
    As you mentioned, the other solution is to implement your state diagram in LabVIEW and use the Vision Assistant Express VI in the different states. What VBAI gives you that Vision Assistant doesn't are the pass/fail limits for each step (and the state diagram).
    Best regards,
    Christophe
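
    For reference, the call sequence described above (open a .vbai inspection file, run it synchronously or asynchronously, retrieve results, close it) can be pictured roughly as follows. The functions below are hypothetical placeholders, not a real NI API: the actual VBAI API ships as LabVIEW VIs (and a .NET assembly) under the API Examples folder mentioned above, so this sketch only illustrates the sequence you would wire up in LabVIEW.

    #include <stdio.h>

    /* Hypothetical stand-ins for the VBAI API steps (open / run / get
       results / close). Nothing here is a real NI function; replace
       each one with the corresponding VI from the API Examples. */
    typedef struct { const char *path; } VbaiInspection;

    static VbaiInspection *vbai_open(const char *file)
    {
        static VbaiInspection insp;
        insp.path = file;
        return &insp;
    }
    static void vbai_run(VbaiInspection *insp, int synchronous)
    {
        printf("run %s (%s)\n", insp->path, synchronous ? "sync" : "async");
    }
    static void vbai_get_result(VbaiInspection *insp, const char *step, double *value)
    {
        (void)insp; (void)step;
        *value = 0.0;   /* real code would return the step's measurement */
    }
    static void vbai_close(VbaiInspection *insp) { (void)insp; }

    int main(void)
    {
        /* "inspection.vbai" and "Read Gauge" are placeholder names. */
        VbaiInspection *insp = vbai_open("inspection.vbai");
        double gaugeValue;

        vbai_run(insp, 1);                                /* run once, synchronously */
        vbai_get_result(insp, "Read Gauge", &gaugeValue);
        /* ...forward gaugeValue over your own TCP implementation... */
        vbai_close(insp);
        return 0;
    }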

  • Why might the results of OCR in Vision Assistant and OCR in a VI generated from Vision Assistant differ?

    Hi everybody,
    I had a problem while I was trying to run OCR on my image in my LabVIEW code. The algorithm did not detect what I was expecting in the image.
    I decided to run the test in Vision Assistant and it worked perfectly: I got the expected results.
    I then assumed that my code had different parameters, so I decided to generate a VI from Vision Assistant and run it in LabVIEW on the same image, just to verify. I did not change anything in the VI (all parameters are the same), and I used the same image. Surprisingly, the results are different between the assistant and the VI. That strikes me a lot! I have checked all possible configurable parameters and they have the same values as in Vision Assistant. Does anybody have an idea what could be the reason for this behaviour? Thanks
    Regards,
    Esteban

    Esteban,
    do you use the same images for OCR? Or do you take a new snapshot before running the OCR?
    What steps of debugging did you use in LV?
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Why might the results of OCR in Vision Assistant and a VI generated from Vision Assistant differ?

    Hi everybody,
    I had a problem while I was trying to run OCR on my image in my LabVIEW code. The algorithm did not detect what I was expecting in the image.
    I decided to run the test in Vision Assistant and it worked perfectly: I got the expected results.
    I then assumed that my code had different parameters, so I decided to generate a VI from Vision Assistant and run it in LabVIEW on the same image, just to verify. I did not change anything in the VI (all parameters are the same), and I used the same image. Surprisingly, the results are different between the assistant and the VI. That strikes me a lot! I have checked all possible configurable parameters and they have the same values as in Vision Assistant. Does anybody have an idea what could be the reason for this behaviour? Thanks
    Regards,
    Esteban

    Hi Peter, 
    Strange. It runs here. I attached the screenshot you requested.
    Regards,
    Esteban
    Attachments:
    Capture.png ‏135 KB

  • How do I edit the file created by the code generator?

    hi all,
    I want to modify a Java file that is generated when you create the Web Dynpro, but I cannot change it.
    // This file has been generated partially by the Web Dynpro Code Generator.
    // MODIFY CODE ONLY IN SECTIONS ENCLOSED BY @@begin AND @@end.
    // ALL OTHER CHANGES WILL BE LOST IF THE FILE IS REGENERATED.
    How can I make the file modifiable?
    thanks!

    Hi Pablo.
    There's no way to modify it freely: NW2004s and its IDE compile the code using internal scripts based on the Eclipse plugin architecture.
    You must take this into account in your code.
    1 - In your declared methods, import directives and package declarations, look for markers like this:
    // This file has been generated partially by the Web Dynpro Code Generator.
    // MODIFY CODE ONLY IN SECTIONS ENCLOSED BY @@begin AND @@end.
    // ALL OTHER CHANGES WILL BE LOST IF THE FILE IS REGENERATED.
    package com.sap.teste;
    // IMPORTANT NOTE:
    // ALL IMPORT STATEMENTS MUST BE PLACED IN THE FOLLOWING SECTION ENCLOSED
    // BY @@begin imports AND @@end. FURTHERMORE, THIS SECTION MUST ALWAYS CONTAIN
    // AT LEAST ONE IMPORT STATEMENT (E.G. THAT FOR IPrivateTesteApp).
    // OTHERWISE, USING THE ECLIPSE FUNCTION "Organize Imports" FOLLOWED BY
    // A WEB DYNPRO CODE GENERATION (E.G. PROJECT BUILD) WILL RESULT IN THE LOSS
    // OF IMPORT STATEMENTS.
    //@@begin imports
    import com.sap.teste.wdp.IPrivateTesteApp;
    import com.sap.myimport.MyClass;
    //@@end
    //@@begin
    //@@end
    This means that between @@begin and @@end you can add whatever you want, and that code will not be removed when the class is regenerated and compiled.
      //@@begin javadoc:wdDoExit()
      /** Hook method called to clean up controller. */
      //@@end
      public void wdDoExit()
        //@@begin wdDoExit()
        //@@end
    The same applies in this case: if you implement something between //@@begin and //@@end in the method scope, that part of your code is guaranteed to be safe.

  • Where are the xml files for code generator of MSAHH5?

    I'm generating source files for MSAHH5. In the source code I can find XML files for generating all entities and the DAL for those entities, but I can't find the files for MSATableConstants.xsl and MSAPersistanceManager.xsl anywhere, and without them those classes are not generated. I tried every XML file in the code-generator folder without luck. Which files should I use?
    Thanks in advance.

    Hi,
    I will answer your question in two parts.
    1. Which files are used to generate what?
    For entities:            Entity.xslt + Entity.xml
    For MSATableConstants:   MSATableConstants.xslt + Entity.xml
    For MSAPackageManager:   MSAPackageManager.xslt + Entity.xml
    For DAL generation:      DAL.xslt + DAL.xml
    2. Where will these be generated?
    It depends on the project path that you give. In the code generator tool, when you make a new project, give the path of your project in Eclipse.
    Otherwise, if you give the path as ./, that means the current folder from which you are running classgenerator.jar,
    and you have to manually create folders such as
    C:\classgenerator\com\sap\crm\handheld\db\dataaccess for the DAL classes, and similarly for entities, etc.
    Hope this is clear.
    anubhav

  • How to determine the code page of a txt file generated through UTL_FILE

    Hi
    I want to know the code page of the txt file that I have generated in Oracle using UTL_FILE.
    E.g., OUT_FILE := UTL_FILE.FOPEN(DIRNAME,vFILENAME,'W',2000);
    How can I find this out? I have read on the internet that if we want to know the code page of a file, we can open it in a browser such as IE and check the encoding from there. Is this the correct way to determine the encoding?
    Also, I want to know whether the file's code page depends on the OS or on the source from which the file is generated.

    "The Oracle character set is synonymous with the Windows code page. Basically, they mean the same thing."
    Source:http://www.databasejournal.com/features/oracle/article.php/3493691/The-Globalization-of-Language-in-Oracle---The-NLSLANG-variable.htm
    That means that what Oracle calls a character set corresponds to what Windows calls a code page.
    "Determine the code page of your Oracle database ( = select parameter, value from nls_database_parameters and look for parameter NLS_CHARACTERSET)."
    Source:http://media.datadirect.com/download/docs/slnk/admin/intlangsupp.html
    This was already said by Pierre above.
    HTH
    Girish Sharma

  • Integrating the code generated by NI Vision Assistant into my program written in LabWindows/CVI

    Hello everyone,
    As part of a project I have to do computer vision using the National Instruments (NI) software "Vision Assistant". The assistant functions I want to use are simply "Histogram", "Threshold" and "Circle Detection".
    My problem is that I would like to truly integrate the live image from my camera, the selection tools (buttons) on the image, and the results obtained after processing (which the Assistant provides) into my already existing program (on the program's UIR)!
    I do not understand (I have read the "Vision Assistant to CVI" tutorial without success) how to use the code generated by the assistant in CVI to display what I need (the image from my camera and the assistant's selection buttons) in my already written program.
    Moreover, if I compile the code the assistant generates for me directly (on its own), I get errors related to the <nimachinevision.h> library: "redeclaration of variables...", a header I therefore have to remove from the generated code for it to compile correctly!
    If any of you have already been through this, your help would be very valuable!
    Thank you very much!

    Hello Pooty,
    Could you tell us which tutorial you followed?
    Thanks in advance.
    Mathieu_T
    Certified LabVIEW Developer
    Certified TestStand Developer
    National Instruments France
    LabVIEW Tour
    Technical days in 10 cities across France, 4 to 20 November 2014

  • USB camera "could not be initialized properly" in Vision Assistant, error code unknown

    Hi All,
    I was working on a project with LabVIEW 2011, VDM 2014 (Vision Development Module), and VAS 2014 August (Vision Acquisition Software).
    I finished my project fine, including the use of Vision Assistant!
    But when the temporary licenses for VDM and VAS expired and I wanted to activate the purchased licenses for both of them, it turned out I had to use VAS 2014 February, not August!
    So I uninstalled the six drivers included with VAS 2014 August (IMAQ and IMAQ Runtime, IMAQ I/O and IMAQ I/O Runtime, IMAQdx and IMAQdx Runtime), then installed VAS 2014 February.
    But Vision Assistant stopped working!
    As shown in the attached files, when I try to acquire an image from there, I get "The specified acquisition device could not be initialized properly"...
    and if I try to press grab I get the same!
    I tried a lot of things, such as removing the VAS 2014 August drivers, then installing even the 2015 version, then uninstalling 2015 and installing 2014 February again...
    Also, in NI MAX I can find the camera and grab and snap work, but not in Vision Assistant!
    What can you suggest as a solution?
    Thanks.
    Attachments:
    USB Camera 1.jpg ‏140 KB
    usb camera 2.png ‏167 KB

    For the past several years, LabVIEW was released in August, with a Service Pack (SP1) released in February.  Thus LabVIEW 2014 was released in August 2014, LabVIEW 2013 was released in August 2013, and LabVIEW 2013 SP1 was released in February 2014.
    You cannot depend on the year to know the LabVIEW Version!!
    Also, LabVIEW is not really backward-compatible.  If you are doing serious Development work, you want LabVIEW and all of the relevant Toolkits and Modules to be the same Release (e.g. 2011, 2013, or 2014 -- don't "mix and match").  If you have NI Update installed, it should help you "migrate" to Service Pack 1, if necessary.
    BS

  • So I want to use an imaging device with Vision Assistant and then incorporate that into my LabVIEW 8.6 code. How can I do this?

    So here's what I have written so far: I have a LabVIEW program to interface with my microscope stage and autofocus. I now want to interface the camera using Vision Assistant.
    The procedure I want the program to perform is to take these CR-39 detectors, place them on the stage, and have the camera image the detector frame by frame. So I want the program to go to the very top corner of the detector on the stage. Then the camera would autofocus (if needed) and take an image. It would take an image for every micron of the detector on that row, then move to the next row, and so on, until it has successfully imaged the whole detector. Is anyone well versed in Vision Assistant and willing to give some advice on this? It would be greatly appreciated.
    The purpose is to eliminate the previous method of acquiring images by manually focusing on the tracks created by the particles. I want this program to run by itself without a user being present, so pressing a single button would send the program into autopilot and it would perform the procedure.
    What are CR-39 detectors?
    Allyl Diglycol Carbonate (ADC) is a plastic polymer. When ionizing radiation passes through the CR-39 it breaks the chemical bonds of the polymer, creating a latent damage trail along the trajectory of the particles. These trails are then etched with a chemical solution to make the tracks larger, so they can finally be seen with an optical microscope.
    Thank you,
    HumanCondition

    Hi Justin,
    Yes I already have a vi that controls the stage.
    So the camera is stationary above the stage. What I want to do now is have the camera take pictures of the detector by moving the stage frame by frame: for example, starting at the top right corner of the detector (taking an image), ending at the top left corner of the detector (taking an image), then having the stage move down a row and starting the process over again until each frame (or piece) has been imaged.
    My goal is that while the stage is moving to each little frame of the detector, it autofocuses if necessary and takes one image for each frame of the detector.
    Using the autofocus requires moving the stage up and down.
    HumanCondition
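
    The raster scan described above (step across a row, snap an image at each frame, then move down a row and repeat) comes down to two nested loops. The sketch below only illustrates that control flow; moveStageTo, autofocusIfNeeded and snapImage are hypothetical stand-ins for whatever stage-control and acquisition calls the existing VIs or drivers provide, and the dimensions are example values.

    #include <stdio.h>

    /* Hypothetical hooks for the existing stage-control and camera code
       (not a real NI API); replace these with the real calls. */
    static void moveStageTo(double x_um, double y_um) { printf("move to (%.1f, %.1f) um\n", x_um, y_um); }
    static void autofocusIfNeeded(void)               { /* run autofocus here if required */ }
    static void snapImage(const char *fileName)       { printf("snap -> %s\n", fileName); }

    /* Raster-scan a detector of width_um x height_um, imaging one
       step_um x step_um frame at a time, row by row. */
    static void scanDetector(double width_um, double height_um, double step_um)
    {
        int row = 0;
        for (double y = 0.0; y <= height_um; y += step_um, ++row) {
            int col = 0;
            for (double x = 0.0; x <= width_um; x += step_um, ++col) {
                char name[64];
                moveStageTo(x, y);
                autofocusIfNeeded();
                snprintf(name, sizeof(name), "frame_r%03d_c%03d.png", row, col);
                snapImage(name);
            }
        }
    }

    int main(void)
    {
        scanDetector(1000.0, 500.0, 100.0);   /* example: 1 mm x 0.5 mm detector, 100 um steps */
        return 0;
    }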

  • NI Vision Assistant - "Create LabVIEW VI" produces a blank VI

    Hello, I'm using:
    - LabVIEW 2011 SP1 v11.0.1 32bit
    - Vision Assistant 2011 SP1 Build 20111212112331
    When I create a simple script in Vision Assistant and click on "Create LabVIEW VI", the VI file is created, but when I open it, I can only see a blank VI.
    I really hope someone could help me, I don't know how to solve this problem.
    If you need more information, I'll provide it.
    Thank you!

    Hi Claudio, I added the following Express VIs to a blank block diagram, as you suggested:
    "Prompt User for Input"
    "Vision Acquisition"
    No error messages appeared after placing them in the block diagram, or after configuring them.
    However, if I place a "Vision Assistant" Express VI, I still receive the following error message after configuring it:
    "There was an error in the process of creating the Express VI's code"
    Moreover, I tried to repair the following modules, in sequence:
    NI Vision Common Resources 2011 SP1 f1
    NI Vision Assistant 2011 SP1
    NI Vision 2011 SP1
    For each of the three repairs, I had to reboot the PC. However, the problem hasn't disappeared; I'm still unable to generate a VI using Vision Assistant.
    I really hope to solve this annoying problem, because the day of my exam is getting closer... so suggestions are more than appreciated; at this point I don't know how I could fix it.
    In the attachment you can find a simple Vision Assistant script and the blank VI produced. Please note that Vision Assistant always produces a blank VI, regardless of the script, as I stated earlier.
    A final note: I tried to generate C and .NET code for the attached script with Vision Assistant, and that seemed to work (no error messages or blank C/.NET files produced), so this is a problem related to LabVIEW code generation only.
    Attachments:
    Simple_script_but_still_not_able_to_produce_a_VI.vascr ‏2 KB
    Blank_VI_produced_by_Vision_Assistant.vi ‏5 KB

  • Image processing with LabVIEW Vision Assistant: NI Vision Assistant tutorial, free to read

    The attachment is more complete, because I could not post that much text here.
    http://shixinhua.com/  Shi Xinhua's homepage
    http://shixinhua.com/imganalyse/2013/09/762.html  full text online
    "A Practical Tutorial on Image Processing Based on Vision Assistant"
    Chapter 2: Interface and Menus       67
    Section 1: Startup and welcome screen  67
    http://shixinhua.com/imganalyse/2013/01/278.html
    Section 2: Functional interfaces  73
    http://shixinhua.com/imganalyse/2013/01/280.html
    Acquire Images interface       73
    Browse Images interface       77
    Process Images interface       79
    Section 3: File menu    86
    http://shixinhua.com/imganalyse/2013/01/283.html
    Open Image          86
    Open AVI File         87
    Save Image  87
    New Script   91
    Open Script  92
    Save Script   92
    Save Script As  93
    Acquire Image     93
    Browse Images   93
    Process Images   94
    Print Image 94
    Preferences 95
    Exit       97
    Section 4: Edit menu   97
    http://shixinhua.com/imganalyse/2013/01/284.html
    Edit Step       97
    Cut       97
    Copy     98
    Paste   98
    Delete 98
    Section 5: View menu 98
    http://shixinhua.com/imganalyse/2013/01/286.html
    Zoom In        99
    Zoom Out     99
    Zoom 1:1      99
    Zoom to Fit  99
    Section 6: Image menu        99
    http://shixinhua.com/imganalyse/2013/01/288.html
    Histogram 101
    Line Profile   101
    Measure      101
    3D View        101
    Brightness   102
    Set Coordinate System    102
    Image Mask         102
    Geometry    102
    Image Buffer        102
    Get Image   102
    Image Calibration        102
    Image Correction         102
    Overlay         102
    Run LabVIEW VI       103
    Section 7: Color menu 103
    http://shixinhua.com/imganalyse/2013/01/289.html
    Color Operators  104
    Color Plane Extraction          104
    Color Threshold   104
    Color Classification      105
    Color Segmentation     105
    Color Matching    105
    Color Location      105
    Color Pattern Matching       105
    Section 8: Grayscale menu 105
    http://shixinhua.com/imganalyse/2013/01/291.html
    Lookup Table    107
    Filters  107
    Gray Morphology     107
    Gray Morphological Reconstruction        107
    FFT Filter  107
    Threshold     107
    Watershed Segmentation        107
    Operators    107
    Conversion  107
    Quantify       108
    Centroid       108
    Detect Texture Defects       108
    Section 9: Binary menu        108
    http://shixinhua.com/imganalyse/2013/01/293.html
    Basic Morphology    109
    Adv. Morphology      109
    Binary Morphological Reconstruction     109
    Particle Filter        109
    Binary Image Inversion        110
    Particle Analysis  110
    Shape Matching  110
    Circle Detection       110
    Section 10: Machine Vision menu 110
    http://shixinhua.com/imganalyse/2013/01/295.html
    Edge Detector     111
    Find Straight Edge        112
    Adv. Straight Edge        112
    Find Circular Edge        112
    Max Clamp  112
    Clamp (Rake)         112
    Pattern Matching         112
    Geometric Matching   112
    Contour Analysis 112
    Shape Detection  112
    Golden Template Comparison     113
    Caliper 113
    Section 11: Identification menu         113
    http://shixinhua.com/imganalyse/2013/01/296.html
    OCR/OCV     114
    Particle Classification  114
    Barcode Reader  114
    2D Barcode Reader     114
    Section 12: Tools menu      114
    http://shixinhua.com/imganalyse/2013/02/297.html
    Batch Processing          114
    Performance Meter     120
    View Measurements   120
    Create LabVIEW VI 121
    Create C Code 121
    Create .NET Code        122
    Activate Vision Assistant     123
    Section 13: Help menu       124
    http://shixinhua.com/imganalyse/2013/02/298.html
    Show Context Help  124
    Online Help  125
    Solution Wizard   125
    Patents         125
    About Vision Assistant         125
    Chapter 3: Acquiring Images  126
    Section 1: Acquire Image  127
    http://shixinhua.com/imganalyse/2013/02/299.html
    Section 2: Acquire Image (1394, GigE, or USB) 128
    http://shixinhua.com/imganalyse/2013/02/301.html
    Main tab   129
    Attributes tab   138
    Section 3: Acquire Image (Smart Camera) 143
    http://shixinhua.com/imganalyse/2013/02/302.html
    Section 4: Simulate Acquisition 145
    http://shixinhua.com/imganalyse/2013/02/304.html
    Chapter 4: Browsing Images  150

    Wetzer wrote:
    You can think of the purple "Image" Wire as a Pointer to the Buffer containing the actual Image data.
    That's what I thought too, but why would it behave differently during execution highlighting? (sorry, I don't have IMAQ, just curious )
    LabVIEW Champion. Do more with less code and in less time.

  • Customise Generated File "Reference.cs"

    We have the following setup. A VB6 application calls a .NET COM component
    that acts mainly as a forwarding system calling methods on a web service.
    When you open up the VB6 object browser and look at the methods exposed by
    the .NET COM Component, you will also see the public methods exposed by the
    WebService. I do not want the VB6 application to know about the WebService.
    If I were to mark the WebService public methods as ComVisible(false), then
    they would not be exposed to the .NET COM component. I can, however, add this
    attribute manually in the Web References\Reference.cs file present in the .NET COM
    project.
    Is there a way to have "Update Web Reference" and "Add Web
    Reference" automatically add the ComVisible(false) attribute to each class
    in the generated Reference.cs file?


  • Profile Performance in LabVIEW vs. Performance Meter in Vision Assistant: results don't match

    Hi everyone,
    I faced a strange problem about performance timing between these two measurements.
    Here is my test
    - I used the built-in Bracket example provided with Vision Assistant. It uses two pattern matches, one edge detection algorithm and two calipers (one for calculating a midpoint and one for finding the angle between three points).
    - When I ran the script provided by NI in Vision Assistant, it took an average inspection time of 12.45 ms (this varies from 12-13 ms; my guess is that this small variation is due to my CPU/processing load).
    - Then I converted the script to a VI and used Profile Performance in LabVIEW, and surprisingly it shows far more than expected, almost ~300 ms (at first I thought it was because of rotated search, etc., but none of that makes sense to me here).
    Now my questions are:
    - Are the algorithms used in both tools the same? (I thought they were.)
    - IMAQ Read Image and Vision Info takes more than 100 ms in LabVIEW, which doesn't count for Vision Assistant. Why? (I thought the template image might be loaded into a cache; am I right?)
    - What about IMAQ Read File? (It doesn't count for Vision Assistant? In LabVIEW it takes around 15 ms.)
    - Similarly, the pattern match in Vision Assistant takes around 3 ms (this is also not consistent); in LabVIEW it takes almost 3 times as long (around 15 ms).
    - Is this a bug, am I missing something, or is this how it is expected to behave?
    Please find attachments below.
    -Vision Assistant-v12-Build 20120605072143
    -Labview-12.0f3
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved and help the author by giving kudos
    Certified LabVIEW Associate Developer (CLAD) Using LV13
    Attachments:
    Performance_test.zip ‏546 KB

    Hmm Bruce, thanks again for the reply.
    - When I first read your reply, I was OK with it. But after reading it multiple times, I realized that you didn't check my code and explanation first.
    - I have added the code and screenshots of the profile in both VA and LabVIEW.
    In both Vision Assistant and LabVIEW:
    - I am loading the image only once.
    This is accounted for in LabVIEW but not in VA, because the image is already in the cache. But what about the time to put the image into the cache?
    I do understand that when capturing the image live from a camera, things are completely different.
    - Loading the template image multiple times??
    This is where I was very confused, because I didn't even consider doing that. I am well aware of that issue.
    - Run Setup Match Pattern once?
    Sorry, so far I haven't seen any example that does pattern matching on multiple images and runs Setup Match Pattern every time. But it is negligible time, so I wouldn't mind.
    - Loading images for processing and loading a different template for each image?
    You are completely mistaken here, and I don't see how that relates to my specific question.
    Briefly explaining again:
    - I open an image both in LabVIEW and in VA.
    - I create two pattern match steps, and the calipers (negligible).
    - The pattern match step in VA shows a longest time of 4.65 ms, whereas IMAQ Match Pattern shows 15.6 ms.
    - I am convinced about the IMAQ Read and Vision Info timing, because it only counts in the initial phase when running an inspection on multiple images.
    But I am running it only once, so shouldn't Vision Assistant show that time as well?
    - I do understand that LabVIEW has many more features for parallel execution and other things than Vision Assistant.
    - Yes, regarding the 100 ms vs. 10 ms point, I completely agree. I take the Vision Assistant profile timings as the reference values (correct me if I am wrong).
    - I especially like the last line: you cannot compare the speeds of the two methods.
    Please let me know whether I am thinking about this completely wrongly or am at least somewhat on the right path.
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved and help the author by giving kudos
    Certified LabVIEW Associate Developer (CLAD) Using LV13
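
    One way to make the two numbers comparable is to time the one-time costs (reading the image and template files from disk, Setup Match Pattern) separately from the steady-state per-image inspection, which is roughly what the Vision Assistant Performance Meter reports. The sketch below only illustrates that measurement idea; loadImagesAndSetup and inspect are hypothetical stand-ins for the IMAQ calls in the generated VI, not real functions.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-ins for the generated code's one-time work
       (IMAQ Read File, read the template, Setup Match Pattern) and its
       per-image work (pattern matches, edge detection, calipers). */
    static void loadImagesAndSetup(void) { /* one-time file reads and setup */ }
    static void inspect(void)            { /* per-image inspection steps    */ }

    int main(void)
    {
        clock_t t0 = clock();
        loadImagesAndSetup();              /* counted by the LabVIEW profiler,    */
        clock_t t1 = clock();              /* but not by the VA Performance Meter */

        const int runs = 100;
        for (int i = 0; i < runs; ++i)     /* steady-state inspection time: this  */
            inspect();                     /* is the number to compare against VA */
        clock_t t2 = clock();

        printf("one-time setup : %.1f ms\n", 1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("per inspection : %.3f ms\n", 1000.0 * (double)(t2 - t1) / CLOCKS_PER_SEC / runs);
        return 0;
    }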
