Stereo Vision Calibration

Hello everyone,
Can anyone please explain the grid calibration procedure for a stereo vision application in LabVIEW, along with the following queries related to it:
1) Will grid calibration provide the 3D position of an object?
2) Is it possible to prepare a test stereo setup for stereo vision using USB cameras (webcams)?
3) How do I confine the field of view of the camera to a certain volume, e.g. 100 x 100 x 100 mm?
Thanks

Hello,
perhaps the following links can help you a bit:
https://decibel.ni.com/content/blogs/kl3m3n/2014/07/24/stereovision-in-labview-based-on-correspondin...
https://decibel.ni.com/content/blogs/kl3m3n/2013/07/26/stereo-setup-parameters-calculation-and-labvi...
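For your third question, a quick pinhole-model calculation gives a feel for the lens you need: the imaged width at the working distance is roughly sensor width x distance / focal length. A minimal sketch in Python (the sensor width and working distance are assumptions; substitute your camera's values):

```python
# Rough pinhole-model lens sizing: confine the horizontal field of view
# to ~100 mm at the working distance. All numbers here are hypothetical.
sensor_width_mm = 4.8        # e.g. a typical 1/3" sensor (assumption)
working_distance_mm = 300.0  # camera-to-scene distance (assumption)
target_fov_mm = 100.0        # desired width of the imaged area

# Imaged width at distance Z:  fov = sensor_width * Z / focal_length
# Solve for the focal length that confines the view to the target width:
focal_length_mm = sensor_width_mm * working_distance_mm / target_fov_mm
print(f"Required focal length: {focal_length_mm:.1f} mm")  # ~14.4 mm here
```

The depth extent (the third 100 mm) is then set by the disparity search range of the stereo matcher rather than by the lens.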
Best regards,
K
https://decibel.ni.com/content/blogs/kl3m3n
"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

Similar Messages

  • Stereo vision calibration failed

    Hi guys,
    I'm using the Calibrate Stereo Vision.vi (NI Vision Development Module 2013) to calibrate my stereo vision system, but I failed and I don't know why. My experimental setup is as follows:
    two Aviator 1600gc cameras, with 16 mm focal-length lenses and a baseline of 1 cm;
    a calibration grid of 16 x 16 dots (dx = dy = 4 cm), at a distance of 2 meters;
    two 500 W light projectors.
    I followed the VI instructions and made a lot of trials over 2 days, but I never succeeded in calibrating. More specifically, in the grid coverage control I always obtain NaN %, as shown in the attached picture.
    I continued to acquire images until all of the grid coverage controls below reached 100%, but the VI doesn't proceed to the grid vertex calibration.
    Can someone help me?
    Thanks in advance

    Hi there,
    I have exactly the same problem! I hope it's okay posting in this topic.
    I followed the instructions at http://zone.ni.com/reference/en-XX/help/370281U-01/imaqvision/stereo_vision_example/ but my top grid coverage just won't show any values.
    The grid is from the NI folder, so it should actually be okay.
    Cams: 2x DMK 23F618 from ImagingSource
    Lenses: 2x 8 mm F1.2
    Furthermore, on most attempts I don't get any values at all. Even when all points are detected, it won't show any values. This mostly happens when I use external light sources (tried it with infrared and a common 20 W halogen lamp running on DC).
    I hope you can help me out here! Thanks
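    An aside for anyone debugging the same NaN coverage: it can help to rule out lighting and contrast by checking whether the dot grid is detectable at all with an independent tool such as OpenCV. A small Python sketch (the file name and grid size are placeholders for your own setup):

    ```python
    # Sanity check: can the calibration dot grid be detected at all in this
    # image? If OpenCV fails too, suspect exposure/contrast/glare rather
    # than the NI calibration step. File name and grid size are assumptions.
    import cv2

    img = cv2.imread("left_grid.png", cv2.IMREAD_GRAYSCALE)
    pattern = (16, 16)  # dots per row/column of the calibration target

    found, centers = cv2.findCirclesGrid(img, pattern,
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    print("grid found:", found)
    if found:
        vis = cv2.drawChessboardCorners(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR),
                                        pattern, centers, found)
        cv2.imwrite("grid_check.png", vis)  # inspect the detected dot ordering
    ```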

  • Stereo Vision and Projected Light

    Hello!
    I'm looking for examples of the technologies mentioned in the subject. Has anyone dealt with these? If so, could you share an example?
    I have found an example for stereo vision in the Vision Development Module's help, but I'm looking for other ones, and so far I've had no success finding any.
    A light projection example would also help me out.
    Thank you!

    Hello,
    take a look at the following discussion:
    http://forums.ni.com/t5/Machine-Vision/Stereo-library-2012-pointers/m-p/2171812/highlight/true#M3672...
    Also, I am attaching a set of VIs (LV2012) that I used some time ago to obtain a depth image from a stereo image pair. They cover three different stages:
    1. Image acquisition,
    2. Stereo calibration,
    3. Measurements.
    When there is enough (distinct) texture on the measured surface, the projector is not necessary. But in low-texture setups, it can greatly enhance the quality of the measured data. If the projector is used, take care that the pattern is as random as possible. This should give the correlation algorithm a good basis. Take a look at the following paper:
    http://www.aurelien.plyer.fr/wp-content/uploads/2012/04/ptext.pdf
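    If you need a quick way to generate such a random pattern for the projector, something like the following works (a Python/NumPy sketch; the resolution and dot size are just example values):

    ```python
    # Generate a high-entropy random dot pattern for the projector. Coarse
    # noise is upscaled so each "dot" spans several projector pixels and
    # survives defocus. Resolution and dot size below are assumptions.
    import numpy as np
    import cv2

    rng = np.random.default_rng(0)
    coarse = (rng.random((192, 256)) > 0.5).astype(np.uint8) * 255  # binary noise
    pattern = cv2.resize(coarse, (1024, 768),
                         interpolation=cv2.INTER_NEAREST)  # 4x4-pixel dots
    cv2.imwrite("projector_pattern.png", pattern)
    ```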
    About "Light Projection" - if you are referring to structured lighting techniques take a look at the following page:
    http://mesh.brown.edu/byo3d/index.html
    Hope this helps you in some way.
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
    Attachments:
    stereo.zip ‏229 KB

  • Where is the origin of the Z axis in a stereo vision system?

    Hello, everyone
    I use the LabVIEW Stereo Vision module. After I calibrate my stereo vision system, I want to verify its accuracy. I can get every point's depth in my picture, but where is the origin of the Z axis?
    In the newest NI Vision Concepts Help, it is written that NI Vision renders 3D information with respect to the left rectified image, such that the new optical center will be at the (0, 0, Z) position. Is the origin of the Z axis on the CCD of the left camera, or at the optical center of my left camera's lens?
    Can anyone help me?
    CLAD
    CAU
    Attachments:
    未命名.jpg ‏63 KB

    Hello,
    I would say that the coordinate system origin is at the optical centre, i.e. the camera's projection centre.
    So yes, the optical centre of the left camera's lens. This seems most logical to me...
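    A quick way to verify this empirically: tape-measure the distance from the left lens to a flat target and compare it with the depth your system reports there. For a rectified pair the relation is Z = f*b/d, with Z measured from the left camera's projection centre. A tiny sketch (Python; all numbers are hypothetical):

    ```python
    # Depth from disparity for an ideal rectified stereo pair:
    #   Z = f_px * b / d, measured from the left camera's optical centre.
    # All values below are hypothetical placeholders.
    f_px = 2400.0        # focal length in pixels (from your calibration)
    baseline_mm = 100.0  # magnitude of the translation between the cameras
    disparity_px = 60.0  # measured disparity of the target

    Z_mm = f_px * baseline_mm / disparity_px
    print(f"depth from the left optical centre: {Z_mm:.0f} mm")  # 4000 mm here
    ```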
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Stereo Vision and Light Projection (or structured light)

    Hello!
    I'm looking for examples of the technologies mentioned in the subject. Has anyone dealt with these? If so, could you share an example?
    I have found an example for stereo vision in the Vision Development Module's help, but I'm looking for other ones, and so far I've had no success finding any.
    A light projection example would also help me out.
    Thank you!

    Hi gbbalint,
    Please do not post duplicate threads. You can find the other post at http://forums.ni.com/t5/Machine-Vision/Stereo-Vision-and-Projected-Light/td-p/2481264 .
    Martin

  • Stereo vision using commercial DSLR

    Hi. I'm a PhD student in CE.
    I'm going to build a stereographic equipment setup using two commercial DSLR cameras and LabVIEW.
    The specific application is accurate target position estimation using a stereoscopic (stereo vision) system.
    I know there are SDK packages from Nikon and Canon for controlling a camera from a computer. However, they don't provide a multi-camera control solution, so it is hard to construct one. Would it be possible to control the cameras using LabVIEW? Here, "control" means the camera control functions (such as live view, zoom in/out, etc.) that are provided by commercial camera tethering software like 'Nikon Camera Control Pro 2'.
    I heard that HYTEK (http://www.hytekautomation.ca/ ) made LabVIEW VIs for Nikon cameras, but I'm not sure whether they work for my setup or are suitable for stereo vision. I would prefer to use software that NI provides.
    Does anyone know about this topic? All suggestions/comments are welcome.

    Dear chulminy,
    You can use USB cameras or GigE Vision cameras, which are much cheaper than DSLRs, and LabVIEW supports them. You can install the Vision Acquisition Software; the cameras can then be configured in MAX and used in LabVIEW.
    CLAD
    CAU

  • Stereo vision using 2 sync'd ps3 eye

    Hello to the whole forum!
    I'm new to Arch Linux and I've been trying to make use of a synchronized pair of PS3 Eye webcams. The hardware has been confirmed to be synchronized, but I've been unable to make that useful on the software side. I'm using the OpenCV libraries, programming in Python, and have tried, unsuccessfully, to use the mkoval solution ( https://github.com/mkoval/stereo_webcam ).
    Have any of you had experience with it? If so, what did you do to make it work?
    Nice to find such an active community here!
    Thank you and have a nice weekend!
    Last edited by scherzando (2012-09-29 17:33:25)
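    On the software side, the usual OpenCV pattern is to grab() both frames before retrieve()-ing either; this keeps the skew between the two captures small even before any hardware sync is exploited. A minimal Python sketch (the device indices are assumptions; check which /dev/video* nodes the cameras enumerate as):

    ```python
    # Near-synchronized capture from two webcams with OpenCV. grab() both
    # frames back-to-back, then decode with retrieve(), to minimize the
    # time skew between the two exposures. Device indices are assumptions.
    import cv2

    cap_l = cv2.VideoCapture(0)
    cap_r = cv2.VideoCapture(1)

    while True:
        # Trigger both captures before any (slow) decoding happens.
        if not (cap_l.grab() and cap_r.grab()):
            break
        ok_l, frame_l = cap_l.retrieve()
        ok_r, frame_r = cap_r.retrieve()
        if not (ok_l and ok_r):
            break
        cv2.imshow("left", frame_l)
        cv2.imshow("right", frame_r)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap_l.release()
    cap_r.release()
    cv2.destroyAllWindows()
    ```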


  • Cannot get repeatable stereo calibration

    Hello all. I am struggling to get a repeatable stereo calibration. Hopefully someone can give me some pointers. A little bit about my setup:
    I have a pair of AVT Manta GigE cameras (1292 x 964) paired with Tamron 23FM25SP lenses. The cameras are mounted on a rigid (12 mm thick) aluminium plate and are currently set around 895 mm apart. The cameras are toed in slightly so that the centres of the images intersect around 4 metres from the cameras. The cameras are securely mounted via adapter plates and bolts. They cannot move.
    I have a calibration grid along the lines of the NI example grid. Mine is 28 x 20 black dots spaced around 13 mm apart (centre to centre), with each dot around 5 mm in diameter. I am aware of the NI guidelines on suitable calibration grids, and mine seems to be well within the recommended bounds. The grid was made by laser-printing onto A3 paper and then using spray adhesive to fix it to a rigid carbon fibre panel. It is flat and doesn't deform in use.
    So, here is my problem: when I use the calibration grid to calibrate the cameras, I sometimes get a good calibration and sometimes not. When I get a good calibration and attempt to repeat exactly the same process, I get a different result.
    What do I mean by a good calibration? When I go on to use the stereo calibration in my system, which tracks a circular feature in 3D space, I get accurate measurements (well sub-mm in the cross-camera axes and ~1 mm depth resolution, over a range of 600 mm in each axis centred around 3000 mm from the cameras). The centres of the circular features in each image lie on the same horizontal image line, as expected in the rectified images for a well-calibrated camera pair. When I get this 'good' calibration, the distance between the cameras as returned by the 'IMAQ Get Binocular Stereo Calibration Info 2' VI (the magnitude of the translation vector) is around the correct distance of 895 mm.
    However, when I perform the calibration many times I get quite a spread of camera separations (up to 20 mm either side of correct). When there is a significant error in the camera separation, the accuracy of the system degrades dramatically and the centres of the circular feature lie on progressively further-apart horizontal lines (there's one distance from the cameras at which they're on the same line, and they move apart either side of that distance).
    I have gathered a set of 10 images of the calibration target and set up a VI to use a subset of the images for the calibration process, iterating through permutations to investigate the repeatability. I get a similar spread of results for the inter-camera distance.
    Does anyone have a feel for whether what I'm trying to do is sensible / achievable? Any tips for repeatable calibration? For instance, should the calibration grid be at a constant distance from the cameras when it is presented at the different angles, or should a range of distances be used? If it should be a constant distance, how accurately should that distance be maintained?
    Thanks, Chris
    Regards,
    Chris Vann
    Certified LabVIEW Developer

    Hi Christophe. Thanks for taking an interest. I am pretty sure that structured light is not relevant to the calibration stage being discussed here. Structured light is a useful technique for introducing detail to otherwise bland areas of an image, to give feature matching algorithms something to match against, but I don't see how it's relevant to calibrating against a grid of dots. Happy to be corrected, of course...
    I have been using the NI example "Stereo Vision Example.vi" located in C:\Program Files (x86)\National Instruments\LabVIEW 2012\examples\Vision\3. Applications, and have also created my own system based upon that example. I get the same poor results with both. The path and file you suggested are not present on my machine (I'm running LV2012). Is the example you suggested the same? Maybe I should try it. Any idea where I can get hold of it?
    I have been using the techniques you suggest of presenting the calibration grid at a variety of angles and ensuring good coverage of the fields of view. I have spent upwards of 20 hours experimenting with different techniques and approaches and cannot get repeatable results. But with the MATLAB-based approach from Caltech, using the same techniques, I get good results. I am becoming increasingly confident there is an issue in the LabVIEW implementation.
    Thanks,
    Chris
    Regards,
    Chris Vann
    Certified LabVIEW Developer
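    For what it's worth, the subset experiment Chris describes is straightforward to reproduce outside LabVIEW, which can help localize whether the spread comes from the data or from the implementation. A sketch in Python/OpenCV (objpoints, imgpoints_l, imgpoints_r and the initial intrinsics are placeholders for your own grid detections and single-camera calibrations):

    ```python
    # Calibrate on random 6-image subsets of 10 image pairs and report the
    # spread of the recovered baseline |T|. The point lists and initial
    # intrinsics (K*_init, d*_init, image_size) are hypothetical names for
    # data you already have from grid detection and per-camera calibration.
    import itertools
    import numpy as np
    import cv2

    baselines = []
    for subset in itertools.combinations(range(10), 6):
        idx = list(subset)
        ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
            [objpoints[i] for i in idx],
            [imgpoints_l[i] for i in idx],
            [imgpoints_r[i] for i in idx],
            K1_init, d1_init, K2_init, d2_init, image_size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        baselines.append(np.linalg.norm(T))  # same units as the grid spacing

    print(f"baseline: {np.mean(baselines):.1f} +/- {np.std(baselines):.1f} mm")
    ```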

  • Problems with camera calibration

    Hi!
    I am trying to create a project that realises stereo vision with two cameras using the VIs provided by LabVIEW. During development I ran into problems with the calibration of my camera(s).
    My questions are:
    I would like to calibrate the camera images with the IMAQ Learn Camera Model VI. For this purpose I have to show the camera a calibration image (containing a grid of points) at different angles. However, even after multiple shots the value of the Insufficient Data part of the Internal Parameters output remains TRUE. I experienced the same thing while running the Stereo Vision Example.vi. I held the calibration image according to the directions in both cases. I would like to know: what does this VI need to finish its computations successfully? Is the use of other VIs necessary (apart from the IMAQ Calibration Target To Points - Circular Dots VI)?
    Another question: if I give a TRUE value to the Add Points And Learn input, will the VI use all previously given data for the computation, or only the currently received values until I change it back to FALSE and start accumulating data again after that?
    I generated the data for the Reference Points input with the IMAQ Calibration Target To Points - Circular Dots VI. Are there other VIs with which I should use the Learn Camera Model VI?
    Apart from the things listed above, a general description of this VI (and the other calibration VIs) would be very useful, and maybe a simple example program that illustrates the function and operation of the Learn Camera Model VI.

    Hi nagy.peter.2060!
    I found just a few hints and an example:
    Learn Camera Model
    This VI has two image inputs: Calibration Template Image and Grid Image.
    When only the first image is supplied, wire it to Calibration Template Image.
    The grid of the Calibration Template Image is learned only if no camera model is attached to the image.
    If no camera model is attached, the VI learns the grid, calculates the camera model, and attaches the calibration information (calInfo) to the Calibration Template Image.
    If both the Calibration Template Image and a Grid Image are present, the VI learns the grid of the Grid Image and attaches the calInfo to the Calibration Template Image.
    I searched for examples related to camera calibration.
    Example number 1666 illustrates how to use a calibration grid, either with a live acquisition or from a file, to correct for perspective distortion. The example documents the calibration process as you learn a grid, apply the calibration information to your image, and then either correct the entire image or convert individual pixels to real-world distances. Link: http://ftp.ni.com/pub/devzone/epd/1666.zip
    I found another example without a description, but it uses the Learn Distortion Model VI. Please find the image of the VI attached; I hope it will be useful.
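    For intuition, the same accumulate-views-then-solve pattern looks like this in OpenCV (a Python sketch, not the NI implementation; the file names and grid geometry are placeholders). The solver needs the grid seen from several distinct angles before it can estimate the model, which is the analogue of the Insufficient Data flag staying TRUE:

    ```python
    # Accumulate grid detections across several shots, then solve for the
    # camera model. File names, grid size, and dot pitch are assumptions.
    import numpy as np
    import cv2

    pattern = (10, 7)   # dots per row/column of the target
    grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 5.0  # 5 mm pitch

    objpoints, imgpoints = [], []        # accumulated across shots
    for fname in ["view0.png", "view1.png", "view2.png", "view3.png"]:
        img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, centers = cv2.findCirclesGrid(img, pattern)
        if found:                        # keep only views where the grid was seen
            objpoints.append(grid)
            imgpoints.append(centers)

    if len(objpoints) >= 3:              # enough distinct views accumulated
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            objpoints, imgpoints, img.shape[::-1], None, None)
        print("reprojection RMS:", rms)
    else:
        print("insufficient data - show the grid from more angles")
    ```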
    Balazs Nagy
    Attachments:
    vision.png ‏9 KB

  • Point registration using cpd for camera calibration

    Hi,
    I do not know whether or not I could use Point Set Registration: Coherent Point Drift (CPD) for correcting the error of my 3D positioning using an overhead camera.
    I have already done the intrinsic and extrinsic camera calibration, but I still have errors between the measured position of my tracked object and its true location. (I track a board with 4 blobs, which should give me the x, y, z of the tracking board.)
    I would like to find the mapping between my measurements and the true positions, which I can then use for new points to decrease the positioning error.
    I have been playing around in MATLAB with the CPD code here: https://sites.google.com/site/myronenko/research/cpd
    I can find the transform, but I don't know how to use it for new data points. It seems the transformation matrix is a set of weights for each data point it has been given. I added new points to the Y set and I get new positions for those points, which seem to be OK, but I really have no idea how the method deals with unseen points or how the transformation can be used for new data.
    Thanks a lot for your help

    Hi zeinab.t,
    I am not quite sure what exactly you want to know.
    From your explanation, I think the root issue is an incorrect calibration of the vision system. To go a bit deeper:
    Calibration is used most frequently in stereo vision systems to calibrate the two cameras so that correct 3D information is obtained from the acquisition. You wrote that you are using only one camera. So how do you calibrate the camera? Which functions do you use? How is your vision application set up? A small sketch may be helpful. What software do you use?
    About the tool in the link, I cannot say anything, since I do not know how it works in the background.
    I think we should keep our eyes on the root cause, so that you do not need any transformation steps that could potentially introduce errors into the measurements.
    Best regards,
    Melanie
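    As a footnote on the original CPD question: the non-rigid CPD transform is parameterized by per-point weights, so it does not extend naturally to unseen points, whereas a parametric fit does. If a global correction is still needed after the calibration itself is fixed, a least-squares affine fit is the simplest candidate. A Python sketch (measured, true, and new_measurements are hypothetical N x 3 arrays of corresponding positions):

    ```python
    # Fit an affine correction  true ~= measured @ A + t  by least squares.
    # Unlike CPD's per-point weights, the fitted transform applies directly
    # to new, unseen measurements. Array names here are hypothetical.
    import numpy as np

    def fit_affine(measured, true):
        X = np.hstack([measured, np.ones((len(measured), 1))])  # homogeneous coords
        M, *_ = np.linalg.lstsq(X, true, rcond=None)             # M is 4 x 3
        return M

    def apply_affine(M, pts):
        return np.hstack([pts, np.ones((len(pts), 1))]) @ M

    M = fit_affine(measured, true)                 # fit the correction once
    corrected = apply_affine(M, new_measurements)  # then apply it to new points
    ```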

  • Stereo library 2012 pointers

    Dear programmers,
    I am trying to evaluate the new Stereo Vision Library, and since I am fairly new at this, I would greatly appreciate an advice or two.
    My (simple) setup is:
    - Two DSLR cameras (different models, Canon 50D and 350D)
    - Different lenses, but the focal lengths were manually set to 35 mm
    - Same lighting (outdoor)
    - Baseline distance ~200 mm
    I calibrated the single cameras using a camera model and also corrected perspective distortion. I used a grid (shown in the original post) in 20 different positions. Here, I had no problem.
    A static scene was considered, and the cameras were set on a 10 s timer.
    Moving on, I used the stereo library to calibrate the stereo system and saved the stereo calibration data.
    I used the semi-global block matching algorithm to find the disparity image. Based on the depth equation Z = f*b/d, where f = 35 mm and b = 200 mm, I calculated the minimum disparity and the number of disparities for my working distance (approx. 500-3000 mm) -> min. disparity = 0, number of disparities = 16.
    Using a small window size (7 x 7), I calculated the disparity for the acquired images (an example left/right image pair is attached in the original post).
    The disparity image I get (non-interpolated) is also shown there.
    The depth image isn't calculated (blank image) for some reason.
    What am I doing wrong? I know I introduced some errors with my simplified system (the cameras are not fixed, the optics are not the same, etc.), but what is wrong with the resulting disparity?
    Is the problem the vertical misalignment of the images? From what I can gather, the biggest problem is the texture matching. I expected more segmented images (more areas of equal depth). Is there not enough texture to match the two images?
    If I try the example that ships with the Vision Module (Stereo Vision.vi), it works OK.
    Please help with an explanation. And if I made a mistake, please be patient with me.
    Thank you and best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

    Ohh... I forgot to mention that I reduced the image sizes to 600x400. My fault. I also made them grayscale. I am passing your images (rescaled and grayscaled) in the attachment (grid as well as object images; again, for this set too, swap left and right). Use them. For calibration, I did the same thing (rescaled and grayscaled the grid images). Finally, I used the Calibration Interface to calibrate the images. I passed the stereo calibration file to you in the last response; use that along with the images I have sent. You will then have to change the number of disparities and see whether you can get the disparity and depth properly or not. I have tried here, but due to the different sensors the responses are different. So, while it gives something, it is not too clear. For example, for image set 18 I am getting a depth of the table starting at 57 inches and ending at 76 inches, if I take the distance between two dots in your grid as 1 inch. I would recommend the following for your future endeavours:
    1. If possible, use similar sensors (similar cameras).
    2. Capture smaller images (600x400) for your convenience.
    3. You do not require 20 grid images; 6-7 grid images with sufficient translation and angle coverage should do.
    4. Work in monochrome mode.
    5. Use textured images (or use patterned lighting, if possible).
    While it is slightly involved to set up stereo, it is very easy thereafter. So bear with the process for now. Typically, for an informed user (who has already set up the system in the past), it takes 15-20 minutes to set up, capture images, and calibrate.
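    As an aside, the disparity search range can be sanity-checked directly from the geometry; with 35 mm lenses and a 200 mm baseline, the full-resolution range is far larger than 16, which is another reason the downscaled images behave better. A Python sketch (the pixel pitch is an assumption; at 600x400 the focal length in pixels, and hence the range, shrinks by the downscale factor):

    ```python
    # Disparity search range from stereo geometry. The focal length must be
    # converted to pixels: f_px = f_mm / pixel_pitch_mm. Numbers below are
    # hypothetical; substitute your sensor's pixel pitch.
    import math

    f_mm, pixel_pitch_mm = 35.0, 0.005        # 5 um pixels (assumption)
    f_px = f_mm / pixel_pitch_mm              # 7000 px at full resolution
    baseline_mm = 200.0
    z_near_mm, z_far_mm = 500.0, 3000.0       # working distance range

    d_max = f_px * baseline_mm / z_near_mm    # disparity at the nearest point
    d_min = f_px * baseline_mm / z_far_mm     # disparity at the farthest point

    num_disp = 16 * math.ceil((d_max - d_min) / 16)  # matchers want a multiple of 16
    print(f"min disparity ~ {d_min:.0f}, number of disparities ~ {num_disp}")
    ```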
    Regards,
    Ronak.
    Attachments:
    Images.zip ‏4403 KB

  • Nvidia 3D Vision Discover (Red/Cyan) Anaglyph on Thinkpad W530

    Is the ThinkPad W530 capable of displaying anaglyph 3D?
    According to the system test on the Nvidia website, the W530 is capable: http://systemrequirementslab.com/cyri_if/1066/0/8174
    It gives the result below, but I have not been able to find the setup option in the Nvidia Control Panel on Windows 7 64-bit.

    I have "Enable 3D vision" and "Disable 3D vision" in my start menu, and under "Manage 3D settings" there are settings for Stereo Vision, but attempting to run the 3D preview results in an error stating "the primary display adapter does not support 3D vision"
    Looks like it could also be the optimus
    http://3dvision-blog.com/5532-nvidia-optimus-technology-and-3d-vision-dont-go-well-together/
    but that has nothing to do with anaglyph. either way. verde drivers may be of help, once they are released.
    W530(2436-CTO): i7-3720QM, nVidia Quadro K2000M,16GB RAM, 500 GB hard drive, 128GB mSATA SSD, Ubuntu 14.04 Gnome, Centrino Ultimate-N 6300.
    Yoga 3 Pro: Intel Core-M 5Y70, Intel HD 5300, 8GB RAM, 128GB eMMC, Windows 8.1, Broadcom Wireless 802.11ac.

  • How to get 3D coordinates of reflective markers using two cameras?

    Hi,
    I am very new to LabVIEW (in fact, to any coding at all) and am helping my adviser get the 3D coordinates of a few reflective markers using two cameras. I am able to read the marker coordinates (x, y) from the two cameras simultaneously by processing the data in real time using code generated from Vision Assistant. However, we want to get the depth position by triangulating the markers.
    I have seen stereo vision do something similar to this, but I think stereo vision may not work with our calibration frame (markers), and we don't need the whole depth image, only the markers' z coordinates. I also want to use a region of interest to mask out other regions that are creating reflections. However, I am not sure whether triangulation would work if we select a region of interest (as the origin of the camera coordinates would change after selecting the ROI).
    I saw this link, http://kwon3d.com/theory/dlt/dlt.html#3d , where they used the DLT (direct linear transformation) method, but it is too much to code from scratch. Is there a subVI in LabVIEW or some sort of prewritten code that can be customized? Can anyone please give me some advice on how to solve this problem?

    Well, in theory, if you know exactly where the cameras are pointed, how far apart they are, and how far the reflector images are above or below the horizon and to the right or left of the center line, a little simple math should give you the answer. Concerning the ROI, I would think all you need to know is where the ROI is relative to the horizon and centerline. You could then calculate an absolute position from there, which would also give you the angles you need.
    Unfortunately, I don't know of any readily available code. But I'm sure there is some! With the emphasis on FIRST robotics, I've got to believe that judging distances in 3D space is something for which there is a lot of code.
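    For reference, the "simple math" is two-view triangulation, which OpenCV wraps directly once each camera's 3 x 4 projection matrix is known from calibration. A Python sketch (P1, P2, and the pixel coordinates are placeholders):

    ```python
    # Triangulate marker positions from two calibrated views. P1 and P2 are
    # the 3x4 projection matrices from calibration; the pixel coordinates
    # below are hypothetical marker centroids (2xN arrays, one column each).
    import numpy as np
    import cv2

    pts_left = np.array([[412.3], [305.8]])   # marker centroid, left image
    pts_right = np.array([[371.9], [306.1]])  # marker centroid, right image

    Xh = cv2.triangulatePoints(P1, P2, pts_left, pts_right)  # 4xN homogeneous
    X = (Xh[:3] / Xh[3]).T                                   # Nx3 world coords
    print(X)  # x, y, z of each marker

    # Re the ROI worry: add the ROI offset back to each centroid so the
    # pixel coordinates are in full-image coordinates before triangulating.
    ```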
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Secret dragging tricks and keyboard shortcuts - are these published anywhere?

    I have searched for some and stumbled upon others. But I have not seen any of these published or posted anywhere. If you have, can you tell me where? Either way, you may find some useful ones here. Enjoy! (I think most of these work in Lion. Any keyboard shortcuts you know for Time Machine would be most welcome.)
    *** THE DOCK ***
    Press ^F3 to move to the dock. Use the arrow keys to select an icon. You can also use autocomplete (type the first few letters of the icon's name). Then press Opt-arrow_key to move the *icon* among the other icons on the Dock. Thus you can change the order of the icons on the Dock using only the keyboard.
    *** DRAGGING TRICKS ***
    o SAVE-AS SHEET TRICK
    Open an app's Save As sheet. Drag a folder or its proxy icon to anywhere in the Save As sheet except the sidebar and that updates the directory (folder) field with that folder. Drag a file or its proxy icon to the Save As sheet (anywhere except the sidebar) and that updates both the directory (folder) and file fields accordingly.
    Why do I do this? I find it easier to use the actual Finder, or I may already have a Finder window in the right directory.
    You can also click on a directory or file in the Save As sheet to populate the directory or file fields, respectively. Then edit the filename to a slightly different name.
    o COPY A FOLDER TREE TO A TEXT DOCUMENT
    Open a true text-only editor session using an app like TextWrangler (not TextEdit!). Drag a folder (or its proxy icon) from the Finder into the editor window. Voilà! The names of the folder and all its subfolders and files are pasted into the text buffer in properly indented form.
    o COPY A FILE TO A TEXT DOCUMENT
    Drag a file from the Finder into a text-only editor like TextWrangler or similar (not TextEdit!) and the _contents_ of the file (not its icon) are inserted into the document.
    o MAKE A NEW FILE THAT CONTAINS SELECTED TEXT
    Select some text. Drag it to a folder. This creates a "textClipping" file which contains the selected text. Opens as a "Finder document".
    o COPY A PHOTO FROM IPHOTO TO A FOLDER
    Open iPhoto. Drag (to copy) a photo to a folder. Makes a jpeg file of the photo in that folder.
    *** KEYBOARD TRICKS ***
    o MENUS AND LISTS
    Opt-U/D: Move highlight bar to top or bottom of most menus and lists. Does not work in Spotlight drop-down menu. Works in Snow Leopard Mac Mail (v 4.5, anyway) if you hold it down for a second: you get an error signal, but it works anyway.
    PgUp/PgDn | Home/End: Moves highlight bar to top or bottom of menus. Moves only the view for lists.
    Opt-Spacebar: Move highlight to the alphabetically first item in a menu (except in the Spotlight drop-down menu).
    o SPOTLIGHT DROP-DOWN MENU
    After selecting an item on the drop-down menu via the arrow keys:
    Cmd-Return: Open enclosing folder.
    o SPOTLIGHT WINDOW
    If ^F7 is set to "all controls", press ^F7 four times if it's the first search in a particular Spotlight window; two, otherwise. This moves you to the list of results.
    If ^F7 is set to "text boxes and lists only", just Tab.
    o CALENDAR WIDGET
    Home: go to the current year and month.
    Up, down, left, right arrows: go one year back, one year forward, one month back, one month forward. Hold a L or R arrow down to autorepeat through the years. (You can't do _that_ with the mouse!) Also works with U and D for the months.
    o SPELL CHECK
    Cmd-Shift-; - Open the spelling pane. But you knew that one. Press ^F6 to get to the pane. Tab around to everything except Define and Guess. With the focus on the replacements list, use the arrow keys to select a replacement word from the list. (Press the spacebar to run the control with the highlight perimeter and press Return to run the control with the solid highlighting.)
    o MAIL
    Cmd-I - Open Account Info window
    o NAVIGATING HELP PANES
    - Old-style help panes :
    Open the menu pane in the usual way. The Highlight perimeter will be on the search window.
    Tab: Move among the Search field, the page, and the controls on top. Add Shift to reverse.
    When on the page:
    Use the usual navigation keys -- Up, Down, Home, End, Page up, and Page Down, Spacebar, Shift-Spacebar -- to scroll through the page.
    Opt-Tab: move among the links on the page or items in a "Help Topics" list. The highlight perimeter will be rectangular. Press Return or Enter to click the selected link or item. Add Shift to reverse direction.
    Cmd-F: Open the Find bar. Move about with Tab. Move among the arrows with L or R arrow keys. To find a string: Type your search string in the Find field. Press Return or Enter. Press again to find the next occurrence, etc.
    To go to the page from the Find bar: Shift-Tab to the left or right arrow. Then press Shift-Tab two more times.
    To close the Find bar: Tab to Done and press the spacebar or just press Escape.
    You can return to the app via ^F6, but the help pane stays open.
    When the highlight perimeter is on one of the arrow controls, press Spacebar to click it. If the left arrow control is selected press the down arrow to see places going backward. If the right arrow control is selected press the Down arrow to see a list of places going forward. Either way, use the usual menu navigational shortcuts.
    When the highlight perimeter is on the Home control, press the down arrow to get a menu of apps. (The mouse way for this is to click and hold and "drag" and release as above.) The usual menu navigational shortcuts are valid. Also, you can press the spacebar to go Home.
    When the highlight perimeter is on the Gear button, press the spacebar or down arrow to get the drop-down menu. The usual menu navigational shortcuts are in effect. 
    (To do any of the above three with the mouse, click on the control and hold the mouse button down, then without releasing the mouse button, move the pointer to highlight the desired item on the menu and release the mouse button.)
    - New-style help panes:
    Open the menu pane in the usual way. The Highlight perimeter will be on the search window.
    Tab to move around to most things. The table of contents becomes keyboard-active one Tab-press past the Search field, at which point use the L and R arrow keys to navigate the Contents panel, and the U and D arrow keys to move down and up the page.
    To be able to use PageUp and PageDn, Tab to the back/forward button, then press Shift-Opt-Tab.
    Cmd-F: Opens the Find bar. Navigate with Tab and L/R arrow keys. When you are on this bar, press Shift-Tab repeatedly until the highlight perimeter moves out. Then press Shift-Tab once more to go back to the page, where the L and R arrow keys move you around the TOC sidebar and the U and D arrow keys scroll the page.
    AFAIK, you can choose Get Started or Browse Help only with the mouse.
    You can return to the app via ^F6, but the help pane stays open.
    AEF

    fane_j wrote:
    Pardon me if I screwed up the quoting.
    betaneptune wrote:
    I guess you meant that unfortunately these don't have them.
    (1) I meant that, unfortunately for your impressive work, there's nothing in it that hasn't already been documented elsewhere. The two items I mentioned contain your shortcuts, and many more besides.
    I don't see any of them in either. I have Pogue's book and they're not there. In fact, I emailed these tricks to him and he was impressed with some of them. And of course there are more. I never claimed to have an exhaustive list. In fact, I doubt anyone has such a list.
    (2) Mac OS X v10.6 has been out for, what is it, 3 years? If you think that there are any 'secret' shortcuts left, then you seriously underestimate the average Mac user. Some of that stuff is even older. For instance, the "Save-As Sheet Trick" dates back to Jaguar, or even before that. "Make a New File That Contains Selected Text" is older than Mac OS X itself!
    Well, first of all there's no need to get hostile.
    I don't underestimate the average Mac user. But I would venture to guess that most are heavily mouse oriented and would not even be interested in the keyboard tricks I posted. And I only said that I haven't seen them on any website, not that no one else in the world knows about them.
    Additionally, there have been things missed by the smartest of people. Stereo vision was missed even by Isaac Newton, one of the biggest geniuses to grace the planet! It was Wheatstone who first picked up on it. And it used to be thought that a cube flying by at relativistic speed would appear foreshortened in the direction of motion. In fact, as was noted decades after relativity was published, it actually appears rotated. If super-smart people could miss these things, then perhaps the few who publish these tricks may not know about them.
    Re the save-as trick and the new-file-contains-your-text tricks: fine, they're old, but so are many of the tricks that are posted on websites. So why some old ones but not others? How about cut and paste? I bet they're pretty old but that doesn't prevent people from posting them, as in your first reference!
    (3) Also unfortunately, your listing mixes up shortcuts and 'tricks' of different categories. For instance, "Copy a File to a Text Document" is not secret, it is not a trick, and it is not relevant. What an app does with a file dropped onto an open document of its own does not depend on the system, but on the app and how it was programmed. TextEdit inserts a text file's path while TextWrangler inserts the text file's contents not because the former is not a text editor, but because that's how each was designed to work. And TextWrangler's behaviour is documented in the accompanying manual.
    So one of my tricks is lame. I don't see that as that big a deal.
    Bottom line: I don't see any of them in the docs you referenced.
    AEF

  • Set file extension (.jpeg)

    Hi folks
    I made a VI where I capture pictures whenever I hit a boolean, and they are automatically saved in a folder with the names 0, 1, 2, 3, 4, 5, ..., but that isn't important for my problem. I use IMAQ Write File 2.vi to write my file to the set file path. This VI has the option to save the image as a .jpeg, .bmp, .png, etc. I select the .jpeg option, but when I go to the files they are shown without the .jpeg extension.
    I guess there is an easy solution to add the .jpeg extension after the file name, but I can't find anything.
    Besides that, there is something else, separate from the .jpeg issue.
    I am taking a lot of pictures of an object from different angles and also save them as in the example above. After I have taken all the pictures I want to process them, so I need to save each image together with its angle. I know that in NI Vision, calibration information can be saved with a picture in a .png file, but I was wondering whether this could also be done with your own information. If this is not possible, is there maybe another solution?
    Kind regards
    Ruts

    Append ".jpeg" (without the quotes) to the file name string. Do this using Concatenate Strings from the String palette. Post some code if you are having trouble.
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.
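    On the second part of the question (saving your own information, such as the capture angle, with each image): if you save as PNG, the format itself supports arbitrary text chunks, separate from NI's calibration-info mechanism. A sketch with Pillow in Python (file names and the key name are examples):

    ```python
    # Embed custom key/value metadata (e.g. the capture angle) in a PNG
    # text chunk, and read it back. File names and keys are examples.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    img = Image.open("0.png")            # one of the captured images
    meta = PngInfo()
    meta.add_text("angle_deg", "35.0")   # your own key/value pair

    img.save("0_tagged.png", pnginfo=meta)

    # Reading it back later, during processing:
    print(Image.open("0_tagged.png").text["angle_deg"])  # -> '35.0'
    ```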
