Focal distance camera

Hi everyone, I am making a science project about the iPhone and I was wondering what the focal length is for its camera.

It's faster to google "focal length iPhone".
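
If you want a number from one of your own photos rather than a spec sheet, the focal length is written into every picture's EXIF data. Below is a rough Python sketch that reads it with Pillow; the filename is a placeholder and _getexif() is technically a private Pillow call, so treat it as illustrative only.

    from PIL import Image
    from PIL.ExifTags import TAGS

    img = Image.open("iphone_photo.jpg")  # placeholder filename
    # Map numeric EXIF tag ids to readable names and pull the two focal length fields.
    exif = {TAGS.get(tag, tag): value for tag, value in (img._getexif() or {}).items()}
    print("FocalLength (mm):", exif.get("FocalLength"))
    print("FocalLengthIn35mmFilm (mm):", exif.get("FocalLengthIn35mmFilm"))

The first value is the physical focal length of the tiny lens; the 35mm-equivalent value is the one usually quoted when comparing with other cameras.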

Similar Messages

  • Focal distance in iSight camera

    Dear Apple Support,
    kindly advise me why there is a difference in focal distance between photo and video in the iSight camera in all versions of the iPhone?

    I noticed the same problem on my iPhone too. It differs when you switch from photo to video!

  • Image Focal Distance

    Hi All, Help please...
    Using Adobe Photoshop CC (CS6), but when I go into the image details it does not show Image Focal Distance. Previously it used to show the distance given in metres using CS4.
    Kind Regards
    Nikki

    Thanks guys. Sorry for the delay, been mega busy!
    I can't really show screenshots as I was using an old computer with CS4 previously (it isn't set up anymore). My new PC is now using Creative Cloud Photoshop. Camera Raw is still used and the settings on the Canon camera are still the same. The only difference is the Canon software I previously used on my old PC; now that I use Bridge I don't need that anymore. But I didn't do any editing in there anyway, I just used it to browse the photos. Although updated, I still do all the same editing processes that I did in CS4.
    Thanks for your help

  • Moving and Tilting the Camera for an optimum Lens Profile Creator image set

    When framing the chart in different areas of the image frame, use a combination of physically moving and tilting the camera to achieve an optimal balance for LCP generation.
    The following two passages are from the AdobeLensProfileCreatorCalibrationChartShootingGuide.pdf
    Page 11/a - Move camera a bit to the left (so that when turning to the right to face the chart, it is about 10 to 30 degrees). Take a series of shots similar to the first three, above, except that the chart is framed at the center-left, top-left, and bottom-left areas of the image.
    v. Move camera to the right, and do the same for the center-right, top-right, and bottom-right areas of the image.
    Page 11/e - When framing the chart in different areas of the image frame, use a combination of physically moving and tilting the camera to achieve an optimal balance for LCP generation.
    i. Only moving the camera to frame, so that the image plane stays perfectly parallel to the chart, can have an adverse effect on LCP calibration data.
    ii. Only tilting the chart may cause depth-of-field issues, where part of the chart may go too far out of focus due to the large angle of the chart in regards to the image plane. This can also have an adverse effect on LCP calibration data.
    Does this mean to center the camera for the center/center shot so that the film plane is parallel with the calibration grid, then only use camera tilt on the tripod (up and down) for the top-center and bottom-center images? Then move the camera/tripod left (so that when turning to the right to face the chart, it is about 10 to 30 degrees) AND also pan the camera left to shoot the left top, center and bottom images?
    I think this combination of instructions has me stuck. Move left, but not so much that the calibration target image is parallel with the film plane, then pan the camera to get the desired framing.
    Would I be correct to say: move left until the angle to the calibration image is 10-30 degrees and then use camera pan on the tripod to get the image framed properly? ... Is the desire to move as little as possible, or pan as little as possible, or to balance moving and panning in some way?
    I am profiling a Nikon D7000 / TAMRON 11-18mm F/4.5-5.6 lens and have a large target, 36"x48", and read that I should shoot at minimum focus distance, 3x minimum, and 5x minimum, which equates to 9.8 inches, 29.4 inches and 49 inches... I use this combo to shoot home interior shots at a focal distance of more like 10-20 feet from the surrounding walls... would I also need shots at 10 feet (120 inches)?
    I really (really) want to get the distance to subject and camera moving/pan combination right.
    Please help. 
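    A bit of trigonometry may take the guesswork out of the 10-30 degree requirement: if you shoot from distance d and want the line of sight to meet the chart at angle theta after you turn back to face it, the sideways move is roughly d*tan(theta), and the pan only restores the framing. A quick sketch using the distances quoted above (my own illustration, not from the Adobe guide):

        import math

        def lateral_offset(distance, angle_deg):
            # Sideways move (same units as distance) so that, after turning back to face
            # the chart, the line of sight meets the chart at about angle_deg.
            return distance * math.tan(math.radians(angle_deg))

        for d_inches in (9.8, 29.4, 49.0):  # the 1x / 3x / 5x minimum-focus distances above
            lo, hi = lateral_offset(d_inches, 10), lateral_offset(d_inches, 30)
            print(f"{d_inches} in: move about {lo:.1f}-{hi:.1f} in to the side, then pan to reframe")

    On that reading, the balance the guide asks for is that the physical move supplies the 10-30 degree angle while the pan (or tilt, for the top/bottom frames) supplies the framing.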

    There are a number of things darks/flats/bias can't remove: curvature, coma, pincushion, chromatic aberration. Some of those (pincushion, i.e. showing stars on the chip in a slightly different position from their true position) may or may not be correctable with this sort of app. It's possible using outside data (comparing the star locations in the image to their true locations in a catalog; that's a common astrophotography measurement and only takes seconds). Correcting curvature (differing focus off axis), chromatic aberration and coma (distorted star shape off axis) seems to be just what this app is about.
    Printing a checkerboard on a nearby hill top sounds involved. Maybe I could get them to plough it in a regular pattern? :-)
    The camera does not capture the data you mention. The images are usually in FITS format which has an enormous amount of data about the focal length, exposure time, amount of atmosphere through which you are looking, etc.
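    For what it's worth, reading those values back out of a FITS header is a couple of lines with astropy; keyword names such as FOCALLEN, EXPTIME and AIRMASS depend on whatever capture software wrote the file, so check your own headers rather than taking these as given:

        from astropy.io import fits

        with fits.open("light_frame.fits") as hdul:  # placeholder filename
            header = hdul[0].header
            # Keyword names vary by capture software; these are common but not guaranteed.
            print("Focal length:", header.get("FOCALLEN"))
            print("Exposure time:", header.get("EXPTIME"))
            print("Airmass:", header.get("AIRMASS"))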
    Calculating from the book value for the lens(es), camera chip, etc. would likely not give a better result than the value obtained before correction. In some designs of scopes, particularly the most common ones, the focal length changes as you focus, for example. The lenses and mirrors, comparing one scope to the next, are not perfectly identical.
    But a dense star field would be something against which you could measure changes in star shape, focus, chromatic aberration and possibly even position. Once you figured out the parameters for your particular scope+corrector lenses+camera setup you could then use that profile thereafter.
    I'd love to chat with that engineer. I had heard you have one person who got a Meade SCT and started astroimaging just to learn the things we need, and most astroimagers do use Photoshop.
    Drew S.

  • Crummy Camera Lens Blur CS6

    The Camera Lens Blur effect that came with AE CS 5.5 and CS 6 doesn't look nearly as good as the old Lens Blur effect that came with CS 5; it's pretty much unusable for my purposes. Foreground elements that should be sharp render sharp, but with a blurry halo around them. The old Lens Blur does not exhibit this behavior, nor does the Frischluft Depth of Field plugin.
    Here's an image showing the depth map and how the three plugins render it:
    CameraLensBlur.png
    The images are composited from 16-bit PNGs rendered out of Cinema 4D. There's a nasty halo visible around the foreground particle in the upper left in the CS6 Camera Lens Blur version. I saved the old Lens Blur effect as a preset out of a CS5 project and am, fortunately, still able to use that approach in CS6 since the "improved" version of the plugin looks pretty crummy (I skipped AE CS5.5). As a test I installed the demo of Frischluft's Lenscare plugins, and after inverting the depth buffer in the plugin I got a good result.
    The "new" Camera Lens Blur is acting like the Compound Blur effect. Foreground elements appear sharp but with a blurry halo since the entire "background" is being blurred, creating a blurred ghost of the foreground element. The old Lens Blur effect was very slow, but it did work.
    Anyone know how to make this new "improved" Camera Lens Blur look like the other plugins?  I've posted a zipped file with the image and depth map in a small AE CS6 project here if anyone here would care to check my settings.  Maybe I'm missing something.
    CameraBlurs folder.zip
    Thank you.
    Shawn Marshall
    Marshall Arts Motion Graphics

    It's been more than 2 years now since Shawn reported this "bug" to you, something I doubt is actually a bug. However, after 2 years, and being on CC 2014.2, this is still ugly when using a depth map. The bokeh is nice, I give you that. But you clearly can't use it with a Dmap. I know it's pretty useless to tell you things that you already know, or to file a bug report. It's clear now that it won't get any update.
    And as someone here said, if you can't/don't wanna fix it, why can't you simply pack the old Lens Blur from CS5 as an obsolete effect? Or let's go crazy, just buy the Frischluft (Dat name...) blur which works and pack it.
    I would love, of course, to hear from you that you're on it and that a new release is coming soon... Anyway.
    Just to remind you, here some pictures of what we are experiencing with your Lens Blur using a depth map.
    Halo around the lens in the middle up to the screw at the top right... and it cannot be fixed in any way with the focal distance. But yes, the bokeh is really nice.
    And here with the Frischluft
    It simply works.
    You could say then, why am I not buying it? I'm considering it a lot now.

  • Why no Motion Blur when animating a camera?

    I think I'm in trouble. I'm finishing a project where a logo (let's call it a box) was built by creating all the sides using solids, then making these solid layers 3D. I then move the CAMERA around the box. That seemed to be a lot easier than trying to move all the 3D layers in front of a camera.
    I cannot get any motion blur effect when the camera moves fast. I've tried numerous settings (and of course have the layers enabled for motion blur). Is it not working because the layers aren't in motion? Can I achieve motion blur by animating a camera around 3D layers?

    Thanks, but I do have motion blur on for the composition and the layers and the render queue. There is no "switch" to turn motion blur on for a camera layer, and of course no 3D switch for a camera because it's inherently 3D.
    I used various motion blur (advanced) composition settings, as well as adjusting the camera's aperture, f-stop, focal length, focal distance, etc. Can't seem to get any motion blur effect. The camera occasionally whip-pans, slows down, speeds up - so it's certainly moving fast enough.
    Just wondering if I should have created a Null layer, parented it to the camera and animated the Null, because that does have switches for 3D and motion blur.
    Why wouldn't motion blur show when animating a camera around an object?

  • Driving multiple digital video cameras...

    I want to set up a kitchen as a studio for a cooking show. I will need about a dozen fixed angle/focal length cameras, half a dozen monitors, and half a dozen or so audio sources. I want to capture all the digital input in a synchronised manner. Can I do this using off the shelf components? And if so, what kind of gear do I need to put this together around a high-end Mac?

    The only problem with that many cameras trying to go into a MacPro at one time is that Final Cut will only recognize one camera at a time. There is no way to record all twelve cameras at the same time using Final Cut, unless you record them to tape or some other device first. However, if you get a video switcher (especially one that supports FireWire output), you may be able to connect all the cameras to the switcher and send that to the MacPro. The only downside to that is you will be switching the cameras on the fly, and you won't be able to adjust your mistakes without either cutting pieces out or reshooting parts of the show. Also, a video switcher is completely different from a FireWire switcher. A video switcher relies on synced video sources (some do the synchronization themselves) and can switch from one camera to the next without rolling the video or losing the signal in any way. A FireWire switcher simply switches the FireWire sources, and there will be an unavoidable break in the signal. A video switcher that handles 12 inputs will be expensive. The ones I've looked at that work well are around $30,000, but they can do some pretty cool stuff.
    The word "dockable" means that the camera can be configured for studio CCU connection or can be refitted to hold a tape or hard drive recording device. If you have 12 cameras that can each record onto MiniDV or DVC Pro, then you will not need to purchase extra equipment. However, capturing 12 tapes of 30 minutes each will be more than 6 hours of just capturing video. After that, just sync the clips and make multiclips out of them. If you connect one or two mics to some of the cameras, you can then use Final Cut or Soundtrack to do the final audio mixing. No extra equipment would be required. All it would take is a bit of work.
    However, if the cameras do not have recording units, like the lipstick cameras, you will need a switcher that can handle all of those inputs, or recording devices for each camera. Honestly, I wouldn't recommend those lipstick cameras, as they are pricey and they require a CCU (IK-CU51A), a lens (JK-L15M2) and a cable (EXC-4302 or EXC-4330) in order to function. You're looking at $1,714.80 just for one camera setup, and that's before the switcher. Plus, after you purchase the switcher, if it doesn't output to DV via FireWire, you may need to get a capture card or device for the MacPro. You could buy an HDR-HC3E Sony HDV camera, have the same specs, and save around $600 per camera. With that savings, you can buy a Mackie 1402-VLZPRO audio mixer, some sort of digital recording device, and a clapboard, and then you should be just fine. Granted, the HC3E isn't the greatest camera in the world, but it should give you the same quality as the Toshiba lipstick camera that you are looking at without the headache of finding a way to record the image. If I were you, I wouldn't go the "surveillance camera" route; I'd go with something more prosumer or consumer level. You'll get the same quality at a lower price. Make sure that the autofocus can be shut off, and that you can manually white balance (common in most cameras nowadays). This is just something to consider.
    If you want, get my e-mail address from my profile and send me an e-mail. I've set up many studio systems and field systems in the past, and I currently do this presently as well. I'm sure I can help you find a way to accomplish what you intend to do.

  • Can 5 axis image stabilization be engaged with non system lenses for A7II and A7rII?

    Many people use, or will use, their excellent non-system manual focus macro lenses on the A7II and undoubtedly on the soon to be released A7RII. Stabilization in the X/Y planes is very, very important for macro. Sony on the A7II only stabilizes the X/Y planes for lenses that report the focus distance, as best I can tell. Perhaps the lens compensation app or another app could be modified so this data can be manually inputted, or the firmware could even be changed to allow this and thus stabilize all 5 axes. Focal length will need to be inputted as well, of course. Perhaps there is another way around this that I have missed. Thanks for any comments.

    Thank you for taking the time to answer. I have done more digging in reviews and recalled Sony's announcements, which implied only 3-axis IS is available without a lens that reports focusing distance. Not my favorite source, but a review from PCMag: "If you're using a non-native lens, or even a native manual focus lens like the Zeiss Loxia 2/50, it's stabilized along three axes—yaw, pitch, and roll. There's a technical reason for that. In order to compensate along the x and y axes, the camera needs to know the distance to the subject, which requires electronic communication of the focal distance from the lens to the body. To compensate for the other three axes, the only data that needs to be transmitted is the focal length of the lens. Even if you're using a purely mechanical Leica lens it can be stabilized—you're able to manually enter the focal length via a menu. The only real downside to this is that the A7 II does not add that focal length to the recorded EXIF data, so if you like to track which lens a shot is captured with, you'll need to take notes." http://www.pcmag.com/article2/0,2817,2475439,00.asp A friend informed me that on his A7II he must mount a non-OSS Sony lens first to have any image stabilization with any of his totally manual lenses; it seems the camera saves the settings of the last lens attached. Turning the camera off doesn't reset this (very odd); it seems to be a bug in the firmware. I hope the A7RII is different. This whole issue is quite confusing. Appreciate any additional clarification.
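    The PCMag explanation matches the underlying optics: blur from pitch/yaw shake scales with focal length alone, while blur from sideways translation scales with the magnification, which requires the subject distance. A back-of-the-envelope sketch with made-up numbers (not Sony's algorithm) of why the missing x/y axes hurt most at macro distances:

        import math

        f = 0.090                  # focal length in metres (a 90mm macro lens, hypothetical)
        s = 0.40                   # subject distance in metres (hypothetical close-up)
        theta = math.radians(0.1)  # 0.1 degree of pitch/yaw hand shake
        t = 0.001                  # 1 mm of sideways hand movement

        m = f / (s - f)            # thin-lens magnification at that distance
        print("rotation blur on sensor:    %.3f mm" % (f * theta * 1000))  # needs only f
        print("translation blur on sensor: %.3f mm" % (m * t * 1000))      # needs s as well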

  • Allow 5 axis image stabilization for non system lenses in A7(r)II series

    Many people use, or will use, their excellent non-system manual focus macro lenses on the A7II and undoubtedly on the soon to be released A7RII. Stabilization in the X/Y planes is very, very important for macro. Sony on the A7II only stabilizes the X/Y planes for lenses that report the focus distance. Perhaps the lens compensation app can be modified so this data can be manually inputted, or the firmware could even be changed to allow this and thus stabilize all 5 axes. Focal length will need to be inputted as well, of course. If the new field is left blank, no action is taken so as not to affect users not interested in this feature. Olympus somehow allows all 5 axes to be stabilized just by putting in the focal length, but it is not clear how they pull that off. Thank you for considering this.

  • Nikon Nikkor 18-55 (non-VR), 55-200 VR

    I shot images for and made two profiles, available here (and also submitted to Adobe).
    NIKON 18.0-55.0 mm f/3.5-5.6
    NIKON 55.0-200.0 mm f/4.0-5.6
    Both were shot under similar conditions:
    indirect evening sunlight (even and relatively bright, but possibly changing in intensity)
    8.5"x11" print on plain paper (crisp and high contrast, but subject to a little warping)
    from a tripod (only pivoting the camera during each iteration, not panning)
    one focal distance per focal length, all at f11 and ISO 100
    nine images per shooting iteration (except one of the 18-55's iterations, for which I forgot one image)
    three iterations for the 18-55 (18, 35, 55); five for the 55-200 (55, 70, 102, 135, 200)
    RAW images (NEF, converted to DNG for the profile creator)
    default settings except for entering dimensions for the profile creator
    using a Nikon D40x
    I posted a few example images as well, in case they are of interest. (If anyone is interested in the RAW images, I'd be glad to share those as well.)

    I have moved the profiles (retiring the server which I had them on before): http://www.markfickett.com/lensprofiles/.

  • EXIF data - Why is so much missing in LRx but present in RAW / DNG / CR2 images

    I've seen the limitations in the EXIF data as shown in LR. I've seen the comments that it would be SO useful to sort / filter by all the EXIF data available in LR.
    However, LR shows only a small proportion of the information that is available in the metadata. The missing EXIF data is of use to users (not every day, but it is there), and with the aforementioned improvement of searching / sorting / grouping / stacking, I'd like this data to be accessible.
    I was looking to find all the correctly exposed images in sets of 3 auto-bracketed images (-2, 0, +2) from today's shoot, but this info is not listed in LR.
    So I looked in PhotoGrok, and it told me that the image I was looking at had AEBBracketValue -2, i.e. 2 stops underexposed in an auto-bracket set.
    It also told me my camera was 37º when I was shooting in Cape Town this morning. It most certainly was not that warm outside.
    My reason for finding the correctly exposed images in the sets was to see if they had sufficient exposure data to be used as one image, rather than combining the 3 bracketed images using fusion / enBlend software to rescue the shadow detail and highlight detail in my shoot.
    If you do play with PhotoGrok, you see more info than may be / is / could possibly be useful. But it does list a great deal.
    I do get the basics
    exposure
    aperture
    focal length
    35mm equivalent
    focal distance
    hyperfocal distance
    colour balance
    AEBBracketValue
    and the list goes on.
    More importantly, this data has been extracted by LR from Canon 5DII CR2 files to DNG, so LR can see all this info.
    Anyway, that is my request.
    There is limited EXIF data displayed in LR, and there are requests for sorting / searching / listing / stacking by more of the metadata that LR accesses.
    I would like to see much more of this data available to be sorted / searched / stacked, so I can find, amongst all the EXIF data, the AEBBracketValue = 0 (i.e. correctly exposed) images.
    Rob Cole  - Does your anyfilter access more of this info?
    Hillrdg - you asked me to give my motivations for making suggestions, so I've taken your advice on board and tried to substantiate my argument.
    Yes, there is a workaround: sort all the photos by capture time or file number, arrange them into 3 images in a grid view and just select the correct column, but this is slow. More importantly, this request is about vastly improving access to EXIF data and about other uses of the vast amount of EXIF data available in images.
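    Until LR exposes this, another workaround outside LR is to have exiftool dump the tag as JSON and filter on it. A rough Python sketch, assuming exiftool is installed and that your files expose the tag under the name AEBBracketValue that PhotoGrok showed (the folder path is a placeholder):

        import json
        import subprocess

        out = subprocess.run(
            ["exiftool", "-j", "-AEBBracketValue", "/path/to/todays/shoot"],  # placeholder folder
            capture_output=True, text=True, check=True,
        ).stdout

        # Keep the frames whose bracket value is 0, i.e. the middle exposure of each set.
        correctly_exposed = [
            item["SourceFile"]
            for item in json.loads(out)
            if str(item.get("AEBBracketValue", "")) in ("0", "+0")
        ]
        print("\n".join(correctly_exposed))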
    hamish NIVEN Photography

    EricInsalaco wrote:
    Now I was under the impression Preview/Versions only take up a miniscule space on the HD since they don't FULLY copy the master file, they more create a bridge between the two that amounts to a few MB. Is it possible that it's taking up a much larger amount than intended here (a la the same basic mail attachment problem, but with Preview)? If so, what directory would that be located in?
    Apple's documentation on Versions and what is kept in the Versions database is misleading, in my experience, though others will argue to the contrary.
    In any case, all the Versions files and the database that keeps track of them are in an invisible folder in the root directory called
    .DocumentRevisions-V100
    The easiest way to check its size is to show invisible files by pasting the following command in Terminal:
    defaults write com.apple.finder AppleShowAllFiles TRUE; killall Finder
    press 'return' on the keyboard.
    Now navigate to the hard disk in Finder and you should see .DocumentRevisions-V100 and its size.
    To turn off invisible files, use the same command as above but replace TRUE with FALSE.
    Message was edited by: softwater

  • Focus issues

    Have a PowerShot SX160 IS that I use during home exterior inspections. Recently when I try and take pics of the roof shingles, the camera won't fully focus and the pics are blurry. Just started doing this. Hate to have to buy a different camera. Any tips?

    Several questions for you:
    Are you working in very cold weather? My SX150 has a problem focusing if I try to use it too quickly after taking it out of my pocket and the lens "fogs" up, preventing a good focus. I usually need to wait a few minutes for mine to adapt to the cold.
    Are you zooming in to get greater detail? You may need to either move back (without falling off the roof) or use less zoom. I found that I need to be aware of the focal distance more when zoomed.
    Are you using a small or large aperture (manual mode)? Try setting the aperture to f8.0 if you are in manual mode.
    Is the focus assist beam turned on? Check the menu to see if it's off.
    The contrast idea is the most likely culprit. I use an index card or piece of paper to help with the focusing. The SX160 uses contrast detection so if there is not much color variation this helps a lot to achieve focus. Just place the index card on the roof in the area you want to take the shot. (I'm assuming you're not taking all the shots from a ladder only).
    As a last resort you could try using the manual focus feature of the camera, but it is a real pain on that camera. 
    Steve M.

  • Is this camcorder compatible w/Mac?

    These are the specs for the Sony HRC42:
    Specifications
    Imaging device: 1/5.5-in. 1070K Gross Pixel CCD
    Video actual: 690K pixels
    Still actual: 1000K pixels
    F: 1.8-2.5
    Focal distance: 3-350mm (Telemacro)
    35mm Conversion: 48-576mm (4:3)-Camera; 46-628.5mm (16:9)-Memory; 40-480mm (4:3)-Camera
    Filter diameter: 25mm
    Zoom: 12x (optical); 480x (digital)
    Lens: Carl Zeiss Vario-Tessar
    Focusing: Full Range Auto/Manual (Touch Panel)
    Image stabilization: Yes, Super SteadyShot
    Minimum illumination: 7 Lux (0 Lux with NightShot Plus Infrared System)
    Low light capability: Super NightShot Plus
    Shutter speed: 1/2-1/4000 (AE Mode)
    Viewfinder: Color, 123K pixels
    LCD: 2.7-in. Color (123K)
    Accessory shoe: Yes, active interface
    Video input/output: Yes/Yes (Special)
    Audio input/output: Yes/Yes (Stereo, Special)
    Interface Connector for Handycam Station: Yes
    Computer interface: USB (on Handycam Station)
    LANC (accessory) terminal: Yes
    Memory card compatibility: Memory Stick PRO Duo
    White balance: Auto/Outdoor/Indoor/One-push
    Exposure: Yes, Touch Panel (24 steps)
    Power consumption (VF/LCD/VF+LCD): 2.6W/3.0W/3.2W
    Dimensions (WxHxD): 2.25 x 3.63 x 4.5 in./54.7 x 90.0 x 111.7 mm
    Weight: 14.5 oz./410 g without tape and battery

    Or did you actually mean "Sony DCR-HC42"?
    Then this page:
    http://audiovisual.kelkoo.co.uk/b/a/ps_12468109/123501.html
    says "Connectivity FireWire , USB , Analogue Input"
    and "Media type Mini DV"
    So I would say: Yes a Sony DCR-HC42 should be compatible, but try before you buy.

  • Blur layering problem

    Hello,
    I've been doing some work on a project very similar to Andrew Kramer's "54. Advanced Camera Tips" from his Video Copilot site. The problem shows up after I render the project. I use the camera's depth of field to make some nice looks for the Trapcode Particular particles, and use some simple 3D text to be animated. After I render, instead of showing only the particles blurred out, the text and video and everything I put there shows up blurry. I know it's because of the camera, but why? Shouldn't the text, or the videos, be unaffected by the depth of field? Please help! This is what happens.

    The text should definitely be affected by the blur. It's a 3d layer. If you follow the tutorial, I think he shows you how to keep the text from being blurred. I don't like it though. It's one of a few times when I disagree with Andrew Kramer. I think that having everything reacting to the camera's depth of field helps to sell the effect that what you're looking at is 3d. Just animate the focal distance of the camera to bring the text into focus. If the background isn't blurred enough when you do that, work with the camera settings to get a shallower depth of field.
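    As a rule of thumb for which settings to push, the real-lens depth-of-field approximation (AE's virtual camera is not an exact match, but it behaves similarly) is

        \mathrm{DOF} \approx \frac{2 N c\, s^{2}}{f^{2}}

    where N is the f-stop, c the circle of confusion, s the focus distance and f the focal length; a wider aperture (smaller N), a longer focal length or a closer focus distance all give a shallower depth of field.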

  • LPC - Distortion, CA, Vignetting factors

    For the lens profile creation tool, there are three variables that can be altered (versus static):
    - Focal Length
    - Aperture
    - Focal Distance
    So that I can understand how to shoot my image sets -- what types of image sets (varying focal length, varying aperture, varying focal distance) are important to be included for creation of a decent profile for each of the following corrections?
    For instance, I understand (hopefully correctly) that for geometric distortion, varying the focal length is important, but aperture is not, and focal distance... well, I'm not sure.
    - Geometric Distortion:
    - Chromatic Aberration:
    - Vignetting:
    Thanks in advance!
    Avery

    Thanks, Simon.
    So, if I was able to shoot some image sets with a zoom lens on a Sony NEX-5N (before I sold it), using different focal lengths at F8.0 -- I should create a profile for geometric distortion and CA, omitting vignetting? (As I understand it, focal distance isn't taken into account for this camera.)
