Camera distance

Hello there,
I am working on a project for a class. Part of what I would like to do is have a webcam find a target and tell me how far away it is. What makes this more complicated, unfortunately, is that A) I would like the distance measurement to be largely automated (I understand the clamp tool is the way to go for measuring distance from the camera, but I am not sure how to automate it), and B) the target will be between 2 and 5 meters from the camera. I have looked through nearly all of the example VIs but don't see any that work at distances greater than perhaps a foot.
The target I will be finding is up to me, so obviously I want to pick the easiest thing, which I would imagine to be a concentric-ring "bulls-eye" pattern about a foot square. At a distance of 2-5 meters, I have difficulty picking this object out from the background by any means, let alone measuring how far away it is.
Thank you for your time in reading all of this and thank you in advance for your help,
-Kevin

Here is a pattern I created a while back that I would consider searching for.  It provides rotation information as well as position information.  If you just need position, you could just use a 2x2 checkerboard pattern.  Deciding the size of the template is simple - make sure you can see the whole thing at the closest distance.  Make sure it is large enough to see the pattern at the far distance.  It is probably easier to use the smallest size that will work at the far distance.  Something that fits on a standard sheet of 8.5x11 paper is probably easiest - I would probably make the squares 3 or 4 inches square.
The neat thing about these patterns is that you can use pattern matching, and it should work at any scale.  If you create the pattern template at the largest distance, and don't go all the way to the edges, it should work for any pattern that is larger (closer).  Geometric Pattern matching might work well.  Once you find it, measure the pattern by perhaps locating the edges of the black square.  The distance from the camera should be proportional to the size of the target.  You can verify it easily.
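To make the "proportional" relationship concrete, here is a minimal sketch of the calibrate-once idea in plain Java (not LabVIEW - the numbers and names are made up for illustration; the real values would come from one tape-measure reading and whatever your pattern match reports):

public class TargetDistance {
    public static void main(String[] args) {
        // One-time calibration: measure the target's distance with a tape and
        // record how wide the black square appears in pixels at that distance.
        double refDistanceM = 2.0;    // assumed reference distance
        double refWidthPx   = 310.0;  // assumed apparent width at 2 m
        double k = refDistanceM * refWidthPx;   // pinhole model: distance x size = constant

        // At run time, the pattern match reports the current apparent width.
        double measuredWidthPx = 155.0;         // assumed measurement
        double distanceM = k / measuredWidthPx; // = 4.0 m for these numbers
        System.out.printf("Estimated distance: %.2f m%n", distanceM);
    }
}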
Bruce
Bruce Ammons
Ammons Engineering

Similar Messages

  • 3D Picture Control Orthographic View zoom - Camera Set up?

    I am trying to display two 3D picture controls. One displays some objects in 3D (Auto Projection: Perspective, Camera Controller: Spherical). This 3D image works great and I am having no problems with it. The other picture control is meant to simulate a 2D image of the 3D object, essentially a plan view. To do this I made a 3D picture control with an Orthographic view. I set the camera to view from above (z-axis) and set the target to be the positive y-axis.
    I have attached an example VI (LabVIEW 2013 SP1) that creates some simple 3D objects and displays them in Perspective and Orthographic using the same camera position. On initial start, the Orthographic image is zoomed in heavily. If I manually zoom it (hold the Shift button and mouse up or down), the Orthographic image will set itself to the correct camera position. This is essentially the solution to my problem, but I can't assume the user will have the ability to manually zoom the image, so I need to set it programmatically.
    If you manually adjust the view on either screen (rotate and zoom), then click the Set Camera button, both images will give a plan view, but only the Perspective image will be at the correct camera zoom or distance. The Orthographic one stays at the last zoom distance.
    Is there any way to set the camera distance for Orthographic projection programmatically? I have tried starting with the 3D picture control as Perspective, setting the camera distance and then changing it to Orthographic, but this still results in a zoomed Orthographic image. From the reading I have done, I think I am trying to do the impossible. Does anyone have another suggestion to get the same result? I have previously used the 2D picture functions to generate the image but found that method inefficient.
    Any suggestions would be great.
    Thanks
    Relec
    Attachments:
    3D_projection_Test.vi 38 KB

    Hi Relec,
    thank you (and Kudos) for that inspiring demo!
    It took me a while to work it out, but I might have found something that could help us with the orthographic projection.
    Some time back someone posted a way to collect the current camera data. Sorry, I lost the source. But here is my version as a Get3DCameraPosition.vi using the ModelViewMatrix.
    Unfortunately this doesn't return the current Target, so a bit of guessing in the right direction is necessary.
    I'm impressed you found a way to make the Shift-drag zoom movement possible in the orthographic window.
    It seems a whole thread at http://forums.ni.com/t5/LabVIEW/3D-Picture-Control-Doesn-t-Zoom-in-Orthographic-Mode/td-p/1544460/page/2 wasn't able to figure that out.
    In the meantime I've found a way of using the ProjectionMatrix to create a zoom effect, which I built into your demo using the mouse wheel.
    All in all I've tried to translate camera movements from the perspective window into the ortho one - assuming this is what you were after?!
    Again, the missing target information doesn't make it quite right.
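    For anyone following the ProjectionMatrix route: the reason camera distance does nothing in an orthographic view is that the projection has no perspective term, so the only way to "zoom" is to scale the X/Y terms of the projection matrix itself. A sketch of the maths in plain Java with vecmath (illustrative only - this is the standard symmetric orthographic projection layout, not anything LabVIEW-specific):

    import javax.vecmath.Matrix4d;

    public class OrthoZoom {
        // Symmetric orthographic projection; raising "zoom" scales up the
        // image, which is the only way to zoom an orthographic view
        // (moving the camera closer or further has no visible effect).
        static Matrix4d ortho(double halfWidth, double halfHeight,
                              double near, double far, double zoom) {
            Matrix4d m = new Matrix4d();
            m.setIdentity();
            m.m00 = zoom / halfWidth;             // scale X
            m.m11 = zoom / halfHeight;            // scale Y
            m.m22 = -2.0 / (far - near);          // map depth into clip space
            m.m23 = -(far + near) / (far - near);
            return m;
        }
    }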
    Regs
      Jannis

  • Intuitive mouse camera rotations

    Hello,
    I am fairly new to Java3D so please forgive this post if it's already been asked/answered.
    I am trying to get an intuitive mouse movement paradigm working in my Java3D application. I want to be able to move the scene graph's camera around an object centered at the origin of the scene graph. So essentially the object does not get moved, only the camera. The trick is that we want to move the camera around in a spherical-coordinates fashion, that is, pan the camera left and right, tilt the camera up and down, etc. The camera distance from the center of the universe will remain the same, but the camera will be "orbiting" around the object.
    I have successfully gotten these transformations done simply by translating the "pan" and "tilt" degrees (which is what the user changes to move the camera about), and the application works just fine - except when the camera gets to one of the "poles" of the sphere the camera is moving on. Down by the poles, the camera movement is very unintuitive.
    I was wondering if anyone has had success with a completely intuitive mouse movement paradigm (a la Selman's Tornado mouse listeners) but for moving a camera about an object. It seems one would need to use a great-circle calculation instead of just converting angles into rectangular coordinates using the spherical coordinate conversion equations.
    If anyone can point me toward some tuts or ideas it would be most appreciated. Thanks in advance!

    //Below is a working example of camera manipulation.
    //This may be the sort of idea you are after, and it
    //gives the ability to simply add Walk, Pan and Zoom methods.
    //@author H.JONES
    import java.awt.BorderLayout;
    import java.awt.event.*;
    import javax.media.j3d.*;
    import javax.vecmath.*;
    import com.sun.j3d.utils.universe.*;
    import java.awt.GraphicsConfiguration;
    import com.sun.j3d.utils.applet.MainFrame;
    import com.sun.j3d.utils.geometry.*;

    public class HJCameraTest extends javax.swing.JApplet implements AdjustmentListener {

        private SimpleUniverse su = null;
        public float m_x = 0;
        public float m_y = 0;
        public float m_z = 0;
        public float m_roll = 0;
        public float m_pitch = 0;
        public float m_yaw = 0;
        public float m_focal = 20;   // orbit radius: camera distance from the origin
        private javax.swing.JScrollBar scrollYaw;
        private javax.swing.JScrollBar scrollPitch;

        public HJCameraTest() {
            this.getContentPane().setLayout(new BorderLayout());
            GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
            Canvas3D canvas = new Canvas3D(config);
            this.getContentPane().add("Center", canvas);
            su = new SimpleUniverse(canvas);
            su.getViewer().getView().setBackClipDistance(100);
            su.addBranchGraph(createSceneGraph());
            initComponents();
        }

        public BranchGroup createSceneGraph() {
            BranchGroup objRoot = new BranchGroup();
            // 10 x 10 grid of colour cubes centred on the origin
            for (int i = 0; i < 10; i++) {
                for (int j = 0; j < 10; j++) {
                    TransformGroup tg = new TransformGroup();
                    tg.setTransform(setTransform(i - 4.5f, j - 4.5f, 0, 0, 0, 0));
                    ColorCube cube = new ColorCube(0.45);
                    tg.addChild(cube);
                    objRoot.addChild(tg);
                }
            }
            return objRoot;
        }

        // Method to rotate then position
        public Transform3D setTransform(float x, float y, float z, float pitch, float yaw, float roll) {
            Transform3D ret = new Transform3D();   // return transform (starts as identity)
            Transform3D xrot = new Transform3D();
            Transform3D yrot = new Transform3D();
            Transform3D zrot = new Transform3D();
            Transform3D pos = new Transform3D();
            // roll/pitch/yaw rotations, converting degrees to radians
            yrot.rotY(roll / 360 * (2 * Math.PI));
            xrot.rotX(pitch / 360 * (2 * Math.PI));
            zrot.rotZ(yaw / 360 * (2 * Math.PI));
            // multiply the return value by the rotations
            // (order of multiplication is important and depends on the coordinate system used)
            ret.mul(zrot);
            ret.mul(xrot);
            ret.mul(yrot);
            // multiply the return value by the translation
            pos.setTranslation(new Vector3f(x, y, z));
            ret.mul(pos);
            return ret;
        }

        public void CameraByTransform(float x, float y, float z, float yaw, float pitch, float roll) {
            m_x += x;
            m_y += y;
            m_z += z;
            m_roll += roll;
            m_pitch += pitch;
            m_yaw += yaw;
            // rot1 orients the camera (note: as in the original posting, m_yaw is passed in
            // setTransform's pitch slot and m_pitch in its yaw slot); rot2 then backs the
            // camera off along its view axis so it orbits at a constant distance m_focal
            Transform3D rot1 = setTransform(m_x, m_y, m_z, m_yaw, m_pitch, m_roll);
            Transform3D rot2 = setTransform(0, 0, m_focal, 0, 0, 0);
            rot1.mul(rot2);
            su.getViewingPlatform().getViewPlatformTransform().setTransform(rot1);
        }

        public void adjustmentValueChanged(AdjustmentEvent e) {
            m_yaw = -scrollYaw.getValue() + 90;
            m_pitch = -scrollPitch.getValue();
            CameraByTransform(0, 0, 0, 0, 0, 0);
        }

        public static void main(String[] args) {
            new MainFrame(new HJCameraTest(), 400, 400);
        }

        private void initComponents() {
            scrollYaw = new javax.swing.JScrollBar();
            scrollPitch = new javax.swing.JScrollBar();
            scrollYaw.setMaximum(90);
            scrollYaw.setMinimum(-90);
            scrollYaw.setVisibleAmount(0);
            scrollYaw.addAdjustmentListener(this);
            this.getContentPane().add(scrollYaw, java.awt.BorderLayout.EAST);
            scrollPitch.setMaximum(360);
            scrollPitch.setOrientation(javax.swing.JScrollBar.HORIZONTAL);
            scrollPitch.setVisibleAmount(0);
            scrollPitch.addAdjustmentListener(this);
            this.getContentPane().add(scrollPitch, java.awt.BorderLayout.SOUTH);
            scrollPitch.setValue(45);
            scrollYaw.setValue(45);
        }
    }
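    One caveat on the example above: it drives the camera with Euler angles (rotZ/rotX/rotY), which is exactly what produces the unintuitive behaviour near the poles that the original question describes. The usual fix is to interpolate orientations as quaternions (slerp) so the camera follows a great circle. A minimal sketch using the vecmath classes Java3D already ships with (the class and method names here are illustrative, not from any library):

    import javax.media.j3d.Transform3D;
    import javax.vecmath.*;

    public class OrbitSlerp {
        // Slerp between two key orientations, then back the camera off along
        // its local Z axis; the orbit radius stays constant and there is no
        // Euler-angle discontinuity at the poles.
        static Transform3D orbit(Quat4f from, Quat4f to, float t, float radius) {
            Quat4f q = new Quat4f();
            q.interpolate(from, to, t);       // spherical linear interpolation
            Transform3D view = new Transform3D();
            view.set(q);                      // rotation part from the quaternion
            Transform3D back = new Transform3D();
            back.setTranslation(new Vector3f(0f, 0f, radius));
            view.mul(back);                   // rotate first, then pull back
            return view;
        }
    }

    Building the key orientations with Quat4f.set(new AxisAngle4f(...)) and stepping t from 0 to 1 gives a great-circle camera move between any two viewpoints.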

  • Transparency and color falloff with depth effect?

    Videocopilot's 3D Falloff effect isn't compatible beyond CS4. How would I create this effect in CS5 or above?
    http://www.videocopilot.net/tutorials/3d_falloff/

    It's an easy expression involving a distance calculation with length(point A, point B) and then feeding the result into linear() on whatever property you need it; in this case obviously camera distance tied to a DOF filter or whatever... So in this case it would look like
    camPos = thisComp.layer("Camera").transform.position;
    layPos = thisComp.layer("XYZ").transform.position;
    camDist = length(camPos, layPos);
    linear(camDist, minDist, maxDist, minBlur, maxBlur)
    Just fill in values for the min/max stuff...
    Mylenium

  • I know.. not another film look question, but here is goes...

    Hi all i hope you can help me here.
    I'm used to doing safety, corporate and induction videos, so I've never been interested in getting the "film look". But now I've got a music video to do and I really don't want it to look interlaced.
    Now I did some tests on my HVX200 in 25p, and read up on it a little, which brought me to my question.
    I know that when shooting with real film cameras AND HD 25p cameras there is the rule not to pan very fast because it becomes jittery. But when I did my tests outside with my HVX200 in 25p PAL on a tripod (no panning), my footage seemed a whole lot more jittery than film movies I see on TV. It's so jittery I can't use the footage! I'm too scared to do the music video in 25p.
    So now I want to know, am I doing something wrong? Do the shutter speed or the way you capture in FCP have anything to do with the jitter? Or is 25p just that jittery?
    Another question: does anybody have some tips to get a better look and feel for my music video? Plug-ins? Or whatever?
    Thanks so much for your replies.
    Andre Meyer

    Hi Meyer,
    The reason I said your video still looks jittery is because you have not graded or used any additional adaptors on your cam. Turning on a progressive shutter alone will just make your VIDEO look like 24/25p video! The movement is greatly exaggerated! The point I was trying to make is that once you have graded your footage, and if you are lucky enough to use a 35mm adaptor, much of that jitteriness (if that's a word?!) will not be so noticeable. Remember, if it goes away completely, you are back to square one almost! It's about how acceptable a result you can get with the three components I was talking about in my previous post.
    I don't own an HVX200, but I have a V1E (not as good as the HVX200) and I find the Cinema Gamma not that great as a default... I have to tweak it quite a lot to achieve the all-important contrast and saturated colours. As for your HVX, just have a play until you get a look you like. The Cinema Gammas on the cheaper Canons are dreadful and make the cams lose all their contrast! I guess this varies a lot with the mid-priced prosumer cams?!
    You don't need to own a 5D MkII (an ergonomic disaster in my opinion unless you have the full rail kit!) to have that 35mm DOF control. There are quite a few 35mm adaptors out there that have attachments for many types of video cameras. There are users in this forum, I think, who use these types of adaptors with HVX200s. However, such equipment is not cheap... and of course good-quality stills lenses are not exactly cheap either!
    So to summarise:
    1. Use a progressive frame rate, 24/25p.
    2. Colour grade to film... Best bet is to use Apple Color - a really good colour corrector with various presets that you can tweak and that work pretty well for the film look. Of course you will need to experiment to get the effect you want. Alternatively, you can use Magic Bullet Looks, which can be used as a plug-in to FCP and is like Apple Color on steroids! It doesn't offer the same amount of control as Color, but it has some really interesting presets that can be easily tweaked... and the good thing is, this is all done without leaving FCP, so it's quick.
    3. If you want to go the whole way and go for a 35mm adaptor, then do your homework. Do you currently own a selection of still lenses? Canon L series or Nikon Pro series? Of course you don't need the pro lenses, but in this setup the quality of your stills lens becomes really critical! Take a look on YouTube and see the effect these adaptors give; they can be quite amazing! Be prepared to spend over £2500 for the adaptor alone!
    4. And of course, remember technique when shooting like this... these adaptors do stop down your camera, so you will probably have less light to play with. When you are tracking etc., DOF is really critical and can be difficult to master at first.
    I don't have any 35mm adaptor; I try to achieve the film look by using steps 1 and 2... The result, well... it's OK, but it often means I have to mess about with camera distances to achieve a decent DOF. Swings and roundabouts!

  • Cannot get repeatable stereo calibration

    Hello all. I am struggling to get a repeatable stereo calibration. Hopefully someone can give me some pointers. A little bit about my setup:
    I have a pair of AVT Manta GigE cameras (1292 x 964) paired with Tamron 23FM25SP lenses. The cameras are mounted on a rigid (12mm thick) aluminium plate and are currently set to be around 895mm apart. The cameras are toed in slightly so that the centres of the images intersect around 4 metres from the cameras. The cameras are securely mounted via adapter plates and bolts. They cannot move.
    I have a calibration grid along the lines of the NI example grid. Mine is 28 x 20 black dots spaced around 13mm apart (centre to centre), with each dot being around 5mm in diameter. I am aware of the NI guidelines on suitable calibration grids, and mine seems to be well within the recommended bounds. The grid was formed by laser printing onto A3 paper and then using spray adhesive to fix it to a rigid carbon fibre panel. It is flat and doesn't deform in use.
    So, here is my problem: when I use the calibration grid to calibrate the cameras I sometimes get a good calibration and sometimes not. When I get a good calibration and attempt to repeat exactly the same process, I get a different result. What do I mean by a good calibration? When I go on to use the stereo calibration in my system, which tracks a circular feature in 3D space, I get good, accurate measurements (well sub-mm in the cross-camera axes and ~1mm depth resolution, over a range of 600mm in each axis centred around 3000mm from the cameras). The centres of the circular features in each image lie on the same horizontal image line, as expected in the rectified images for a well-calibrated camera pair. When I get this 'good' calibration, the distance between the cameras as returned by the 'IMAQ Get Binocular Stereo Calibration Info 2' VI (the magnitude of the translation vector) is around the correct distance of 895mm. However, when I perform the calibration lots of times I get quite a spread of camera separations (up to 20mm either side of correct). When I get a significant error in the camera separation, the accuracy of the system degrades dramatically and the centres of the circular features lie on progressively further-apart horizontal lines (there's one distance from the cameras where they're on the same line, and they move apart either side of that distance).
    I have gathered a set of 10 images of the calibration target and set up a VI to use a subset of the images for the calibration process and iterate through permutations to investigate the repeatability. I get a similar spread of results for inter-camera distance.
    Does anyone have a feel for whether what I'm trying to do is sensible / achievable? Any tips for repeatable calibration? For instance, should the calibration grid be at a constant distance from the cameras when it is presented at the different angles, or should a range of distances be used? If it should be at the same distance, how accurately should this distance be maintained?
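    As a rough sanity check (plain Java rather than LabVIEW, and treating the Manta's ~3.75 µm pixel pitch and a quarter-pixel matching error as assumptions), the usual stereo depth-error relation dz ≈ z² · Δd / (f_px · B) predicts a figure consistent with the ~1mm depth resolution seen when the calibration is good:

    public class StereoDepthCheck {
        public static void main(String[] args) {
            double fPx = 0.025 / 3.75e-6;  // 25 mm lens over assumed 3.75 um pixels ~ 6667 px
            double B   = 0.895;            // camera baseline in metres
            double z   = 3.0;              // working distance in metres
            double dD  = 0.25;             // assumed disparity (matching) error in pixels
            double dz  = z * z * dD / (fPx * B);  // ~ 0.0004 m, i.e. ~0.4 mm
            System.out.printf("Expected depth error: %.2f mm%n", dz * 1000);
        }
    }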
    Thanks, Chris
    Regards,
    Chris Vann
    Certified LabVIEW Developer

    Hi Christophe. Thanks for taking an interest. I am pretty sure that structured light is not relevant to the calibration stage being discussed here. Structured light is a useful technique for introducing detail to otherwise bland areas of an image to provide feature matching algorithms something to match against, but I don't see how it's relevant to calibrating against a grid of dots. Happy if someone can correct me of course...
    I have been using the NI example "Stereo Vision Example.vi" located in C:\Program Files (x86)\National Instruments\LabVIEW 2012\examples\Vision\3. Applications, and have also created my own system based upon that example. I get the same poor results with both. The path and file you suggested is not present on my machine (I'm running LV2012). Is the example you suggested the same? Maybe I should be trying it. Any ideas where I can get hold of it?
    I have been using the techniques you suggest of presenting the calibration grid at a variety of angles and ensuring good coverage of the fields of view. I have spent upwards of 20 hours experimenting with different techniques and approaches, and cannot get repeatable results. But with the Matlab-based approach from Caltech using the same techniques I get good results. I am becoming increasingly confident there is an issue in the LV implementation.
    Thanks,
    Chris
    Regards,
    Chris Vann
    Certified LabVIEW Developer

  • Reflections on 3D Text query

    Hi,
    Via the new shape from layer I am able to create a reflection on a sphere by loading the environment with a texture and playing around with the reflection, gloss and shine.
    However, I'm having issues doing the same on 3D text (created via repoussé from a text layer); the reflection on the front Inflation material is just not appearing.
    How do I position the reflection?
    Can you scale the image that you set for the reflection image (environment)?
    Thanks in advance
    Steve

    Well, after playing around with this a long time I think I managed to work it out.
    I mention this just in case it might help someone else out.
    As I said, once I enabled reflections on the front inflation of the 3D text (setting an image as the environment and raising the reflection value setting), I could not understand why the reflection was so blurred and pixelated no matter how much I rotated it.
    The reason was that, by default, I was apparently looking at my 3D object with the camera well zoomed out. I went through the process of scaling my camera back and zooming into my object with the object scale tool. Of course the wireframe 3D object still stays the same size on screen if you do this.
    However, gradually the reflected image became sharp, and with the reflection setting at 100% it acted like a mirror - exactly what I expected. Hooray!!!
    Now, looking at my object in the 3D view with my 3D plane visible, it is hard to see how far or near the object is in relation to camera distance and the scale of the 3D object. Is there some indication in this view that I'm not seeing?
    Anyway hope it helps
    Steve

  • "Choppy playbook" of iMovie project

    Hi all, 
    I've spent days editing 4 hours of video to create a final cut of a live 2 1/2 hour show, and yesterday when playing it back for the client on my MBP, it stopped and started, audio going ahead and video jumping to catch up. It was a visual disaster. The client was irritated but understanding, but I was and am so embarrassed. I called around today and was told this is called "choppy playback" and "most likely won't show up on my DVD when I burn it." Any truth to this? And any answer as to how to solve the problem? Thanks for any help you can share. I have such a headache.
    Kate

    Kate Porter wrote:
    ... tried to burn my movie with iDVD. It didn't burn both times. ...
    no, out of the box iDVD cannot handle 2.5h-long projects ...
    back to your original problem:
    did you present your client the movie within iMovie (=no good)?
    or an exported version? if so : what were your export settings?
    "choppy playback" usually originates in a few things:
    • too slow hardware performance, e.g. a 'crowded' harddrive, playback from an old, usb2 flash-mem-stick, etc etc
    • wrong settings in workflow concerning framerates
    • wrong app - an editor is no player
    • finally, a 'perception' problem, based upon shutter speeds, subject/camera distance, bla bla bla
    and tell us a lil' more about your hardware, esp. free space on internal harddrive? ..........

  • ISight v. Flip Ultra HD?

    I'm making short video tutorials for use on my Blog and YouTube. My first down and dirty effort was done with the iSight camera in my MBP -- recorded directly into iMovie.
    I'm hoping that some of you have had experience with both the iSight and Flip Ultra HD and will be able to tell me whether using the Flip will improve the quality of either the video or the sound.
    The Flip has a digital zoom -- and I know iGlasses adds one to iMovie, but in my test run, iGlasses degraded the video in a significant way.
    Any help is greatly appreciated.

    Hello Stephen J. Herzberg
    I use full featured camcorders, so I have never used any of the flip video cams. However, were I considering a flip, here is how I would proceed.
    Even if some flip users give you direct answers to your questions, the opinions will be theirs. You might not agree with their conclusions.
    Hopefully, some who respond here can offer links to their videos from both _their flips_ and _their iSights_ so you can compare for yourself. However, even if they do show you their videos, you may see different results in the locations or subjects or your own videos. You may want to consider testing flip yourself in the way you plan to use it.
    Find a local retailer who will allow you to test the flip model of interest to see if it meets your quality needs before you buy. Make and upload a short test in the store using the floor sample or demonstrator model.
    Then, if you have not previously done so, use your MBP to make and upload a short iSight test video in lighting and at a camera distance approximately similar to the flip test you made.
    Based on what you say you want to do, it seems to me that the most important quality consideration for your purpose is how the videos will play back over the internet. Once both test videos have "processed" and are available for viewing, you can log into your uploaded tests and view them in the same way your audience will see them. From that, you should be able to decide easily whether the flip is for you.
    EZ Jim
    G5 DP 1.8GHz w/Mac OS X (10.5.7) PowerBook 1.67GHz (10.4.11)   iBookSE 366MHz (10.3.9)  External iSight

  • Determining the apparent scale of a 3D layer

    I was reading an entry on Wikipedia ( http://en.wikipedia.org/wiki/Image_resolution ) and the section on "distinguishable squares" caught my eye.
    Is there some fancy expression math trick for determining whether a 3D layer is being viewed at >100% scale? Somehow taking into account the AE camera distance, FOV, and the target layer's pixel dimensions and scale? I think collapse transforms would have to be off.
    I'm doing a strict Z-move with a camera and my source 4K footage gets a little soft. I'd like a way to ensure its apparent visual scale doesn't go beyond 100% until it's off screen.

    It could get pretty complicated, especially if you have to consider that the layer might be angled to the camera such that one corner could be much closer than the others. If you only have to consider where the anchor point is in relation to the camera's focal plane, something like this should give you the apparent scale:
    C = thisComp.layer("Camera 1");
    v1 = toWorld(anchorPoint) - C.toWorld([0,0,0]);
    v2 = C.toWorldVec([0,0,1]);
    d = dot(v1,v2);
    s = 100 * C.cameraOption.zoom / d;
    Calculating whether any part of the layer is actually visible would add another layer of complexity.
    Dan

  • Camera calibration with 2 planar object images taken from different distance

    I have a camera calibration object printed on paper and mounted on a wall. It is a planar object, and two images were taken of it from two different distances. What should be the approach for calibration, given that both images are coplanar?


  • Camera data subject distance missing in CS6

    This is another strange problem. I noticed that recent shots from any of three Nikon cameras (D3, D4 and D300s) lack the subject distance data in CS6 Bridge, but older shots have it. I narrowed the change down to about June 10, 2012: shots before then have the subject distance, and pictures from June 12 and later don't. It seems this is around the time I configured or installed CS6, which was a free upgrade after getting CS5.1.
    When I open CS5.1 Bridge, though, even the most recent shots have subject distance.

    Curt Y,
    Thanks for your response.  I was aware of the thread you reference.  In fact I have implemented the excellent workaround described by Paul Riggott in that thread. 
    Nevertheless, that thread fails to answer the original question: why is subject distance correctly displayed in Bridge 5 while that field is completely absent in Bridge 6?
    In my case this is true both for native NEF files and when the same file is converted to DNG.
    It is true for both old and recent files, and for files from D700 and D800 cameras.
    In the case of these two cameras, the Subject Distance is found under the Maker Notes but is not found under the Exif data as reported by ExifToolGUI. This last finding confirms the observation of DLC (reply #4) in the cited thread.
    However, if one examines the Advanced tab in File Info, one finds the following:
    in both Bridge 5 and Bridge 6 there is an entry under Exif Properties labeled "Subject Distance".
    So the question remains: why is this datum displayed in the Camera Data (Exif) panel of Bridge 5 but not in Bridge 6? And what can be done to correct this glitch?
    Addendum / Correction
    I have just discovered the following: older (raw) files which had previously been opened with Bridge 5, whether they were edited or not, do show the Subject Distance in the Exif panel when now opened in Bridge 6.

  • Focal distance in iSight camera

    Dear Apple support,
    Kindly advise me: why is there a difference in focal distance between photo and video in the iSight camera in all versions of iPhone?

    I noticed the same problem on my iPhone too. It differs when you switch from photo to video!

  • Max. install distance for Speaker Track 60 Cameras to SX80 CODEC

    Just integrated an SX80 / SpeakerTrack 60 system, and the cameras are not providing a video signal (self-view) to the integrated displays.
    Question: What's the maximum length a Non-Active HDMI cable can be, between the PHD 60 camera(s) and the SX80 CODEC?
    Will Active HDMI cables help carry the camera's video signal to distances of 50' or greater? 

    I don't know about the SpeakerTrack 60, but I know with the PrecisionHD cameras the maximum was 15 meters (50 feet) with a good-quality Category 2 HDMI cable. I looked through the SpeakerTrack documentation but didn't find any mention of a maximum length for the HDMI cable; however, I did find the below searching online.
    Extending the Precision60 and SpeakerTrack60 Cameras

  • Best export settings for Canon cameras: Key Frame Distance confusion

    Good evening! I have the Canon SX230 camera (for easy understanding, you could consider it a 5D Mark) and Premiere Pro CS5.5, and what I need is to export my project as close to the source quality as possible - as near-lossless as possible (but not oversized, truly lossless AVIs...). To my happiness, there are not so many export settings, but I am very puzzled by the "Set Key Frame Distance" option. Okay, basically I understand what a GOP is; what I don't understand is what value I should choose for the best result if my camera records video with M=1/N=12. I'm choosing a Key Frame Distance of "1", so that all frames are I-frames, preserving the original sequence - since you don't want to convert original I-frames to P-frames, is it better to encode all the I-/P-/B-frames as I-frames? Am I right on this point, is that wise, and will it be the best reasonable option for matching the original M=1/N=12? Or is it not worth it? And will it fundamentally affect the file size? I really need your help on this; I've worked on this project for three months and really don't want to screw it all up :) Thank you very much!
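    For reference, in generic MPEG terms (nothing Premiere-specific assumed here): N is the GOP length and M is the spacing between anchor frames, so M=1/N=12 means a twelve-frame GOP with no B-frames - I P P P P P P P P P P P. A Key Frame Distance of 1 makes every exported frame an I-frame; at a given quality, an all-I stream typically needs a substantially higher bitrate than a twelve-frame GOP, so expect noticeably larger files.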
    sample of my source files:
    and these are my export settings, which I think are the best of the best, but please correct me if I'm wrong:

    Oh dear Ann, unfortunately it is not possible.
    For some unbelievable reason it is okay to mux H.264 with Dolby, but forbidden to mux H.264 with PCM. That makes absolutely no sense. Anyway, it's not a problem - mkvtoolnix does a perfect job. I really need PCM stereo audio for that project. And H.264 is the best codec for a Full HD project out there; MPEG is too old...
