Lack of Sharpness in 1 Second Exposures

I bought a T4i about a year ago after ten years or so of using point-and-shoot cameras.  Prior to that I used Canon SLRs for 30 years and was an avid nature photographer.  I planned to use this camera to take scenic photos while fly fishing and was particularly interested in having the ability to take long exposures of rivers and waterfalls.  I have been disappointed in the sharpness of many of my photos, but after some calls to tech support at Canon, have seen some improved results.
However, I have recently played around with long exposures while fishing and observed that even with my rock-solid Gitzo tripod, one-second exposures are not at all sharp.  I sent images to Canon tech support and they agree that the sharpness should be there, so I have sent the camera and 18-135mm lens off to the repair center.  I am curious to learn if anyone else has faced this problem.  I used to take long exposures with my AE-1 and the sharpness was superb.  I have attached two shots that I sent to Canon.  Both were taken at 40mm at a distance of about 30 ft.  The first is a 1 sec exposure at f/29 and the second 1/60 sec at f/5.6.  Based on the recommendations of tech support, I enabled long exposure noise reduction, turned off image stabilization, locked the mirror up, and used a 2 sec delay to avoid camera shake. I did crop the pictures for the post to zoom in on the writing.  Please let me know if you have any thoughts.  Thanks.

I think you may find this helpful in explaining the issue.
http://www.luminous-landscape.com/tutorials/understanding-series/u-diffraction.shtml
Edited to add
I also think having long exposure noise reduction ON may be another contributing factor. 1 second isn't really "long".
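If it helps to put numbers on the diffraction explanation in that link, here is a rough back-of-the-envelope sketch in Python. The values are my own assumptions rather than anything from the post: green light at 550 nm, and roughly a 4.3 um pixel pitch for an 18 MP APS-C sensor like the T4i's.

```python
# Rough diffraction check: Airy disk diameter vs. pixel pitch.
# Assumed values (not from the original post): 550 nm green light,
# ~4.3 um pixel pitch for an 18 MP APS-C sensor.

AIRY_FACTOR = 2.44          # diameter of first Airy minimum = 2.44 * lambda * N
WAVELENGTH_UM = 0.550       # green light, micrometres
PIXEL_PITCH_UM = 4.3        # approx. pitch of an 18 MP APS-C sensor

def airy_disk_um(f_number):
    """Approximate Airy disk diameter in micrometres at a given f-number."""
    return AIRY_FACTOR * WAVELENGTH_UM * f_number

for n in (5.6, 11, 29):
    d = airy_disk_um(n)
    print(f"f/{n}: Airy disk ~{d:.1f} um (~{d / PIXEL_PITCH_UM:.1f} pixels)")
```

At f/29 the diffraction blur spans roughly nine pixels, which no tripod technique can overcome; at f/5.6 it is under two pixels, which matches the sharp 1/60 sec shot.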
"A skill is developed through constant practice with a passion to improve, not bought."

Similar Messages

  • Lack of sharpness in previews - Lightroom 2

    I'm hoping the collective wisdom of this forum can help me sort through this issue that is causing me a lot of problems.
    I am shooting NEFs. When I view my images in Lightroom 2 after import, there is a noticeable lack of sharpness to the images. Everything just looks slightly soft, even at a 1:1 preview. I am using the default detail setting of 25 for sharpness.
    When I view these same images in another application like Photo Mechanic, the images are much sharper. I'm thinking that maybe this is due to PM using the embedded jpg while Lightroom does not. I'm not sure.
    I am looking for suggestions on what settings people are using for optimal sharpness in Lightroom. I really like LR but right now I am finding hard to accurately evaluate my work.
    Thanks,
    Les

    >I wish the developers could give us a switch to defeat this. PS displays, or at least hints at, sharpening when zoomed out.
    They are quite set against it so I doubt it. Photoshop is really lying to you in many different ways when you judge sharpness at zoomed out levels. Of course, the only reason you see it in Photoshop is that it is a pixel editor. If you have rendered 1:1 previews, in Lightroom, you always see the effect of sharpening in the Library module when zoomed out as it is rendered from the preview image. In fact what you see there is the same when in Photoshop at power of two zoom levels (25%, 50%, etc.). So it is there, they just don't show it in Develop when you are zoomed out, where it would not tell you anything anyway.

  • Add second Exposure Adjustment

    Hey there,
I was wondering if you could somehow add a second 'Exposure' adjustment, the way you can add, for example, multiple Curves adjustments.
    Thanks.

    Hi Nikolas.
Nope. Exposure Brick adjustments, per Image, are global and singular.  Once you set them (I recommend setting them so that they give you not the exact final results you want, but rather the latitude to make other adjustments that give you the final result you want), you use other adjustments to further refine the "exposure".
See this page in the User Manual.  (Added:  Actually, that page isn't as helpful as I had expected.  Use "Exposure" to give you the broadest range of usable data.  Then use either Curves or Levels to re-map that data where you want it for the whole Image.  Then use Quick Brushes and/or additional brushed-on Curves or Levels adjustments to re-map local subsets of your global data.  At least, that's what I do.)
    HTH,
    --Kirby.
    Message was edited by: Kirby Krieger

  • Lack of sharpness in text and colors

Recently I have noticed that text and colors appear to have faded, not dramatically but discernibly nonetheless. I have adjusted brightness (in Displays) with no effect, so I wonder if someone can suggest another route ... or is it symptomatic of other problems?

    Sorry. Resolved. Universal Access. I just hadn't found it.

  • CS4 NOT capable of sharp displays at all zoom levels

    I must have been asleep, until now, and missed the significance and importance of what follows.
    In post #11 here:
    http://forums.adobe.com/thread/375478?tstart=30
    on 19 March 2009 Chris Cox (Adobe Photoshop Engineer - his title on the old forums) said this, in a discussion regarding sharpness in CS4:
    "You can't have perfectly sharp images at all zoom levels.". Unfortunately, my experience with CS4 since its release late last year has repeatedly confirmed the correctness of this statement.
    What makes this statement so disturbing is that it contradicts an overwhelming amount of the pre- and post-release promotional advertising of CS4 by Adobe, to the effect that the OpenGL features of CS4 enable it to display sharp images at all zoom levels and magnifications. What is surprising is that this assertion has been picked up and regurgitated in commentary by other, sometimes highly experienced, Ps users (some unconnected with, but also some directly connected with, Adobe). I relied upon these representations when making my decision to purchase the upgrade from CS3 to CS4. In fact, they were my principal reason for upgrading. Without them, I would not have upgraded. Set out in numbered paragraphs 1 to 6 below is a small selection only of this material.  
    1. Watch the video "Photoshop CS4: Buy or Die" by Deke McClelland (inducted into the Photoshop Hall of Fame, according to his bio) on the new features of CS4 in a pre-release commentary to be found here:
    http://fyi.oreilly.com/2008/09/new-dekepod-deke-mcclelland-on.html
    Notice what he says about zooming with Open GL: "every zoom level is a bicubically rendered thing of beauty". That, when viewed with the zooming demonstrated, can only be meant to convey that your image will be "sharp" at all zoom levels. I'm sure he believes it too - Deke is someone who is noted for his outspoken criticism of Photoshop when he believes it to be deserved. It would seem that he must not have experimented and tested to the extent that others posting in this forum have done so.
    2. Here's another Adobe TV video from Deke McClelland:
    http://tv.adobe.com/#vi+f1584v1021
    In this video Deke discusses the "super smooth" and "very smooth" zooming of CS4 at all zoom levels achieved through the use of OpenGL. From the context of his comments about zooming to odd zoom levels like 33.33% and 52.37%, it is beyond doubt that Deke's use of the word "smooth" is intended to convey "sharp". At the conclusion of his discussion on this topic he says that, as a result of CS4's "smooth and accurate" as distinct from "choppy" (quoted words are his) rendering of images at odd zoom levels (example given in this instance was 46.67%), "I can actually soft proof sharpening as it will render for my output device".
    3. In an article by Philip Andrews at photoshopsupport.com entitled 'What's New In Adobe Photoshop CS4 - Photoshop 11 - An overview of all the new features in Adobe Photoshop CS4',
    see: http://www.photoshopsupport.com/photoshop-cs4/what-is-new-in-photoshop-cs4.html
    under the heading 'GPU powered display', this text appears :
    "Smooth Accurate Pan and Zoom functions – Unlike previous versions where certain magnification values produced less than optimal previews on screen, CS4 always presents your image crisply and accurately. Yes, this is irrespective of zoom and rotation settings and available right up to pixel level (3200%)." Now, it would be a brave soul indeed who might try to argue that "crisply and accurately" means anything other than "sharply", and certainly, not even by the wildest stretch of the imagination, could it be taken to mean "slightly blurry but smooth" - to use the further words of Chris Cox also contained in his post #11 mentioned in the initial link at the beginning of this post.
    4. PhotoshopCAFE has several videos on the new features of CS4. One by Chris Smith here:
    http://www.photoshopcafe.com/cs4/vid/CS4Video.htm
    is entitled 'GPU Viewing Options". In it, Chris says, whilst demonstrating zooming an image of a guitar: "as I zoom out or as I zoom in, notice that it looks sharp at any resolution. It used to be in Photoshop we had to be at 25, 50 , 75 (he's wrong about 75) % to get the nice sharp preview but now it shows in every magnification".
5. Here's another statement about the sharpness of CS4 at odd zoom levels like 33.33%, but inferentially at all zoom levels. It occurs in an Adobe TV video (under the heading 'GPU Accelerated Features', starting at 2 min 30 secs into the video) and is made by no less than Bryan O'Neil Hughes, Product Manager on the Photoshop team, found here:
    http://tv.adobe.com/#vi+f1556v1686
    After demonstrating zooming in and out of a bunch of documents on a desk, commenting about the type in the documents which is readily visible, he says : "everything is nice and clean and sharp".
6. Finally, consider the Ps CS4 pdf Help file itself (both the original released with 11.0 and the revised edition dated 30 March 2009 following upon the release of the 11.0.1 update). Under the heading 'Smoother panning and zooming' on page 5, it has this to say: "Gracefully navigate to any area of an image with smoother panning and zooming. Maintain clarity as you zoom to individual pixels, and easily edit at the highest magnification with the new Pixel Grid." The use of the word "clarity" can only mean "sharpness" in this context. Additionally, the link towards the top of page 28 of the Help file (topic of Rotate View Tool) takes you to yet another video by Deke McClelland. Remember, this is Adobe itself telling you to watch this video. 5 minutes and 40 seconds into the video he says: "Every single zoom level is fluid and smooth, meaning that Photoshop displays all pixels properly in all views which ensures more accurate still, video and 3D images as well as better painting, text and shapes." Not much doubt that he is here talking about sharpness.
So, as you may have concluded, I'm pretty upset about this situation. I have participated in another forum (which raised the lack of sharp rendering by CS4 on several occasions) trying to work with Adobe to overcome what I initially thought may have been only a problem with my aging (but nevertheless, just-complying) system or outdated drivers. But that exercise did not result in any sharpness issue fix, nor was one incorporated in the 11.0.1 update to CS4. And in this forum, I now read that quite a few, perhaps even many, others, with systems whose specifications not only match but well and truly exceed the minimum system requirements for OpenGL compliance with CS4, also continue to experience sharpness problems. It's no surprise, of course, given the admission we now have from Chris Cox. It seems that CS4 is incapable of producing the sharp displays at all zoom levels it was alleged to achieve. Furthermore, it is now abundantly clear that, with respect to the issue of sharpness, it is irrelevant whether or not your system meets the advertised minimum OpenGL specifications required for CS4, because the OpenGL features of CS4 simply cannot produce the goods. What makes this state of affairs even more galling is that, unlike CS3 and earlier releases of Photoshop, CS4 with OpenGL activated does not even always produce sharp displays at 12.5, 25, and 50% magnifications (as one example only, see posts #4 and #13 in the initial link at the beginning of this post). It is no answer to say, and it is ridiculous to suggest (as some have done in this forum), that one should turn off OpenGL if one wishes to emulate the sharp display of images formerly available.

    Thanks, Andrew, for bringing this up.  I have seen comments and questions in different forums from several CS4 users who have had doubts about the new OpenGL display functionality and how it affects apparent sharpness at different zoom levels.  I think part of the interest/doubt has been created by the over-the-top hype that has been associated with the feature as you documented very well.
    I have been curious about it myself and honestly I didn't notice it at first but then as I read people's comments I looked a little closer and there is indeed a difference at different zoom levels.  After studying the situation a bit, here are some preliminary conclusions (and I look forward to comments and corrections):
    The "old", non-OpenGL way of display was using nearest-neighbor interpolation.
    I am using observation to come to this conclusion, using comparison of images down-sampled with nearest-neighbor and comparing them to what I see in PS with OpenGL turned off.  They look similar, if not the same.
    The "new", OpenGL way of display is using bilinear interpolation.
I am using observation as well as some inference: The PS OpenGL preferences have an option to "force" bilinear interpolation because some graphics cards need to be told to force the use of shaders to perform the required interpolation.  This implies that the interpolation is bilinear.
    Nothing is truly "accurate" at less than 100%, regardless of the interpolation used.
    Thomas Knoll, Jeff Schewe, and others have been telling us that for a long time, particularly as a reason for not showing sharpening at less than 100% in ACR (We still want it though ).  It is just the nature of the beast of re-sampling an image from discrete pixels to discrete pixels.
    The "rule of thumb" commonly used for the "old", non-OpenGL display method to use 25%, 50%, etc. for "accurate" display was not really accurate.
    Those zoom percentages just turned out to be less bad than some of the other percentages and provided a way to achieve a sort of standard for comparing things.  Example: "If my output sharpening looks like "this" at 50% then it will look close to "that" in the actual print.
    The "new", OpenGL interpolation is certainly different and arguably better than the old interpolation method.
    This is mainly because the more sophisticated interpolation prevents drop-outs that occurred from the old nearest-neighbor approach (see my grid samples below).  With nearest-neighbor, certain details that fall into "bad" areas of the interpolated image will be eliminated.  With bilinear, those details will still be visible but with less sharpness than other details.  Accuracy with both the nearest-neighbor and bilinear interpolations will vary with zoom percentage and where the detail falls within the image.
    Since the OpenGL interpolation is different, users may need to develop new "rules of thumb" for zoom percentages they prefer when making certain judgements about an image (sharpening, for example).
    Note that anything below 100% is still not "accurate", just as it was not "accurate" before.
    As Andrew pointed out, the hype around the new OpenGL bilinear interpolation went a little overboard in a few cases and has probably led to some incorrect expectations from users.
    The reason that some users seem to notice the sharpness differences with different zooms using OpenGL and some do not (or are not bothered by it) I believe is related to the different ways that users are accustomed to using Photoshop and the resolution/size of their monitors.
    Those people who regularly work with images with fine details (pine tree needles, for example) and/or fine/extreme levels of sharpening are going to see the differences more than people who don't.  To some extent, I see this similar to people who battle with moire: they are going to have this problem more frequently if they regularly shoot screen doors and people in fine-lined shirts.   Resolution of the monitor used may also be a factor.  The size of the monitor in itself is not a factor directly but it may influence how the user uses the zoom and that may in turn have an impact on whether they notice the difference in sharpness or not.  CRT vs LCD may also play a role in noticeability.
    The notion that the new OpenGL/bilinear interpolation is sharp except at integer zoom percentages is incorrect.
I mention this because I have seen at least one thread implying this and an Adobe employee participated who seemed to back it up.  I do not believe this is correct.  There are some integer zoom percentages that will appear less sharp than others.  It doesn't have anything to do with integers - it has to do with the interaction of the interpolation, the size of the detail, and how that detail falls into the new, interpolated pixel grid.
    Overall conclusion:
The bilinear interpolation used in the new OpenGL display is better than the old, non-OpenGL nearest-neighbor method but it is not perfect.  I suspect, actually, that there is no "perfect" way of "accurately" producing discrete pixels at less than 100%.  It is just a matter of using more sophisticated interpolation techniques as computer processing power allows and adopting higher-resolution displays as that technology allows.  When I think about it, that appears to be just what Adobe is doing.
    Some sample comparisons:
    I am attaching some sample comparisons of nearest-neighbor and bilinear interpolation.  One is of a simple grid made up of 1 pixel wide lines.  The other is of an image of a squirrel.  You might find them interesting.  In particular, check out the following:
    Make sure you are viewing the Jpegs at 100%, otherwise you are applying interpolation onto interpolation.
    Notice how in the grid, a 50% down-sample using nearest-neighbor produces no grid at all!
    Notice how the 66.67% drops out some lines altogether in the nearest-neighbor version and these same lines appear less sharp than others in the bilinear version.
    Notice how nearest-neighbor favors sharp edges.  It isn't accurate but it's sharp.
On the squirrel image, note how the image is generally more consistent between zooms for the bilinear versions.  There are differences in sharpness at different zoom percentages for bilinear, though.  I just didn't include enough samples to show that clearly here.  You can see this yourself by comparing results of zooms a few percentages apart.
    Well, I hope that was somewhat helpful.  Comments and corrections are welcomed.
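To make the grid example concrete, here is a small NumPy reconstruction of it (my own toy version, not the poster's actual sample files): a white image with 1-pixel dark grid lines every 4 pixels, downsampled to 50% two ways.

```python
import numpy as np

# Toy reconstruction of the grid example: 1-pixel dark grid lines
# every 4 pixels on a white background, downsampled to 50%.

size = 16
line = (np.arange(size) % 4 == 1)          # grid lines at rows/cols 1, 5, 9, 13
img = np.ones((size, size))
img[line, :] = 0.0
img[:, line] = 0.0

# Nearest-neighbour 50%: keep every second pixel. With this phase the
# sample points all miss the lines, so the grid vanishes completely.
nn = img[::2, ::2]

# 2x2 area average, which behaves like bilinear at exactly 50% with
# this phase. The lines survive as grey, lower-contrast pixels.
bl = img.reshape(size // 2, 2, size // 2, 2).mean(axis=(1, 3))

print("nearest-neighbour min:", nn.min())   # 1.0 -> grid gone
print("area-average min:", bl.min())        # 0.25 -> grid still visible
```

This matches the observation above: nearest-neighbor can drop fine detail entirely depending on where it falls in the sampling grid, while the smoother interpolation keeps it, only softer.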

  • AVCHD Loss of sharpness

I have been combing the forum looking for an answer. Is it normal to see a lack of sharpness when importing AVCHD into iMovie 09 from a Sony camera? I have tried exporting in loads of different formats and the quality is the same as just after import. I am importing using the full HD size option. It just is not as sharp as I would have hoped. Will FCE be any better?

    I was just over at the HiDef Forum and came across others inquiring about this issue. What the posters there said (there were several and they were all consistent with each other) is that iMovie converts AVCHD to another format (some specified AIC). Two things happen when this is done. First, the file size expands ca. 10x, and second there is some loss of video quality. One poster provided a link to an Apple web page that (the poster claimed) acknowledged the file size issue. I did not verify the link to confirm that, though.
    Several of them recommended the Sony software provided with the camera running on a PC.

  • Exposure problem

It seems I am doing everything right, but my new 70D is acting very strange.  I have it set up on a tripod to take photos of a waterfall.  It is not sunny, so I don't use the neutral density filter.  I have it set on Tv and I change the setting from 1 to 2 seconds.  The first shot at 1 sec chooses an aperture of 7.1; for the second and third shots at 1.6 and 2 seconds it chooses an aperture of 20 or above, and the shots come out very dark, and often blurry.  And then when I shoot handheld on P, it overexposes most, but not all, of the shots.  Why the inconsistency??

    The aperture should narrow by one stop if you double the exposure length. In your example it moved more than one stop, more than 3 stops actually. Something in the scene was messing with your automatic metering. Backlighting from the sky? Weird reflections off of the water?
    You are dealing with a tripod already. I would just shoot manual and avoid the problem rather than playing around with exposure comp or switching between spot metering and scene metering etc.
    Personally I would not use as high an ISO as 500 for a long exposure on a crop sensor unless you were really set on getting a 1 second exposure here rather than a 4 second one for artistic reasons. I hate noise and lack of detail perhaps more than is warranted but if I could use ISO 100 or 200 by lengthening exposure a second or two I would.
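Scott's stop arithmetic can be sketched with standard exposure-value math, EV = log2(N²/t) at fixed ISO. The f-numbers are the ones from the post; the helper functions are my own.

```python
import math

# Stop arithmetic for the Tv example above. Values are from the post
# (1 s at f/7.1, then ~f/20 at 2 s); the math is the standard
# exposure-value relation EV = log2(N^2 / t) at fixed ISO.

def ev(f_number, shutter_s):
    """Exposure value for an aperture/shutter pair (ISO held fixed)."""
    return math.log2(f_number ** 2 / shutter_s)

def stops_moved(f_from, f_to):
    """How many stops the aperture moved between two f-numbers."""
    return 2 * math.log2(f_to / f_from)

# Doubling the shutter should narrow the aperture by exactly one stop:
expected = 7.1 * math.sqrt(2)
print(f"expected aperture at 2 s: f/{expected:.1f}")       # ~f/10

# What the camera actually did:
print(f"aperture moved {stops_moved(7.1, 20):.1f} stops")  # ~3 stops
print(f"the 2 s frame receives {ev(20, 2) - ev(7.1, 1):.1f} stops less light")
```

Going from f/7.1 to f/20 is about three stops of aperture; the doubled shutter gives one stop back, so the second frame meters roughly two stops darker, which fits the "very dark" result described.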
    Scott
    Canon 6D, Canon T3i, EF 70-200mm L f/2.8 IS mk2; EF 24-105 f/4 L; EF-S 17-55mm f/2.8 IS; EF 85mm f/1.8; Sigma 35mm f/1.4 "Art"; EF 1.4x extender mk. 3; 3x Phottix Mitros+ speedlites
    Why do so many people say "fer-tographer"? Do they take "fertographs"?

  • Exposure to the right results in different TRC than normal exposure

    Exposure to the right is advocated by most experts to improve tonality and dynamic range. On the Luminous Landscape a photographer noted that ETTR all the way to the right followed by negative exposure correction in ACR produces a different image than is produced by normal exposure, and that he preferred the latter image.
    Luminous Landscape Thread
    Most responders to this post postulated that, since ACR is operating on linear data, underexposure by 1 EV followed by a 1 EV boost in ACR would produce the same results.
    I had some exposures of a Stouffer step wedge. The first was exposed so that step 1 has a pixel value of 250 when converted with ACR at default settings into aRGB. This is exposed to the right as far as possible. A second exposure placed the same step at 221, and this step was brought back to 250 in ACR, which required an exposure compensation of +1.05 EV.
If you compare the resultant images in Photoshop using the difference blending mode, the differences are too dark to make out on the screen, but can be detected with the eye dropper. In this image, normal exposure to the right is on top, and the difference between normal exposure and underexposure followed by a boost of 1 EV in ACR is shown on the bottom.
    The different resulting tone response curves are better shown by Imatest plots of the two images. As is evident the TRCs are different, contrary to my expectation. Comments are invited.
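One way to see why the posted numbers imply a non-trivial tone curve: under the crudest possible pipeline (linear sensor, pure gamma-2.2 encode, exposure boost as a plain multiply — all my assumptions, definitely not ACR's real tone mapping), the step-wedge figures don't line up at all.

```python
import math

# Toy sanity check on the step-wedge numbers above. This is NOT a model
# of ACR's actual pipeline, just the simplest possible one (linear
# sensor, pure gamma-2.2 encode, exposure boost = plain multiply) to
# show that such a model cannot reproduce the posted figures.

GAMMA = 2.2

def encode(linear):
    """Pure gamma-2.2 encode of a linear value in [0, 1] to 8-bit."""
    return 255 * linear ** (1 / GAMMA)

def decode(value8):
    """Inverse: 8-bit value back to linear."""
    return (value8 / 255) ** GAMMA

# Where would a 1 EV underexposure land a pixel that encodes to 250?
under = encode(decode(250) / 2)
print(f"1 EV under would encode to ~{under:.0f}")   # ~182, not the posted 221

# How many stops would be needed to pull 221 back up to 250?
stops = math.log2(decode(250) / decode(221))
print(f"pure-gamma boost needed: {stops:.2f} EV")   # ~0.39, not the posted 1.05
```

The gap between this toy model's predictions and the observed 221 / +1.05 EV figures is consistent with the Imatest finding that the rendered tone response curves are not related by a simple exposure shift.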

    The ETTR Myth
ETTR is short for expose to the right. Some folks have promoted it as a replacement for traditional exposure metering. The premise is that you can validate camera metering by simply reading the histogram in the camera's preview window.
    Unfortunately, it is based on some basic misunderstandings about digital photographic technology. The first misunderstanding is the premise that each bit level in a digitally encoded image represents an exposure stop. The second misunderstanding is the premise that all digital cameras capture light in a perfectly linear fashion. The third misunderstanding is the premise that the histogram represents the raw image data captured by the camera. I will briefly address each of these.
Any correlation between exposure stops and digital bit levels can only be accidental at best. The total exposure range in a scene or an image is correctly known as the dynamic range. The dynamic range of digital cameras is wider than most folks assume and usually equal to or better than film or paper. It can be defined in terms of tone density, decibels, or exposure stops. It is a function of the optics and sensor electronics in the camera. In the few cases where vendors provide an accurate range, it varies from 8 to 12 f/stops.
    The image data is converted from analog measurements by the analog/digital (A/D) circuits early in the capture. This can wind up as an 8-bit, 12-bit, 14-bit, or even 16-bit digital value depending on the camera and its user settings. It is simply a number that has been digitized. Any correlation between bits and exposure levels is pure speculation, end of subject.
    Second, the digital capture of light is not strictly linear. It is true that the silicon sensor itself will capture light in a very linear fashion. But this ignores reciprocity at the toe and heel of the extremes, the quantum efficiency of the substrate, and most importantly it ignores the optical filters in front of the sensor. If the color filter array were linear it would be impossible to reconstruct colors. And these are not the only optical filters in your camera. Then, the A/D circuits have gain controls based on the current ISO setting. And some A/D circuits perform some pre-processing based on the illuminant color temperature (white balance) and limited noise reduction based on the ISO setting. The point is that there are many steps in the pipeline that can introduce non-linearity.
    Finally, the image in the preview window has been color rendered and re-sampled down to a small size. This is the data shown in the histogram. The camera can capture all colors in the spectrum, but the rendered image is limited to the gamut of an RGB color space. So, in addition to exposure clipping the histogram will include gamut clipping. This is also true for the blinking highlight and shadow tools. This might imply an exposure problem when none exists. There is no practical way to map all the data in a raw image into a histogram that you could use effectively in the preview window.
    If you capture an image of a gray scale chart that fits within the dynamic range of the camera, at the right exposure, you can create a linear graph of the raw data. But if you underexpose or overexpose this same image, the graph will not be linear and it is unlikely that software will be able to restore true linearity. End of subject.
If you typically shoot JPG format, the histogram will accurately represent the image data. But clipping can still be from either gamut or exposure limits. If you typically shoot RAW format, the camera's histogram is only an approximation of what the final rendered image might look like. There is a significant amount of latitude provided by the RAW image editor. This is probably why you are shooting RAW in the first place.
    So, in closing, I am not saying that histograms are bad. They are part of a wonderful toolkit of digital image processing tools. I am saying ETTR is not a replacement for exposure metering. If you understand what the tone and color range of the scene is, you can evaluate the histogram much better. And if you master traditional photographic metering, you will capture it more accurately more often.
    I hope this clears up my previous statements on this subject. And I hope it explains why I think ETTR and linear capture are based more on technical theology than on technical fact.
    Cheers, Rags :-)

  • 5D mark III not recording picture after long exposure

I've been trying to do star trail pictures the last few days. Every time I do, the batteries are almost dead by the time I turn it off, which isn't surprising, but there's no picture. I have a battery grip so I should have twice as much battery power as just one battery. Is there just not enough battery to record a picture after 6+ hours? I don't think that's it, because I can still press the playback button to look at the pictures and see there's no new picture. Why is this happening?

    Ahh... star trails aren't made in a single exposure.  If you do that, the sky will almost certainly wash out and become mud due to light pollution.
    It's normally a LOT of 30 second exposures that you merge together.
    Here's a video:  http://www.youtube.com/watch?v=V6ypRbPzoPM
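The merging step is usually a per-pixel maximum ("lighten" blend) across all the frames; here is a minimal NumPy sketch of that idea, with small random arrays standing in for real 30-second frames.

```python
import numpy as np

# Sketch of the "merge many short exposures" approach: star-trail
# stackers typically combine frames with a per-pixel maximum
# ("lighten" blend), so each star's bright track accumulates while
# the sky background stays at its single-frame level.

rng = np.random.default_rng(0)
frames = [rng.random((4, 4)) for _ in range(20)]   # stand-ins for real frames

trail = np.maximum.reduce(frames)   # per-pixel lighten blend

# The stack is at least as bright as any single frame, everywhere:
print(bool((trail >= frames[0]).all()))   # True
```

With real frames you would load each exposure as an array (e.g. from TIFFs) and reduce them the same way; dedicated tools like the stacking software mentioned in such tutorials do essentially this plus gap-filling and dark-frame handling.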
Incidentally, Canon makes an AC adapter, the ACK-E6, which will let the camera run off AC power (if you have AC power available where you plan to shoot). This is how I run my 60Da (which takes the same adapter as the 5D III) when doing astro-photography, which typically means the camera is taking frames for hours and hours and would normally just kill the batteries.
    Tim Campbell
    5D II, 5D III, 60Da

  • 6D vs 5d-mk3 for long exposure high ISO

Some online info of unknown accuracy shows the 6D has better long-exposure high-ISO performance. The issue is known as amp glow. 
The intended usage would be for astrophotography or night photography where 30 second exposures at ISO 1600 and up are required.
I already have a 5D-mk3 coming for evaluation, where I will compare it to my 1DS-mk3. But I thought this is a good question for this forum. It seems strange that the lower cost camera would have superior performance. 
Anyone have personal experience or a technical explanation? Or is the information incorrect?
The same info also shows a vast improvement over the 1DS-mk3 and 5D-mk2.

I know this topic is long dead, but somehow I was searching for more comparisons of the 6D vs the 5Dm3 in night shooting scenarios and came across this topic (also in these forums).
I think the article below, comparing the 6D to the 5Dm3 (and later also to the 5Dm2) at ISO 3200 and 6400 with 15 sec and 30 sec exposure times, shows pretty good results in favor of the 6D. The maker of that comparison also intended to use the 6D for astrophotography.
    http://petapixel.com/2012/12/13/canon-6d-and-5dmk3-noise-comparison-for-high-iso-long-exposures/
    hope it helps.

  • EOS Utility, wifi, the 70D and long exposure

    Greetings,
Can I use the wifi function on the 70D and the EOS Utility to take long exposures (1-5 mins) without any cables on the camera? Meaning, can EOS Utility accept or set exposure times above 30 seconds via wifi and the utility software?
    My desire would be to have no wires on the camera body nor to my laptop, yet have greater than 30 seconds exposure control. 
    Regards,
    Brian   

    John Hoffman
    Conway, NH
    1D Mark IV, Rebel T5i, Pixma PRO-100, MX472

  • Fix double exposure

    I have a photo of a friend and I from a few years ago and it would be a really great picture, except it's double exposed and the second exposure is super light and covers our faces.  I was hoping someone had some tips on how to fix it.
    Thanks!

Sorry, but this forum is for discussions on the forums themselves. You may receive useful suggestions if you repost your question in the Photoshop forums
http://forums.adobe.com/community/photoshop
or related ones.

  • Full-screen (spacebar) preview quality testing

    [For background story, please read http://forums.adobe.com/thread/1056763 but be warned, it's very l-o-n-g!]
    In brief: some people have noted that Bridge full-screen (spacebar) previews (FSPs) don't accurately reflect the sharpness of a photograph. Sometimes this can be explained by individual configuration problems, but it's clear that this is a common issue amongst people using Bridge to assess/score photograph sharpness, without having to build/examine 100% previews for every image.
    [It's worth noting that one common reason why FSPs aren't very sharp is because the Bridge advanced preference "Generate Monitor-Size Previews" hasn't been ticked, as this produces a higher resolution image cache.  Another cause of very fuzzy previews is random and unexplained, but can usually be solved by restarting Bridge and/or clearing the cache for the selection.]
    This discussion concerns the lack of sharpness seen only in FSPs.  It can be described as "a subtle but significant loss of detail and sharpness, similar to a slightly out of focus photograph"; imagine a photo with a little bit of blur filter, or a Photoshop PSD at a non-standard zoom setting.  This "softening" of the image is caused by Bridge asking the graphics processor to resize the image cache to fit the display.  If you select the Bridge advanced preference "Use Software Rendering", you can improve a poor FSP slightly, at the expense of speed, by bypassing the graphics processor.
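    The softening is, at bottom, just resampling arithmetic: a fit-to-screen preview almost never lands at a 1:1 scale, so every displayed pixel has to be interpolated from several cache pixels. A minimal sketch of the scale factor involved (pure Python; the 1920x1080 display is only an example):

```python
def fit_scale(img_w, img_h, disp_w, disp_h):
    """Aspect-preserving scale factor used when fitting an image
    inside a display (the smaller of the two axis ratios)."""
    return min(disp_w / img_w, disp_h / img_h)

# The 2362x3543 test image on an assumed 1920x1080 display:
s = fit_scale(2362, 3543, 1920, 1080)
print(round(s, 4))                       # 0.3048 -- far from 1:1
print(round(2362 * s), round(3543 * s))  # on-screen size: 720 1080
```

    Any scale that far from 1.0 forces heavy resampling, which is where the "slight blur filter" look comes from.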
    The test
    Visit this web page and download the last image ("2362x3543 pixel, 4.5 Mb") to your computer.
    Browse to this image in Bridge, and view it full-screen by pressing Spacebar.  Take a screen capture, and save it as a TIFF or PSD.
    Adjust your slideshow settings (Ctrl/Cmd-Shift-L), picking "Scaled to Fill", then click on "Play".  Save the screen capture, as above.
    You now have two screen captures: one FSP, and one cache JPEG reference shot.  Examine them side by side at 100%, or layer them in Photoshop and use the hide layer button to flick between images.  Pay particular attention to the two left-hand photos, the sharpness check text, and the converging lines.
    Make a note of your computer's operating system, graphics processor and driver version, as well as your largest display's pixel dimensions.
    Post this information below, together with high quality (10) JPEGs of both screen captures, labelled FSP and REF, and any observations, so we can all see.
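    If you'd rather put a number on the difference than eyeball it, a rough score can be computed from the two captures. A toy sketch (pure Python over grayscale pixel lists; real screenshots would be loaded with an image library, and the pixel values below are made up for illustration):

```python
def mean_abs_diff(a, b):
    """Mean absolute per-pixel difference between two equal-length
    grayscale pixel sequences (0-255). 0.0 means identical captures."""
    if len(a) != len(b):
        raise ValueError("captures must have the same dimensions")
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Toy data standing in for the REF and FSP screenshots.
ref = [10, 200, 30, 90]
fsp = [12, 190, 35, 90]  # slightly "softened" version
print(mean_abs_diff(ref, ref))  # 0.0
print(mean_abs_diff(fsp, ref))  # 4.25
```

    A nonzero score on its own doesn't say which capture is sharper, but it does confirm objectively that the FSP and REF pipelines produce different pixels.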

    OK, it usually takes me a while to let the penny drop, especially when it comes to maths...
    I am also busy with the transition to my new Mac Pro, but with all that said, here are my results. I include several screenshots, but due to the 2 MB upload limit per image here I downsized the originals a lot; hopefully it will still be clear.
    For the full-screen screenshots I have the requested FSP and REF, but also the 100% preview in Bridge (spacebar and click). I don't know what your file size is, but using EOS-1D X 18 MP CR2 files (converted to DNG) it takes me about 1.5-2 seconds for both the loupe and the FSP to build a 100% preview, and I seem to recall this was not very different on my previous (six-year-old) Mac Pro.
    You are right (of course... :-) ) regarding the difference between FSP and REF: studied closely, there is a significant difference in detail between them. However, only the 100% preview matches the original JPEG. On closer inspection the FSP file is not so good with details, and the REF file is only slightly better; neither is correct, and therefore the 100% view is still needed.
    Here is the FSP screenshot:
    and here the REF screenshot:
    also the 100 % preview in full screen screenshot:
    and finally a composed file with details from original, 100 % REF and FSP:
    As said before, at first sight I can't spot a significant difference between the options, and the full screen (like the HQ preview in the preview panel) lets me spot the vast majority of unsharpness issues, hence my multiple rounds of sorting and, in cases of doubt, the 100% option.
    So while your theory is correct, I'm afraid I (still) doubt the usefulness of all this. If neither the FSP nor the REF (although the latter does show a slightly better result) can match the original, while the 100% view does it well, I don't see an easy route to improvement.
    I agree with the quality of the screenshots Curt provided, but Curt also uses the embedded thumbnail instead of the HQ preview option. Depending on his needs and hardware availability, it would be nice to see new results with the HQ and monitor-sized preview options enabled.
    regards
    Omke

  • Yosemite - Opinion - Difficulties with installation

    The following is my personal opinion after having installed OS X 10.10, aka Yosemite. I hope this is the right community to post this. I also had a little trouble with the installation; perchance this'll help other people deal with similar difficulties.
    Installation froze at the last stage. It said "about one minute" for more than half an hour. I eventually powered the Mac down, and after reboot installation would start from the beginning. Luckily, it didn't freeze again. I added some information from the log files at the end of this post, in case you're interested.
    Folder icons are much brighter. On a white background (the default) they look translucent and lack a sharp silhouette. The system font is no longer Lucida Grande but Helvetica Neue. This looks nice in the Dock, but Lucida Grande definitely looked better in the menu bar. The trashcan no longer looks like a basket but like a cheap rubber cup. It is so white that it sticks out from the Dock quite badly. The hard drive icon looks like rubber too and lacks a shadow beneath it, which makes it much less readable over certain desktop backgrounds. Mounted disk images, on the other hand, look a lot like SSDs, which is okay but pointless.
    They removed the full-screen icon from window titles and redefined the green button to the left as the new fullscreen button. Windows that don't support fullscreen such as Finder windows will still maximize when the green button is clicked. To maximize a window that supports fullscreen, you can press and hold alt while clicking the green button. This may be something that needs some getting used to, but I like the new behaviour so far.
    They removed the redundant Dock submenu from the Apple menu. There is still the menu over the Dock's separator line (the one that separates running applications from files and the trashcan), and of course there's the system settings panel. I like this change, because the Apple menu has looked cluttered up ever since OS 9.
    Mavericks had a bug where you could open the collapsed sidebar when you were trying to resize the window width by grabbing the window's left border. They seem to have fixed this. Or maybe I just forgot how to reproduce it.
    (0) Spotlight now searches both your hard drives and the Internet. I find the latter both confusing and distracting, but luckily you can turn off this behaviour in the System settings.
    (0) Spotlight now opens a window as soon as you click its magnifier icon. The window shows previews of the matches, which may turn out to be a nice feature. The window is non-draggable and has a fixed size, which is unfortunate. Moreover, it closes as soon as you click somewhere else. That's not as bad as it sounds, because you can see the previous results when you click the magnifier again, but it still feels pretty arbitrary.
    Before Mavericks came, you could create an alias of a window's folder by pressing and holding command and option and dragging the icon that's shown in the window title bar. Since Mavericks, this no longer works, and Yosemite does not fix this. Luckily, command-clicking the window title to see the file path still works, as well as dragging the icon to move the folder. (I use the latter a lot to copy the path of a Finder window to an open Terminal.)
    There is now a calculator in the notification centre. Yet another one? The system now comes with three calculators: one in the notification centre, one in the Dashboard, one in the Applications folder. The latter is the only one that can do more than basic arithmetic. As a programmer, I use both the scientific and the programmer's view quite a lot. The version that came with Mavericks definitely looked better, but all the functions seem to still be there, so I don't really care.
    File tags (colours) are still as they were introduced in Mavericks. To switch tags, you still have to remove the tag, then go into the menu again to set the new tag. Tag colours are still fairly small dots, which makes them hard to spot in certain views.
    Finder windows still sometimes forget the settings I apply to them, and the option to disable both the toolbar and the sidebar still can't be made the default; it has to be reassigned to folders again and again if you prefer this setting. This behaviour was introduced quite some time ago, I think it must have been 10.8. I hoped Yosemite would finally fix this, but I was mistaken.
    Scrollbars still haven't got their arrows back. The arrows were removed with 10.8, I think, and I still badly miss them from time to time. Try scrolling in a large document and you'll know what I mean!
    Yosemite still respects my ApplePersistence=NO setting. This is a flag that may be set to NO using the Terminal and the command "defaults" as root, which will turn off the strange and pointless ApplePersistent features they introduced with Mavericks. After setting it to NO, applications will never be automatically reopened at launch and Open/Save/SaveAs work again as they should. I'm very glad ApplePersistence=NO still works with Yosemite!
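    For reference, the tweak described above is written with the `defaults` tool; to the best of my knowledge the global-domain form is the following (run it from an admin account and restart affected applications afterwards):

```shell
# Turn off automatic window/application restoration system-wide.
# -g writes to the global preferences domain.
defaults write -g ApplePersistence -bool no
```

    This is a macOS-only configuration command; deleting the key again with `defaults delete -g ApplePersistence` should restore the default behaviour.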
    The text in the window titles of some older applications, such as Microsoft Word 2008 or TextWrangler 4.5.3, is now drawn in a lighter grey than Finder windows and system applications, which makes these windows look as if they were inactive.
    Appended: Some errors during installation.
    Nothing got logged in system.log during the actual installation. That's probably normal, because system boots from a disk image when installing. So, no error messages here. The following is an extract from system.log when booting after I had done a forced shutdown as the installation progress got stuck:
    Oct 17 15:06:22 localhost launchd[1]: assertion failed: 14A389: libsystem_stats.dylib + 3740 [1DB04436-5974-3F16-86CC-5FF5F390339C]: 0x26
    Oct 17 15:06:22 localhost com.apple.xpc.launchd[1]: assertion failed: 14A389: launchd + 193355 [55B9FF23-B298-321A-B776-CF7676586C04]: 0x2
    Oct 17 15:06:22 localhost com.apple.xpc.launchd[1]: assertion failed: 14A389: launchd + 193355 [55B9FF23-B298-321A-B776-CF7676586C04]: 0x2
    Oct 17 15:06:39 localhost com.apple.xpc.launchd[1] (com.apple.alf): The HideUntilCheckIn property is an architectural performance issue. Please transition away from it.
    Oct 17 15:06:39 localhost com.apple.xpc.launchd[1] (com.apple.autofsd): This service is defined to be constantly running and is inherently inefficient.
    Oct 17 15:06:40 karaboudjan3 opendirectoryd[50]: BUG in libdispatch: 14A389 - 2004 - 0x5
    Oct 17 15:06:40 karaboudjan3 com.apple.xpc.launchd[1] (com.apple.xpc.launchd.domain.system): Service "com.apple.ManagedClient.startup" tried to hijack endpoint "com.apple.ManagedClient.age
    Oct 17 15:06:41 karaboudjan3 kernel[0]: Notice - new kext com.apple.iokit.IOHIDSystem, v2.0 matches prelinked kext but can't determine if executables are the same (no UUIDs).
    Oct 17 15:06:42 karaboudjan3 kernel[0]: Previous shutdown cause: 3
    The install.log shows entries but with an incorrect time. I assume that's so because the disk image is set up for a different time zone. Anyway, here are a few entries:
    Oct 17 05:09:10 localhost opendirectoryd[123]: opendirectoryd (build 382.0) launched - installer mode
    Oct 17 05:09:55 Mac-Pro.home OSInstaller[410]: Repairing file system.
    Oct 17 05:10:32 Mac-Pro.home OSInstaller[410]: The volume MacintoshSSD appears to be OK.
    Oct 17 05:10:32 Mac-Pro.home OSInstaller[410]: Repair completed successfully.
    Oct 17 05:10:32 Mac-Pro.home OSInstaller[410]: Container dmg is missing universal diagnostics. This install will continue attempting to preserve existing diagnostic software...
    Oct 17 05:10:32 Mac-Pro.home storagekitd[411]: Install recovery system operation began
    Oct 17 05:12:46 Mac-Pro.home OSInstaller[410]: Free space is enough, will continue install.
    Oct 17 05:22:40 Mac-Pro.home OSInstaller[410]: postinstall: chmod: Failed to clear ACL on file /Volumes/MacintoshSSD/System/Library/User Template/Non_localized/Downloads/About Downloads.lp
    Oct 17 05:22:41 Mac-Pro.home OSInstaller[410]: postinstall: 2014-10-17 05:22:41.014 update_automator_cache[1673:74190] Error finding Automator.app by bundle identifier. Not calling LS for
    Oct 17 05:22:41 Mac-Pro.home OSInstaller[410]: postinstall: 2014-10-17 05:22 update_automator_cache[1673] (FSEvents.framework) FSEventStreamStart: ERROR: FSEvents_connect() => Unknown serv
    Oct 17 05:24:06 Mac-Pro.home OSInstaller[410]: PackageKit: 679.3s elapsed install time
    Oct 17 05:24:06 Mac-Pro.home OSInstaller[410]: Starting post-upgrade migration /Volumes/MacintoshSSD/Recovered Items -> /Volumes/MacintoshSSD
    Oct 17 05:24:14 Mac-Pro.home OSInstaller[410]: Starting Migration With Request :
    Oct 17 05:24:14 Mac-Pro.home OSInstaller[410]: UID Translation: Did not find existing user record on target system with name "pamberg"
    Oct 17 05:24:14 Mac-Pro.home OSInstaller[410]: UID Translation: Did not find existing user record on target system with name "admin"
    Oct 17 05:24:14 Mac-Pro.home OSInstaller[410]: Attempt to use XPC with a MachService that has HideUntilCheckIn set. This will result in unpredictable behavior: com.apple.backupd.status.xpc
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Process:               backupd [2592]
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Path:                  /System/Library/CoreServices/backupd.bundle/Contents/Resources/backupd
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Identifier:            backupd
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Version:               ???
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Code Type:             X86-64 (Native)
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Parent Process:        launchd [1]
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Responsible:           backupd [2592]
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: User ID:               0
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Date/Time:             2014-10-17 05:24:14.593 -0700
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: OS Version:            Mac OS X 10.10 (14A389)
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Report Version:        11
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Anonymous UUID:        65A99E21-C61E-462A-88D2-0BB4CE28E1EF
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Time Awake Since Boot: 920 seconds
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Crashed Thread:        0
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Exception Type:        EXC_BREAKPOINT (SIGTRAP)
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Exception Codes:       0x0000000000000002, 0x0000000000000000
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Application Specific Information:
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: dyld: launch, loading dependent libraries
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Dyld Error Message:
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]:   Library not loaded: /System/Library/PrivateFrameworks/TimeMachine.framework/Versions/A/TimeMachine
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]:   Referenced from: /System/Library/CoreServices/backupd.bundle/Contents/Resources/backupd
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]:   Reason: image not found
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: 
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]: Binary Images:
    Oct 17 05:24:15 Mac-Pro.home ReportCrash[2595]:     0x7fff60097000 -     0x7fff600cd837  dyld (353.2.1) <4696A982-1500-34EC-9777-1EF7A03E2659> /usr/lib/dyld
    28 megs of error messages follow...

    Think you wanted your reply to be to pjonesCET1
    But, I will give an opinion on your post --
    You want to use Microsoft Office because of the robust nature of its components. Apple's iWork, and before that AppleWorks, is at the level of Microsoft Works: a stripped-down piece of software that meets the needs of many people. Apple and Microsoft agreed to have a version of Microsoft Office that could run on Apple devices.
    The problems between versions of Mavericks and Yosemite may have to do with Apple releasing Yosemite before it was fully tested with Microsoft Office, or the bug may have been there a while and Yosemite just brought it to the surface.
    In general, when a new version of software comes out, it's not a complete rewrite. Commented-out code that was replaced with new code is removed, along with markers for the new code and the history of what the problem was, giving clean code. If the language has a new structure, the code may be passed through a converter that, depending on the detail, reformats the code into the new statement structure.
    Sometimes bugs are added because a certified programmer just can't read existing code, even their own, or ignores standards and uses a new instruction that is incompatible with existing code.
    Microsoft does issue patches for existing software, so they may do that for your problem, but some problems can take a while to pin down.

  • Full screen preview and slideshow are blurry / fuzzy

    Hi folks-
    I just upgraded from CS2 to CS4, in part to improve efficiency of selecting photos - I wanted the zoom and loupe features. Unfortunately a very basic function has been compromised and I'm hoping there's a solution: In both full-screen preview and slideshow the full-screen image is notably blurry. This is a problem for two reasons: 1. I can't judge pictures accurately by just flipping through them in full-screen mode (without zooming to 100% to see if they are well-focused). 2. I can't use Slideshow for, well, slideshows. I've had to resort to keeping another app (like Windows Picture Viewer) open just to be able to quickly flip through and evaluate images full-screen or to show slideshows.
    I've tried setting all the options related to image quality (they mostly have to do with caching thumbnails and speeding 100% viewing); I've tried purging the cache files. Nothing helps.
    Any ideas?  Thanks!

    It seems to me that there is something wrong with it.
    I am trying Bridge CS4 as a viewer for my JPEG files.
    I also noticed that the previews were fuzzy. I activated the high-quality previews as well as the option to generate the previews at screen size.
    Here's what I noticed. I went to check out a specific photo that is quite sharp, with a number of straight lines, angles, and details where it's very easy to detect a lack of sharpness. That photo immediately looked fuzzy to me, which didn't square with my memory of it. I quickly opened it in other viewers, which displayed it as intended with all the expected sharpness.
    So, I went through the settings to see where the problem could be, and I immediately thought of checking the box to get software previews instead of hardware. I restarted the app, purged the cache, and tried again. There my photo had recovered its sharpness, or at least most of it. I thought I had found the culprit, but not quite.
    I went to browse other photos and found one in particular with a lot of fine detail (hair), and that photo also didn't display all the sharpness that it should have (I was still using software mode). The difference was very slight, and confirming it required A/B comparison between Bridge and PS, but there was a difference.
    Now I tried the opposite. I went back to hardware rendering mode, restarted, purged the cache, tried again. That second photo looked alright in preview, but the previous photo looked fuzzy again.
    So, it doesn't work the same way for each photo...
    What's the main difference between the first photo, obviously fuzzy in hardware mode, and the second one, alright in hardware mode but slightly fuzzier in software? The ratio. The first one is a vertical photo, the second one is horizontal (close to square).
    I quickly checked other vertical photos and they showed the same behavior...
    Now staying in hardware mode, I noticed something else. When I preview the vertical photo and click on it, or double click, it quickly jumps, switching between 2 versions of the photo: the one displayed originally (fuzzy) and another one, with a very slightly different size, which is sharp. During that manipulation it also briefly displays "100%" at the top of the screen. It doesn't stick when you click and immediately switches back, so it looks like a glitch.
    Now in software mode, I tried the same, this time you can click once on the photo (which is originally displayed *almost* sharp). When you click once, it displays "100%" but also displays a very slightly smaller version that is clearly fuzzy. Clicking again goes back to the other, less fuzzy one, but doesn't display "100%" this time.
    Doing the same test on horizontal photos shows no change that I can see.
    So, it seems to me that in preview mode, Bridge has trouble displaying some photos (vertical ones?) at their actual resolution (100%), and since there is a slight resizing going on, it brings fuzziness. This doesn't explain the very slight fuzziness seen on horizontal photos in software mode.
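    The orientation theory is easy to sanity-check with arithmetic: fitted to a landscape display, a horizontal photo can land close to its native size, while a vertical photo of the same pixel count is forced through a much larger fractional downscale. A small illustration (pure Python; the 1920x1080 display and the photo sizes are assumptions):

```python
def preview_scale(w, h, disp_w=1920, disp_h=1080):
    """Aspect-preserving scale when fitting a w x h photo into the display."""
    return min(disp_w / w, disp_h / h)

horizontal = preview_scale(1800, 1200)  # near-square landscape photo
vertical = preview_scale(1200, 1800)    # same pixels, rotated

print(round(horizontal, 3))  # 0.9 -- close to 1:1, mild resampling
print(round(vertical, 3))    # 0.6 -- constrained by height, heavy rescale
```

    Either orientation gets resampled, but a vertical frame is always limited by the display's shorter axis, so its rescale is more aggressive, which would make any interpolation weakness more visible.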
    Now I'll mention, for those still awake, that I saw no fuzziness on the same vertical photos when I go to Review Mode with *hardware* rendering. Photos there are sharp. Now if I go to the review mode with software rendering enabled, vertical photos are not 100% sharp (akin to the small difference I mentioned earlier while in software mode), while they are sharper in preview.
    To sum up, it seems that the only configuration that gives me consistently 100% sharp (=as they were meant to be after post-processing) photos no matter what their orientation, is when I stay in hardware rendering mode and use the Review Mode, instead of preview.
    This all really smells like a bug.
    For info, I am using Windows 7 and an ATI HD4850 card, and this is only about processed JPEG files, not RAW files.
    Perhaps the preview is really meant as a quick preview and not a tool to view final photos at 100% accuracy...

Maybe you are looking for