PS display of duotones: is it accurate?

PS CS6. I'm setting up many B&W photos as PMS cool gray 9 duotones using PS supplied Cool Gray 9 bl 1 curves. When I convert from grayscale there is a noticeable decrease in contrast, mostly light tones darkening. Am I seeing what I'm going to get or is the PS duotone display not quite accurate?
My monitor is calibrated and displays color accurately.
Thanks

I have also found that personal taste and image content are better served when the predefined curves provided in Photoshop are modified.   
For what it's worth, this is a copy of a Tech Sheet I wrote for my students some time back.
[Figure: 1. B&W rendition. 2. Final duotone. The gray-scale ring is provided as a hint to each channel's tone curve.]
Duotones for Press
Duotone is a technique that has been applied by letterpress printers and offset lithographers for more than a century and still flourishes today. It is made on press by printing – one over the other – two halftone images of the same original art. So much for absolutes: other characteristics associated with the duotone are present “usually.”
* Usually the original art is monochromatic (a black & white photograph, for example)
* Usually both halftones have the same screen fineness (150 lpi, for example) but that is not a must
* Usually a conventional dot screen is used, but at times a special screen replaces one or both halftone screens to produce a mezzotint or similar pattern.
* Usually the tonal scale of each halftone is different, but that depends on the effect desired.
* Usually each halftone in the pair is printed in a different color, but that, too, depends on the effect desired.
* Usually at least one of the ink colors is black, but it is not mandatory.
It becomes readily apparent that there is no such thing as the duotone but rather an almost infinite variety of duotone renderings. They are more budget-friendly than four-color printing. In addition, because a single layer of ink has limited density, adding a second ink layer often produces a more aesthetically pleasing result than a single-color reproduction. When reproducing black & white photographs, this enables the lithographer – using either two blacks or a black and a gray – to more closely approximate the density range and contrast of the original print.
The traditional graphic arts camera and halftone screen used for creating duotones are rarely seen these days. They have been replaced by the scanner, Photoshop, and the imagesetter or platesetter.
Photoshop, working with RGB pixels, attempts to approximate the duotone effect and, with a few additional commands, prepare a file with channels containing the tonal range of what will become each printing plate’s image. Moreover, Photoshop offers pre-made duotone options that may be applied to your images. Best of all, Photoshop provides the tools for you to make your own duotones from scratch.    
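As a rough sketch of the idea, a duotone can be modeled as one grayscale channel passed through two transfer curves, one per ink. The curves below are illustrative placeholders of my own, not Photoshop's supplied presets:

```python
import numpy as np

# A duotone starts from one grayscale channel and runs it through two
# transfer curves, one per printing ink. These curves are made-up
# placeholders, not Photoshop's presets.
gray = np.linspace(0.0, 1.0, 11)    # 0 = paper white, 1 = solid ink

def black_curve(g):
    # black plate: slightly deepened, carries the shadows
    return g ** 1.2

def gray_ink_curve(g):
    # second ink (e.g. a PMS gray): lifted midtones, capped coverage
    return np.sqrt(g) * 0.6

black_plate = black_curve(gray)     # tonal range for printing plate 1
gray_plate = gray_ink_curve(gray)   # tonal range for printing plate 2
```

Bending either curve (as the Duotone Options dialog lets you do) changes how prominent that ink is in each tonal region.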
Ink Color Designation: The Pantone Matching System
It is customary when working with duotones to specify ink colors in terms of their Pantone Matching System (PMS) numbers. PMS printed swatch books are readily available. In addition, you may see and designate PMS colors within Photoshop. To choose a color, double click on the foreground color in the Tools panel and from the menu presented, click on Color Libraries. In the Book field choose Pantone Solid Coated.
If you know the PMS number for the color you intend to use, rapidly type that number even though there is no field to enter it in. The color, surrounded by a black border, will be highlighted in the strip of colors at the left. Click on Picker and you will be returned to your normal Color Picker array of colors. Click OK and the designated PMS color will appear in the Foreground Color box.
The system also works in reverse. With your chosen color appearing as the Foreground Color, double click on the swatch and, when the Color Picker appears, click on Color Libraries. With the Book field set to Pantone Solid Coated, the Pantone swatch will be highlighted along with its PMS number.
The following covers:
     How to apply Photoshop pre-made duotone versions.
     How to create your own duotone.
     How to prepare the Photoshop file for commercial printing.
How to apply Photoshop pre-made duotone versions
  1. Convert the image to Grayscale via Image > Mode > Grayscale. You will notice that Duotone is grayed out – not accessible. Once the image is in Grayscale, it will become available.
  2. Choose Duotone via Image > Mode > Duotone.
  3. In the Type field, choose Duotone. In the Preset field at the top of Duotone Options, click the double arrow and from its drop down menu choose a PMS color from those offered. You may find more than one option for the same PMS designation. That indicates you have a choice in how prominent you want the color to be in the duotone. Go to the top of the list and view all the versions of PMS 144, for example, by clicking on each in turn.
  4. You may also alter the tonal curve of a color by clicking on the curve symbol adjacent to one of the colors and bending the curve to change the prominence of the color in the image.
  5. Click OK. You will notice that the duotone appears as a single channel in the Channels panel.
  Note:  The duotone, placed as a layer on an RGB image, is converted to RGB.
How to create your own duotone.
Often, the PMS color or colors you would like to use for a duotone do not appear in the premade duotone list of the Preset field. If so:
1. Choose one of the duotone styles from the list, as described above.
2. Click on the color swatch adjacent to Ink 1 or Ink 2.
3. When the Color Libraries menu appears, key in the PMS number of the color you want.
4. You may also alter the tonal curve of a color by clicking on the curve symbol adjacent to one of the colors and bending the curve to change the prominence of the color in the image.
5. Click OK. You will notice that the duotone appears as a single channel in the Channels panel.
How to prepare the Photoshop file for commercial printing.
The Channels panel currently shows the duotone in a single channel. The next steps, made by you or your lithographer, will separate the colors. If you prefer to have the lithographer do the job, save the file in Photoshop (psd) format.
Otherwise:
1. Image > Mode > Multichannel (Note the two channels in the Channels panel.)
2. File > Save as…  In the Format field choose Photoshop DCS 2.0
A reminder: file resolution at the printed size of the image should be one-and-a-half to two times the intended halftone screen fineness.
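The arithmetic behind this rule of thumb can be sketched as follows (the function name and default factor are mine, not Photoshop terms):

```python
def required_ppi(screen_lpi, quality_factor=2.0):
    """Image resolution (ppi) at final printed size for a given halftone
    screen fineness (lpi). The quality factor is conventionally 1.5-2."""
    return screen_lpi * quality_factor

# A 150 lpi screen calls for roughly 225-300 ppi at printed size.
low = required_ppi(150, 1.5)    # 225.0
high = required_ppi(150, 2.0)   # 300.0
```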

Similar Messages

  • Why are Thunderbolt Display camera colors not accurate?

    The colors while using the camera are very purple. Using FaceTime, when I switch to the MBP camera, the colors are fine. This is a new display and, because I did not discover the issue until after 15 days, Apple will not exchange it. Any suggestions will be appreciated.

    I had a similar issue with an iSight G5 about a year ago. The colors began to go funky on the screen and I ran every test I could think of. The situation gradually worsened and I finally put the iMac to rest in a back room in my basement. The hard drive still works and the information is accessible on it and I have retrieved just about everything I can think of from it. The computer didn't die, but the monitor did. An iMac without a working monitor is kind of useless.
    I'm guessing your iMac is fairly old - by today's standards anyway - and even if the monitor issue can be resolved, it may not be worth the cost. You would have to find that out on your own at an Apple Store or a Mac repair shop.
    Maybe it can be repaired .... and this is just my opinion .... but I would find a nice recycling place to lay the Mac to rest and buy a new one.

  • CS4 NOT capable of sharp displays at all zoom levels

    I must have been asleep, until now, and missed the significance and importance of what follows.
    In post #11 here:
    http://forums.adobe.com/thread/375478?tstart=30
    on 19 March 2009 Chris Cox (Adobe Photoshop Engineer - his title on the old forums) said this, in a discussion regarding sharpness in CS4:
    "You can't have perfectly sharp images at all zoom levels." Unfortunately, my experience with CS4 since its release late last year has repeatedly confirmed the correctness of this statement.
    What makes this statement so disturbing is that it contradicts an overwhelming amount of the pre- and post-release promotional advertising of CS4 by Adobe, to the effect that the OpenGL features of CS4 enable it to display sharp images at all zoom levels and magnifications. What is surprising is that this assertion has been picked up and regurgitated in commentary by other, sometimes highly experienced, Ps users (some unconnected with, but also some directly connected with, Adobe). I relied upon these representations when making my decision to purchase the upgrade from CS3 to CS4. In fact, they were my principal reason for upgrading. Without them, I would not have upgraded. Set out in numbered paragraphs 1 to 6 below is a small selection only of this material.  
    1. Watch the video "Photoshop CS4: Buy or Die" by Deke McClelland (inducted into the Photoshop Hall of Fame, according to his bio) on the new features of CS4 in a pre-release commentary to be found here:
    http://fyi.oreilly.com/2008/09/new-dekepod-deke-mcclelland-on.html
    Notice what he says about zooming with Open GL: "every zoom level is a bicubically rendered thing of beauty". That, when viewed with the zooming demonstrated, can only be meant to convey that your image will be "sharp" at all zoom levels. I'm sure he believes it too - Deke is someone who is noted for his outspoken criticism of Photoshop when he believes it to be deserved. It would seem that he must not have experimented and tested to the extent that others posting in this forum have done so.
    2. Here's another Adobe TV video from Deke McClelland:
    http://tv.adobe.com/#vi+f1584v1021
    In this video Deke discusses the "super smooth" and "very smooth" zooming of CS4 at all zoom levels achieved through the use of OpenGL. From the context of his comments about zooming to odd zoom levels like 33.33% and 52.37%, it is beyond doubt that Deke's use of the word "smooth" is intended to convey "sharp". At the conclusion of his discussion on this topic he says that, as a result of CS4's "smooth and accurate" as distinct from "choppy" (quoted words are his) rendering of images at odd zoom levels (example given in this instance was 46.67%), "I can actually soft proof sharpening as it will render for my output device".
    3. In an article by Philip Andrews at photoshopsupport.com entitled 'What's New In Adobe Photoshop CS4 - Photoshop 11 - An overview of all the new features in Adobe Photoshop CS4',
    see: http://www.photoshopsupport.com/photoshop-cs4/what-is-new-in-photoshop-cs4.html
    under the heading 'GPU powered display', this text appears :
    "Smooth Accurate Pan and Zoom functions – Unlike previous versions where certain magnification values produced less than optimal previews on screen, CS4 always presents your image crisply and accurately. Yes, this is irrespective of zoom and rotation settings and available right up to pixel level (3200%)." Now, it would be a brave soul indeed who might try to argue that "crisply and accurately" means anything other than "sharply", and certainly, not even by the wildest stretch of the imagination, could it be taken to mean "slightly blurry but smooth" - to use the further words of Chris Cox also contained in his post #11 mentioned in the initial link at the beginning of this post.
    4. PhotoshopCAFE has several videos on the new features of CS4. One by Chris Smith here:
    http://www.photoshopcafe.com/cs4/vid/CS4Video.htm
    is entitled 'GPU Viewing Options". In it, Chris says, whilst demonstrating zooming an image of a guitar: "as I zoom out or as I zoom in, notice that it looks sharp at any resolution. It used to be in Photoshop we had to be at 25, 50 , 75 (he's wrong about 75) % to get the nice sharp preview but now it shows in every magnification".
    5. Here's another statement about the sharpness of CS4 at odd zoom levels like 33.33%, but inferentially at all zoom levels. It occurs in an Adobe TV video (under the heading 'GPU Accelerated Features', starting at 2 min 30 secs into the video) and is made by no less than Bryan O'Neil Hughes, Product Manager on the Photoshop team, found here:
    http://tv.adobe.com/#vi+f1556v1686
    After demonstrating zooming in and out of a bunch of documents on a desk, commenting about the type in the documents which is readily visible, he says : "everything is nice and clean and sharp".
    6. Finally, consider the Ps CS4 pdf Help file itself (both the original released with 11.0 and the revised edition dated 30 March 2009 following upon the release of the 11.0.1 update). Under the heading 'Smoother panning and zooming' on page 5, it has this to say: "Gracefully navigate to any area of an image with smoother panning and zooming. Maintain clarity as you zoom to individual pixels, and easily edit at the highest magnification with the new Pixel Grid." The use of the word "clarity" can only mean "sharpness" in this context. Additionally, the link towards the top of page 28 of the Help file (topic of Rotate View Tool) takes you to yet another video by Deke McClelland. Remember, this is Adobe itself telling you to watch this video. 5 minutes and 40 seconds into the video he says: "Every single zoom level is fluid and smooth, meaning that Photoshop displays all pixels properly in all views which ensures more accurate still, video and 3D images as well as better painting, text and shapes." Not much doubt that he is here talking about sharpness.
    So, as you may have concluded, I'm pretty upset about this situation. I have participated in another forum (which raised the lack of sharp rendering by CS4 on several occasions) trying to work with Adobe to overcome what I initially thought may have been only a problem with my aging (but nevertheless, just-complying) system or outdated drivers. But that exercise did not result in any sharpness issue fix, nor was one incorporated in the 11.0.1 update to CS4. And in this forum, I now read that quite a few, perhaps even many, others, with systems whose specifications not only match but well and truly exceed the minimum system requirements for OpenGL compliance with CS4, also continue to experience sharpness problems. It's no surprise, of course, given the admission we now have from Chris Cox. It seems that CS4 is incapable of producing the sharp displays at all zoom levels it was alleged to achieve. Furthermore, it is now abundantly clear that, with respect to the issue of sharpness, it is irrelevant whether or not your system meets the advertised minimum OpenGL specifications required for CS4, because the OpenGL features of CS4 simply cannot produce the goods. What makes this state of affairs even more galling is that, unlike CS3 and earlier releases of Photoshop, CS4 with OpenGL activated does not even always produce sharp displays at 12.5, 25, and 50% magnifications (as one example only, see posts #4 and #13 in the initial link at the beginning of this post). It is no answer to say, and it is ridiculous to suggest (as some have done in this forum), that one should turn off OpenGL if one wishes to emulate the sharp display of images formerly available.

    Thanks, Andrew, for bringing this up.  I have seen comments and questions in different forums from several CS4 users who have had doubts about the new OpenGL display functionality and how it affects apparent sharpness at different zoom levels.  I think part of the interest/doubt has been created by the over-the-top hype that has been associated with the feature as you documented very well.
    I have been curious about it myself and honestly I didn't notice it at first but then as I read people's comments I looked a little closer and there is indeed a difference at different zoom levels.  After studying the situation a bit, here are some preliminary conclusions (and I look forward to comments and corrections):
    The "old", non-OpenGL way of display was using nearest-neighbor interpolation.
    I am using observation to come to this conclusion, using comparison of images down-sampled with nearest-neighbor and comparing them to what I see in PS with OpenGL turned off.  They look similar, if not the same.
    The "new", OpenGL way of display is using bilinear interpolation.
    I am using observation as well as some inference: the PS OpenGL preferences have an option to "force" bilinear interpolation because some graphics cards need to be told to force the use of shaders to perform the required interpolation. This implies that the interpolation is bilinear.
    Nothing is truly "accurate" at less than 100%, regardless of the interpolation used.
    Thomas Knoll, Jeff Schewe, and others have been telling us that for a long time, particularly as a reason for not showing sharpening at less than 100% in ACR (We still want it though ).  It is just the nature of the beast of re-sampling an image from discrete pixels to discrete pixels.
    The "rule of thumb" commonly used for the "old", non-OpenGL display method to use 25%, 50%, etc. for "accurate" display was not really accurate.
    Those zoom percentages just turned out to be less bad than some of the other percentages and provided a way to achieve a sort of standard for comparing things. Example: if my output sharpening looks like "this" at 50%, then it will look close to "that" in the actual print.
    The "new", OpenGL interpolation is certainly different and arguably better than the old interpolation method.
    This is mainly because the more sophisticated interpolation prevents drop-outs that occurred from the old nearest-neighbor approach (see my grid samples below).  With nearest-neighbor, certain details that fall into "bad" areas of the interpolated image will be eliminated.  With bilinear, those details will still be visible but with less sharpness than other details.  Accuracy with both the nearest-neighbor and bilinear interpolations will vary with zoom percentage and where the detail falls within the image.
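The dropout behavior is easy to reproduce outside Photoshop. Here is a one-dimensional numpy sketch; the line spacing and sample positions are chosen for illustration, and Photoshop's actual resampling grid may differ:

```python
import numpy as np

# A 1-D "scanline": 1-pixel-wide dark lines on a white field,
# one line every 3 pixels.
src = np.full(30, 255.0)
src[1::3] = 0.0                                # lines at x = 1, 4, 7, ...

# Resample to 20 samples (roughly a 66.67% zoom).
x_dst = np.arange(20) * (len(src) - 1) / 19.0  # destination sample positions

nearest = src[np.round(x_dst).astype(int)]     # nearest-neighbor
bilinear = np.interp(x_dst, np.arange(len(src)), src)  # linear interpolation

# With these positions, nearest-neighbor happens to miss every line
# (pure white output), while linear interpolation keeps every line
# visible, just rendered as gray rather than full black.
```

Which lines drop out under nearest-neighbor depends entirely on how the lines phase against the destination grid, which is why some zoom percentages look fine and others lose detail.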
    Since the OpenGL interpolation is different, users may need to develop new "rules of thumb" for zoom percentages they prefer when making certain judgements about an image (sharpening, for example).
    Note that anything below 100% is still not "accurate", just as it was not "accurate" before.
    As Andrew pointed out, the hype around the new OpenGL bilinear interpolation went a little overboard in a few cases and has probably led to some incorrect expectations from users.
    The reason that some users seem to notice the sharpness differences with different zooms using OpenGL and some do not (or are not bothered by it) I believe is related to the different ways that users are accustomed to using Photoshop and the resolution/size of their monitors.
    Those people who regularly work with images with fine details (pine tree needles, for example) and/or fine/extreme levels of sharpening are going to see the differences more than people who don't. To some extent, I see this as similar to people who battle with moire: they are going to have this problem more frequently if they regularly shoot screen doors and people in fine-lined shirts. Resolution of the monitor used may also be a factor. The size of the monitor in itself is not a factor directly, but it may influence how the user uses the zoom, and that may in turn have an impact on whether they notice the difference in sharpness or not. CRT vs LCD may also play a role in noticeability.
    The notion that the new OpenGL/bilinear interpolation is sharp except at integer zoom percentages is incorrect.
    I mention this because I have seen at least one thread implying this, and an Adobe employee participated who seemed to back it up. I do not believe this is correct. There are some integer zoom percentages that will appear less sharp than others. It doesn't have anything to do with integers - it has to do with the interaction of the interpolation, the size of the detail, and how that detail falls into the new, interpolated pixel grid.
    Overall conclusion:
    The bilinear interpolation used in the new OpenGL display is better than the old, non-OpenGL nearest-neighbor method, but it is not perfect. I suspect, actually, that there is no "perfect" way of "accurately" producing discrete pixels at less than 100%. It is just a matter of using more sophisticated interpolation techniques as computer processing power allows and adopting higher-resolution displays as that technology allows. When I think about it, that appears to be just what Adobe is doing.
    Some sample comparisons:
    I am attaching some sample comparisons of nearest-neighbor and bilinear interpolation.  One is of a simple grid made up of 1 pixel wide lines.  The other is of an image of a squirrel.  You might find them interesting.  In particular, check out the following:
    Make sure you are viewing the JPEGs at 100%, otherwise you are applying interpolation onto interpolation.
    Notice how in the grid, a 50% down-sample using nearest-neighbor produces no grid at all!
    Notice how the 66.67% drops out some lines altogether in the nearest-neighbor version and these same lines appear less sharp than others in the bilinear version.
    Notice how nearest-neighbor favors sharp edges.  It isn't accurate but it's sharp.
    On the squirrel image, note how the image is generally more consistent between zooms for the bilinear versions. There are differences in sharpness at different zoom percentages for bilinear, though. I just didn't include enough samples to show that clearly here. You can see this yourself by comparing results of zooms a few percentages apart.
    Well, I hope that was somewhat helpful.  Comments and corrections are welcomed.

  • iPad 3 vs calibrated MacBook display

    Dear all, please allow me to ask the following.
    I have a rather old MacBook unibody 13" dating back to the end of 2008, which I use to process my photos. Its screen is calibrated using a Spyder 3 calibrator.
    When I process the images and transfer them to my iPad 3, however, the images look more "alive" due to the Retina display, but the color also shifts towards a cooler tone.
    So which display is reproducing the most accurate color? From a recent online article I learned that the iPad 3 reproduces correct sRGB colors, but even so, the colors differ from my calibrated MacBook screen.
    So which device is showing the correct colors?

    Well, the iPad is supposed to be correctly calibrated to the sRGB color space right out of the box and to correctly reproduce color tones, given that images are saved with the sRGB profile.
    On the other hand, the laptop is not, but it is calibrated using the Spyder; hence, theoretically, it is also calibrated to the sRGB color space.
    Since they are both using the same profile, how come they have such a difference in color?

  • iPod battery display

    First off, I have to say, this site has a horrible layout. (I know whoever's reading this probably can't do anything about it, but whatever.)
    I just updated my iPod software. I used to have a program that changed the iPod battery display to a numeric value (it was more accurate). I know that it's a 3rd-party application, but it worked fine before the update.
    After the update, it said the numeric value was still on, but all it showed was the bar. I downloaded 2 other programs that did the same thing (all independent of each other), and they all told me the same thing.
    Apparently this option was turned off in the update? If so, would you please either include this option in the next update or send me something to fix it? I really like the numeric value. Thank you in advance.
    -Aramil

    "it's been nearly 2 weeks, and I have yet to receive even the slightest reply"
    You are completely misunderstanding the concept of these Discussion forums. They are provided by Apple so that users of their products can ask questions and receive advice from other users to help out with technical problems. No one from Apple responds to these posts.
    You were using some third-party (non-authorized) program to hack the iPod, so it's a little unfair to blame Apple if it no longer works, wouldn't you say? I don't know which program you were using, but very often when Apple updates the iPod software, these 3rd-party programs need to be rewritten accordingly. Maybe you should ask the writer of this program what steps he/she has taken to ensure that the program works with the current iPod software?
    I would take issue with the statement that the numeric display makes the meter more accurate. With the bar display your iPod is "guessing" about battery state and demand to draw the bar, so it will also "guess" which number to display. So even if it displays alleged battery capacity to six decimal places, while that looks all scientific and digital and more accurate, it is working from the same info as the silly black bar. It's just a cosmetic change.
    However, if you want that numeric display, this is the place to ask for it.
    http://www.apple.com/feedback/ipod.html

  • MacBook Pro 15 inch 2011- Blue tint in Windows 7 using Bootcamp 4.0

    I have recently installed Windows 7 using Bootcamp on my MacBook Pro 15 inch 2011. The display under Mac OS X is accurate, with vibrant colors, using the default Color LCD profile. In Windows 7, everything is tinted blue and all colors are diminished because of this. I have tried importing my Color LCD ICC profile from Mac OS X Lion and this has helped to some degree. This alleviates the blue tint but gives Windows a warmer, browner tint. Unfortunately this makes the display too dim and crushes the blacks. Additionally, some games revert to the default Windows calibration with the blue tint. Games look bad either way, as the Color LCD profile in Windows makes the game too dark and the default calibration makes everything blue. Has anyone been able to correct this in Bootcamp? Thanks

    msconfig / chkdsk / Windows 7 DVD system repair or rollback restore point
    Ccleaner 3.09 - remove temp files and caches / also run Registry section
    Start: Run %TEMP% (select all) and remove all the contents in there
    Heat, fans, poor Apple drivers, I would not blame ATI but I would upgrade/install those.
    And yes BC 4.0 probably was not tested fully.

  • Color Management and Preview

    Hi All,
    Wonder if anyone can help me? I wasn't sure where to post this, so hope it is in the correct forum.....
    I have got a MacPro with the NVIDIA GeForce 8800GT graphics card, together with two displays - an EIZO ColorEdge CG222W, which I bought recently, and a Dell 2005FPW that I have had for a while.
    Both monitors are calibrated, but today I noticed something rather strange. I was working on an image for an exhibition in Photoshop CS3. Whilst in Photoshop I dragged from one display to the other, and everything was fine - the image had the same accurate colors on both displays.
    I went ahead and exported the image to a JPEG in preparation for sending to my online printer, and included the ICC profile in the image. The color profile attached to the image is Adobe RGB. However, when I open the image in Preview, the colors are displayed much darker (and less accurately) on the EIZO monitor, whereas the Dell is still accurate. But if the image is opened in ColorSync, then I see the same colors displayed on both monitors.
    It looks like Preview and the EIZO are having some weird issues with one another..... can anyone think why this would happen? I'm wondering if it is as simple as Preview not being able to read ICC profiles, or if there is something else going on.....
    Thanks
    Simon

    Thanks for the answer - currently the Dell screen has the menu bar on it and is the primary. I had also wondered if it had something to do with primary/secondary, and I did try switching them so that the EIZO became the primary, but it didn't have an effect.
    Note - the EIZO does have hardware calibration, whereas the Dell only has software calibration.....

  • Approval Preview Graphic not syncing with Approval Text in SRM 4.0

    We are currently using SRM 4.0 in an extended classic setup and have the following issue.
    When creating a shopping cart/PO in SRM and looking at the approvals pending/processed or waiting, the approval preview screen is different when choosing graphical display as opposed to text display. The graphical display appears to be the accurate source, showing the current status of the work items, but the text display will show the work item as still awaiting approval by the individual. The "In process since" and "processed on" descriptions on the text display appear correct, but the status and the approver are inaccurate.
    Has anyone seen this issue before and can lend a hand as to what could possibly resolve it?

    Hi Robert,
    I faced the same in SRM 4.0. As users are usually not looking at the text display,
    we did not follow up further on that issue. Usually the graphical and text displays should be
    the same. It depends on the saved cookies and stored internet pages corresponding
    to the browser version used.
    I cannot recap SAP's answer on that issue - did you also open an OSS?
    Thanks,
    Claudia

  • Quick preview looks better than processed raw image...??

    Hey all, probably a bit of a "newb" question here... so forgive me, and thank you...
    Using a D7000, and oftentimes when I shoot, the preview image on the camera looks BRIGHT, VIVID and ROBUST... after import, however, when reviewing my shots, JUST as I arrow over to the next shot, many of the preview images tend to look better than the processed image that Aperture displays once it's done spinning its wheels.
    Perhaps I've messed up a Raw Fine Tuning setting?
    When I click on Quick Preview and browse through an import, the pictures truly look nicer to me than when Aperture processes them.
    Without question, the display on my Acer monitor is a far cry from the miniature compressed image on the back of my Nikon; however, the more I shoot, the more I realize a disconnect between what I think I should see and what I'm ultimately seeing in Aperture.
    Are there specific settings to fine-tune the import of raw D7000 shots?
    Thanks much.. gk

    I take it you're shooting and processing RAW images?
    It's worth remembering that if you have a picture style selected (i.e. vivid etc), your camera might be applying extra contrast and saturation etc to the image you see on the back of the camera. Camera manufacturers do this so that we can give our pictures some extra punch and colour automatically.
    I'd also be wary of comparing what you see on your camera to what you see on your monitor. Unless both are calibrated, you shouldn't trust either of them 100%. The best example of monitor calibration is going to look at TVs in an electronics store. You'll probably notice that in a wall of TVs, some pictures will be darker, some lighter, some more vivid, some more saturated. Using a calibration tool adjusts the picture your screen displays so that it is 'accurate'.
    It's a bit like having a room full of scales and adjusting them so that they all read 1 kilogram when a 1 kilogram weight is placed on each of them. Calibrating monitors will mean that when you display an image on it, it will always look the same rather than getting the some light/some dark problem you saw in the TV store.
    It's a tricky subject to explain (don't worry if it doesn't make sense), but you might like to have look around YouTube for videos on the subject.

  • Color space export issues...

    Well, this has been going on for a while. Sometimes it doesn't happen, but most of the time when I export my images in sRGB, the view once uploaded is much degraded. I proof in sRGB 2.1 and embed the profile on export. The same thing happens when using the BorderFX export plugin. It also seems to happen more after exporting to PS for editing and then exporting the TIFFs to JPEG later, but it happens with normal JPEGs/RAWs as well. Thanks a lot for all the help, and hopefully I can get this solved.
    Aaron

    I don't mean "Quick Look" in Leopard. I mean Quick Preview in Aperture (a little button in the lower right corner that turns yellow when selected). I believe that, although not as seductive as the native screen display, Quick Preview is more accurate.
    Here is what I mean by accurate:
    • Quick Preview changes the display significantly and matches closely prints made from Aperture when printed on the paper for which I have selected the proof profile (in Aperture under View / Proofing Profile).
    • Quick Preview displays an image that is nearly identical to that printed from Photoshop, InDesign, and Acrobat when the same paper and profile are used.
    • Prints made from Aperture are nearly identical to those made from PS, ID, and Acrobat using the same parameters.
    • Quick Preview displays an image that is nearly identical to that displayed in Photoshop, InDesign, and Acrobat when the same print profile is selected for soft proofing in these applications.
    There is a wild card, though: these days I print mostly using the perceptual rendering intent. Aperture does not appear to provide any direct control over rendering intent or black point compensation for soft proofing. Native display in Aperture (with the correct profile selected for on-screen proofing) is much closer to a Photoshop soft proof of the same image using the RelCol rendering intent. In this case the difference seems to lie in the implementation of black point compensation.
    It would sure be nice if we had full documentation of this stuff and didn't have to make suppositions about its functionality based on empirical data.
    If you know of a way to soft proof in Aperture (that permits my workflow instead of imposing one) that allows for simultaneous editing I would be much obliged.
    OK, I just did a little more poking around. Quick Preview appears to preview the image in the working color space, and what I was calling "native display mode" is using the selected soft-proof profile. But on my system it is not accurate with my printer profiles, not even close. As I said, this might be due to the lack of control over rendering intent and black point compensation. (I also just noted that the soft-proof display does not incorporate BPC; you can see this by creating a preview through the print dialog and comparing the result to the screen display.)
    Though soft proofing seems to be broken, at least for me, I have answered my own question: my working space is close enough to my print profiles (and obviously includes their full gamut) that I can select the working space profile for soft proofing. Aperture uses that profile accurately, since it also converts the RAW file to the same space on export, so this lets me edit while soft proofing in a valid color space with a consistent rendering intent and application of BPC.
    Flame off, over and out.

  • Rendering Intent for monitors?

    We hear a lot about rendering intent in relation to printing, but isn't the concept equally important in relation to monitor displays? And how does this operate in LR2? Or is Windows (in my case) responsible for making the rendering decisions?
    I have a calibrated and profiled monitor (I use a spyder) and am aware that LR uses (Linear) ProPhotoRGB as the working colour space. Obviously the range of colours recordable by my SLR and retained within the LR working space far exceeds the gamut of colours that can be displayed. Does anyone know how LR2 handles the out-of-gamut colours for display? I suspect that it employs a sort-of relative rendering intent and simply clips these colours - i.e. displays them as the 'nearest' available colour. Alternatively, it could use a perceptual intent, shifting the whole spectrum into the monitor colour space, thus altering all colours but keeping the relation between colours. This is important stuff, isn't it?
    The use of the 'relative' method would accurately preserve those colours that can be displayed so that printer output can be predicted with some certainty. But, it would also mean that I will never be aware from the display (not even imperfectly) of the full range of colours that are in the file. At worst, it could cause 'posterisation' as the out-of-gamut colours accumulate at the edges of the displayable colour space.
    The use of the 'perceptual' method, however, will mean that I can be aware of the range of colours but that these will be imperfectly displayed with implications for the accurate assessment of likely printer output.
    Anyone care to take up this discussion?
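    The relative-versus-perceptual distinction described above can be sketched with a toy one-dimensional "gamut". This is only an illustration of the two mapping strategies; real intents operate on three-dimensional colour via ICC profiles, and all the numbers here are made up:

```java
// Toy 1-D "gamut" demo: map source values in [0, 1.3] into a display
// range of [0, 1.0] in two ways. Purely illustrative numbers.
public class IntentDemo {
    // Relative-colorimetric style: in-range values pass through
    // unchanged; out-of-range values clip to the boundary.
    static double relative(double v) {
        return Math.min(v, 1.0);
    }

    // Perceptual style: compress the whole scale so the source
    // maximum lands on the display boundary. Every value shifts,
    // but ordering and separation between values are preserved.
    static double perceptual(double v, double srcMax) {
        return v / srcMax;
    }

    public static void main(String[] args) {
        double[] src = {0.5, 1.0, 1.1, 1.3};
        for (double v : src) {
            System.out.printf("%.2f -> rel %.2f, perc %.2f%n",
                    v, relative(v), perceptual(v, 1.3));
        }
        // Note how 1.1 and 1.3 both clip to 1.00 under "relative"
        // (the posterisation risk mentioned above), while
        // "perceptual" keeps them distinct at the cost of shifting
        // every in-gamut value as well.
    }
}
```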

    >I'm sure my monitor does not support the perceptual intent. Might it be possible in a future edition (LR3?) to have LR offer the perceptual intent irrespective of monitor capability? This is not an area that I know much about.
    It is not dependent on the monitor; it depends only on the calibration solution you use. The calibration software needs to support generating the perceptual intent.
    >At present this means, I think, that I may be losing the range of colour tones (levels of saturation) present in certain scenes that I photograph. They will be unviewable. Does this matter, given that most of these 'lost' tones will be unprintable too? Well, it might matter. For one thing, some printers can now (I believe) print colours outside the (mainly) sRGB space that my monitor is capable of. For another, I am losing the ability to choose which is more important to me: the full colour tonal range, or colour accuracy. The problem of posterisation has already been mentioned.
    Yes, that is a problem. Currently you can only really solve it by using a wider-gamut monitor (becoming more and more common) or by using the right calibrator (I believe the Eye-One calibrators can do v4 profiles). In Photoshop you can set the system up to desaturate everything on the monitor by a certain percentage to avoid these issues; you cannot do that in Lightroom. As for the second part, basically every printer nowadays prints outside of sRGB.
    Many even go beyond Adobe RGB. Short of getting a better monitor, there is not much you can do about it.

  • Memory Management

    This is probably a beginner's question, but it's something you don't read about when learning Java, because it's more about program design principles, I suppose...
    I am building a desktop engineering modeling program. It had been going fine until I tried to load in or create a really, really large model, at which point I ran out of memory.
    So I expanded Java's heap to 2 GB (using command-line arguments) to see how much memory the large models take up. My question is: when designing an application where it's impossible to guarantee the user will not run out of memory, how do you handle this? Any advice would be great...
    Thanks,
    Ken
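    One common pattern for this situation is to keep bulky model data behind soft references, so the JVM may reclaim it under memory pressure and the application reloads it on demand. A minimal sketch follows; ModelChunk and loadFromDisk are hypothetical names invented for illustration, not part of any real API:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Cache of large model pieces. The JVM is allowed to clear a
// SoftReference before throwing OutOfMemoryError, so rarely-used
// chunks get evicted automatically and reloaded when needed.
public class ChunkCache {
    private final Map<String, SoftReference<ModelChunk>> cache = new HashMap<>();

    public ModelChunk get(String id) {
        SoftReference<ModelChunk> ref = cache.get(id);
        ModelChunk chunk = (ref != null) ? ref.get() : null;
        if (chunk == null) {               // never loaded, or reclaimed by GC
            chunk = loadFromDisk(id);
            cache.put(id, new SoftReference<>(chunk));
        }
        return chunk;
    }

    // Stand-in for real deserialization of a model chunk from disk.
    private ModelChunk loadFromDisk(String id) {
        return new ModelChunk(id);
    }

    static class ModelChunk {
        final String id;
        ModelChunk(String id) { this.id = id; }
    }
}
```

    The trade-off is that a reclaimed chunk costs a disk reload the next time it is touched, so this works best when the model splits into pieces that are not all needed at once.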

    No, you're 100% correct. I'm not just trying to copy, but because I don't have much of an idea where to start, I thought it best to look at what big software companies do.
    Yes, your method sounds good, and I will need to think of something like that, but the problem is getting the program to know what's important. For example, the whole geometry of the area is made up of points, either surveyed or designed by an engineer. Lines then connect the points to create edges. A road, for example, would have a series of points along its edges connected by lines to show what it is. I may post a screenshot to help people understand what this is all about.
    I know it's not 100% efficient, because at a really zoomed-out level the program will draw many of these points in the same place, since it can't display the information any more accurately than a pixel.
    From looking at this, I think it will take some major rethinking to achieve a good solution.
    Thanks for the suggestions. I already plan on implementing a quadtree for other reasons, but that's a different battle...
    I'll take a look at all the things you sent me and see if it sparks any ideas.
    Thanks,
    Ken
    EDIT: Sorry kajbj, it just sounded like you were describing something like Google Maps, that's all. I use a lot of software, but it doesn't mean I know how all the internals work.
    Yes, 3D games I can understand, for sure, especially when the player has limited visibility. What does worry me is the sound of "writing/reading data to disc" for such common tasks as zooming and panning; it makes me paranoid that it will cap the speed or reliability, but I am not an expert, so I may be wrong.
    Edited by: MarksmanKen on 24-Jun-2010 14:14
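    The "many points in the same pixel" observation above already suggests one memory- and draw-friendly step: at a given zoom level, keep only one point per occupied screen pixel, since extra points in the same pixel cannot be displayed anyway. A sketch, with coordinates and scale purely illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Pixel-grid decimation: bucket points by the screen pixel they land
// in at the current zoom and keep one representative per bucket.
public class Decimator {
    // scale = pixels per model unit. Returns one point per occupied
    // pixel cell; each point is {x, y} in model units.
    static Map<Long, double[]> decimate(double[][] points, double scale) {
        Map<Long, double[]> cells = new LinkedHashMap<>();
        for (double[] p : points) {
            long px = (long) Math.floor(p[0] * scale);
            long py = (long) Math.floor(p[1] * scale);
            long key = (px << 32) ^ (py & 0xffffffffL); // pack cell coords
            cells.putIfAbsent(key, p);     // first point in the cell wins
        }
        return cells;
    }
}
```

    At a zoomed-out scale many points collapse into one cell; zoomed in, every point survives. A quadtree gives the same effect hierarchically, so the decimation falls out of the structure you already plan to build.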

  • Moving images in CS5?

    When I left-click and move an image in CS5 to reposition it, the image becomes sharper; when I stop, it returns to its original state. Is this normal? If so, which view reflects the actual file?

    To add to what Mylenium has said, OpenGL displays are resampled using more accurate math, as a GPU has more power for doing such things quickly.
    But this doesn't only depend on the power of your hardware, but also on the quality of the display driver software, which is what implements the OpenGL standard for your hardware.  Thus it's almost always a good idea to visit the web page of the maker of your video card and download and install the latest driver software for your particular hardware.  OpenGL implementations in general are steadily improving to this day.
    What video card do you have?  If it is very old (e.g., more than 2 or 3 years) it may be that you want to consider replacing it with a fast modern card.  Such cards that will run Photoshop extremely well can be found for under $100 and will generally make all your computing tasks snappier.
    -Noel

  • Basing a Datablock on Program Unit

    Hi
    Beginner and confused
    I have a procedure with an (IN OUT) ref cursor returning a RECORD. There is a button on the same form which, when pressed, generates the WHERE clause of the SQL statement in the procedure.
    I need to display this query in a data block. Could someone give me a step-by-step process for doing this? Thanks

    Hi Bernd,
    Thanks for the reply. I have used the count query from the example itself, and it is working; when I display the count, it is accurate.
    I tried the package in SQL*Plus and it is returning values.
    I guess the problem is somehow in the parameters set in the Data Block Wizard. I have double-checked the parameters, and they are exactly as given in the example.
    I also tried creating a new canvas and built up the data block using the same steps (as given in the example), but I still did not get any output.
    Deepak

  • MyRIO Analog IO

    I'm new to myRIO and LabVIEW. My questions are as follows:
    1. I tried a simple program to acquire an analog sine wave from a function generator and display it on the PC (in LabVIEW). The sine wave displayed in LabVIEW was pretty accurate at approximately 50 Hz. When I increased the frequency (sweeping the function generator's frequency slowly), the displayed signal became badly distorted. At some frequencies a pure sine wave appeared again, but the frequency of the waveform displayed on the PC differed from the one set on the function generator. How do I configure the sampling and acquire the analog signal accurately?
    2. I tried to produce a sine wave at the analog output port using a signal generated by the 'Simulate Signal' VI. I tried the default frequency, which is 10.1 Hz, but the waveform observed on the oscilloscope was around 1 Hz. If I set the frequency to 10 Hz, a DC line appeared on the oscilloscope.
    Can anyone please point out the problems and discuss how to solve them? I should be grateful if anyone could share some example code.
    Many thanks.
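    For what it's worth, the symptoms in question 1 (clean at 50 Hz, distorted at higher frequencies, clean again but at the wrong frequency) are the classic signature of aliasing from a sampling rate that is too low. The folding arithmetic can be sketched as below; fs = 100 samples/s is purely an assumed rate for illustration, and the real value depends on how the myRIO acquisition loop and channel are configured:

```java
// Sketch of frequency folding (aliasing): a sampled sine whose input
// frequency exceeds half the sampling rate reappears at an alias
// frequency instead of its true frequency.
public class AliasDemo {
    // Frequency actually observed after sampling f (Hz) at rate fs (S/s).
    static double aliased(double f, double fs) {
        double folded = f % fs;                       // fold into [0, fs)
        return (folded <= fs / 2.0) ? folded : fs - folded;
    }

    public static void main(String[] args) {
        double fs = 100.0;                            // assumed sample rate
        for (double f : new double[]{50, 90, 100, 150}) {
            System.out.printf("input %.0f Hz -> displayed %.0f Hz%n",
                    f, aliased(f, fs));
        }
        // With fs = 100 S/s, a 90 Hz input shows up as 10 Hz, and a
        // 100 Hz input aliases all the way to 0 Hz: a flat DC line,
        // much like the oscilloscope symptom in question 2.
    }
}
```

    The fix for acquisition is to sample at well above twice the highest input frequency (and ideally low-pass filter first); for question 2, the effective output rate of the 'Simulate Signal' loop likewise has to be high enough relative to the requested sine frequency, which is worth checking in the VI's timing configuration.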

    trieu wrote:
    Hello everybody!
    I'm new to myRIO and LabVIEW. Can you help me? I'm looking for myRIO manuals/documents on ''Digital Image Processing with LabVIEW''.
    thank you very much!
    This question seems unrelated to myRIO analog IO, so why did you append it to a seemingly unrelated thread?
    Please start a new thread and explain what your question is. It is not clear what you want.
    LabVIEW Champion. Do more with less code and in less time.
