Flatten transparency without lossy JPEG compression

Acrobat applies lossy JPEG compression (quality: medium) to some images after transparent objects are flattened. Even if the images were ZIP-compressed before, Acrobat switches them to lossy JPEG compression, and the result is a loss of quality. There are also quality differences within a single image, because some images get split into parts, some with ZIP compression and some with lossy JPEG compression.
Is there a workaround to use Acrobat's transparency flattening without JPEG compression? The problem appears in Acrobat 7, 8 and 9.

Hi Cropas!
As you can see in my PDF export settings quoted above, that option is set to off: <Create Tagged PDF: Off>.
Can InDesign not cope with this option for a PDF embedded in an INDD document?

Similar Messages

  • Is the JPEG compression option on TIF files lossy or lossless?


    I accidentally used JPEG compression on some important TIFF images. I usually use no
    compression but didn't realise the option had been chosen. I can't easily re-scan. I have
    read about lossless JPEG, and since TIFF is considered a lossless format, I had hoped this
    might be the case.

  • Is there a trick to converting JPG from RGB to CMYK without the lossy re-compression?

    So I've got around 750 images in multiple categories and folders. Each image has 6 variants: a 16-bit TIFF from the over-eager-beaver photographer, a 16-bit TIFF with extracted background (done by student workers, who do not use English, as we are not in the US), then a standardized 3000x2000 (or other suitable aspect ratio) PNG with corrected edges for web and light print, an 1800x1200 JPG for web, and a few smaller PNG standards, also for web.
    The 16-bit TIFFs with extracted backgrounds are the root files, and a series of actions builds the PNGs. The standardized PNGs are then pumped through actions and Image Processor to create the output files.
    So I just discovered that a seemingly random number of these are in CMYK. That hasn't been a problem up until now, because our website and our printed materials have no problem with CMYK and RGB files.
    But now we are starting to hand out these images to our customers, who are only interested in the 1800x1200 JPG and maybe the 900x600 PNG files, which are most easily pulled directly from our website rather than sending them the entire 180 GB graphics directory.
    And some of them are telling us that the pictures are being rejected. Most notably, eBay does not allow CMYK files.
    So, damage control: rebuild our entire library of images from the 16-bit TIFFs,
    OR
    rebuild only a couple of different sizes and upload from there, replacing the pictures on the next major update (over the next year).
    I am *NOT* concerned about preserving color since 99% of our products are black and shades of grey, with only a hint of color.
    I am concerned about the degradation of changing mode on 60-70% compressed images, then resaving again with the lossy JPG compression.
    I'm about to start ripping into things with my actions and replacing around 1500 files online, so I'd like to be sure that I'm doing things in the most sensible way.
    If there's a tool that can do this conversion without another pass of compression, that would be ideal.

    My understanding of JPEG is only middling. I thought I understood that it uses anchor pixels and either a translation table of some sort or difference mapping, using 8 bits per piece of information.
    If that were the case, surely changing the translation from CMYK to RGB would be fairly simple.
    In this case, the usage is eBay, and they only accept JPG and PNG (and maybe BMP and GIF, I didn't look that closely), but require RGB. I was actually quite surprised to find that JPG allows CMYK since, as you say, anyone dealing with CMYK is going to be dealing with commercial printing, and few people who deal with commercial printing would play around with JPG.
    I always stick to TIFF or PSD for workflow, but JPG is popular for a reason - when it comes to the web, JPG is the only format that can deliver manageable file sizes for full-screen or "large" images. Our top-level banner photo is 2590x692 and needs to be under 400 KB for sane download speeds. PNG couldn't touch that. Even at the aforementioned 1800x1200, PNG is nearly 2 MB, while with JPG I can maintain very decent quality in a 500 KB file that works well for 'zoom in' type usage.
    So there's no way around JPG. It's just annoying that the first person to touch a random selection of the pics was primarily an Illustrator user and saved *some* of the pics in CMYK mode.
    It's like that old story about the farmer who didn't want anyone to steal his watermelons, so he cleverly posted a sign "None of these watermelons are poisoned", only to find a note the next day saying "Now, One of these watermelons is...".
    Far more work to fix 'some' of the images compared to just doing it right the first time.
    But then again, for workers like that, if you can't trust them with an easy job, you could hardly trust them with more complicated jobs...
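    A minimal ExtendScript sketch of the CMYK-to-RGB batch pass asked about above, assuming Photoshop and a flat folder of JPGs (the folder paths and quality value are placeholders, not anything from the original post); changing mode and re-saving is still one more lossy encode, but at quality 10+ the added loss is usually negligible:

        // Batch-convert CMYK JPEGs to RGB -- a sketch, not production code.
        var src = new Folder("~/Desktop/cmyk_in");   // hypothetical input folder
        var dst = new Folder("~/Desktop/rgb_out");   // hypothetical output folder
        if (!dst.exists) dst.create();
        var files = src.getFiles("*.jpg");
        for (var i = 0; i < files.length; i++) {
            var doc = app.open(files[i]);
            if (doc.mode == DocumentMode.CMYK) {
                doc.changeMode(ChangeMode.RGB);      // the actual mode conversion
            }
            var opts = new JPEGSaveOptions();
            opts.quality = 10;                       // high quality to limit added artifacts
            doc.saveAs(new File(dst + "/" + doc.name), opts, true, Extension.LOWERCASE);
            doc.close(SaveOptions.DONOTSAVECHANGES);
        }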

  • Creating a PDF with transparency without exporting to PDF (Mac)

    Hi
    I need to create a PDF from InDesign that has unflattened (native) transparency without using the Export PDF command (because of the flaw seen here: http://forums.adobe.com/thread/537751?tstart=0 )
    I have tried exporting as an EPS, but that flattens transparency, and printing to PostScript does as well.
    Any other ideas, settings or applications/utilities to try?
    Thanks
    -Andrew

    Thanks, everyone; that was what I thought.
    Peter: If we knew when it was happening then we could fix it; unfortunately there is no indicator of when it will happen. We get jobs from all over North America from all kinds of designers, so it could happen on any job.
    It is hard to say how many jobs it happens on, as in most cases it wouldn't be noticed by the client because the amount is so small, and in some cases it happens in the bleed that gets trimmed.
    So far we have noticed the issue only when it happens in a critical spot, and we haven't had any spoils yet, but it is just a matter of time.
    We are going into annual reports soon, as well as professional sports tickets, both of which are for critical clients.
    -Andrew

  • Is there any way to reduce the JPEG compression ap...

    I'm wondering if there is any way to reduce the fierce amount of JPEG compression applied to photos taken with the 6220 classic? I'm 99.99% sure that there isn't, but I thought I'd ask anyway.
    I'm a professional graphic designer with 15 years experience, and as such understand the technicalities of digital imaging better than most.
    What the general public fails to understand is that ever higher megapixelage doesn't automatically equate to ever higher quality images.
    The 6220 classic has a 5MP camera, which is one of the reasons I bought it, along with the fact that it has a Xenon flash and a proper lens cover. Its imaging quality also generally gets very positive reviews online.
    However, the 6220 classic takes far poorer photos than my 5 year old Olympus digital camera which only shoots 4MP. Why is this? Many reasons. The Olympus has a much larger imaging chip, onto which the image is recorded (physical size as opposed to pixel dimensions), a far superior lens (physical size & quality of materials), optical (not digital) zoom, and the ability to set various levels of JPEG compression, from fierce (high compression, small files, low quality images) to none at all (no compression, large files, high quality TIFF-encoded images).
    When I first used the camera on the 6220 classic (I've never owned a camera phone before) I was appalled at the minuscule file sizes. A 2592 x 1944 pixel image squashed into a few hundred kilobytes makes a mockery of having decent pixel dimensions in the first place, but then the average consumer neither cares about nor would notice the difference. They're not going to be examining & working on their images in Photoshop on a 30" Apple Cinema Display.
    Is fierce JPEG compression (and an inability to alter it) the norm with camera phones, or do other camera phones (perhaps from other manufacturers) allow greater latitude in how images are compressed?
    Thanks.

    Believe me, I was very aware that this was a phone with a camera attached, not a dedicated camera, before I bought it. I went into this with my eyes open. I knew the lens, imaging chip, zoom, etc, would all be grossly inferior, but given all of this, surely the phone manufacturers should help to compensate for this by adding a few lines of code to the software to reduce (or ideally remove) the JPEG compression, or at least give the user the option to do so if they want? The fierce compression just makes images obtained with compromised hardware even worse than they would have been otherwise.
    It adds insult to injury and is totally unnecessary, especially given that the memory card in the 6220 classic is 1 GB but the one in my Olympus is only 128 MB! It's not as if lack of storage space is an issue! On the Olympus I can only take about 8 pictures without compression (although I could obviously buy a much larger memory card). On the 6220 classic, given the ridiculous amount of compression, there's room for over 1200 photos! It would be far better to store 70 uncompressed images than 1200 compressed ones. Does anyone seriously need to take over a thousand photos on a camera phone without having access to a computer to offload them? I doubt it.
    Also, compressing the images requires processing power, which equals time. If they were saved uncompressed, the recovery time between shots would be reduced, although obviously writing the larger files to memory may offset this somewhat.
    Just to give people an idea, an uncompressed 8-bit RGB TIFF with pixel dimensions of 2592 x 1944 takes up approximately 14.5 MB of space (2592 x 1944 pixels x 3 bytes per pixel = 15,116,544 bytes; the exact size varies slightly depending on the header information stored with the file). The 3 photos I've taken so far with the 6220 classic (and that I've decided to actually keep) have file sizes of 623, 676 & 818 KB respectively, an average of 706 KB. That is less than 5% of 14.5 MB, which means that, on average, the camera records 5,038,848 pixels and then throws over 95% of the image data away.
    I'm deeply unimpressed.

  • N8: Missing JPEG compression settings and gallery ...

    1) Where's the use of a fine 12 MP camera if a harsh JPEG compression algorithm destroys almost all photos taken?
    PLEASE introduce a setting for adjusting the compression strength.
    I know there are solutions available already - but these only work by flashing the phone.
    2) After updating some social network software, the button for opening the gallery (right after taking a photo) vanished - and now shows an icon for uploading the photo instead of opening the gallery. ARGH! Even uninstalling that update did not bring the gallery button back. I now curse myself (and Nokia) for installing that senseless update.
    That gallery button was such a nice workaround for checking the quality of a photo taken:
    The instant photo display after shooting does not allow zooming - so it's of no use, because you cannot check the quality; without zooming in, you cannot see if a picture was out of focus or blurred by hand shake.
    So PLEASE: restore the gallery button OR let us zoom a photo right after shooting.
    It's of no use instantly uploading a picture to social networks if you can't check whether the quality is sufficient.

    Hape: There are always people who like everything. There are even people who like getting slapped in the face. So that shouldn't be an excuse for every possible nonsense.
    The problem: that new button is just useless, because you wouldn't upload a picture before knowing whether it really is of the quality needed. On the N8's small screen, even blurred or out-of-focus pictures look OK; you'll only see the differences after zooming in.
    But you CAN'T zoom in using the quick view feature right after taking the photo - you need to open the picture in the gallery.
    Of course you may open the gallery via the menu - but you need to scroll down to find the right entry. That takes unnecessary time and is a source of error.
    A QUICK review should be a QUICK review - you don't want to miss the next photo opportunity fiddling with menu entries, just because Nokia broke a working system by introducing a button that is of no use if you cannot check the photo's quality before using it.
    And again: why doesn't uninstalling restore the previous state? As I said, I uninstalled that senseless update - but that ugly button is still there.
    So again:
    PLEASE, Nokia: Remove that senseless button OR let us zoom photos in quick view.

  • How to automate the flatten transparency in illustrator?

    How do I automate the "Flatten Transparency" option in Illustrator via JavaScript? Please share.
    Thanks

    I'm looking for the same thing.
    app.executeMenuCommand('Flatten Transparency'); works in CC, but all it does is bring up the dialog, and you still have to click OK for the script to continue.
    Before anyone asks why you would want to do this: in print there are instances where you have to open and edit (not the text) large numbers of PDFs to change colours, outline fonts, etc., otherwise there's trouble on print devices.
    My script, for instance, looks for 100K blacks and replaces them with rich black, and looks for empty text frames, overprint, bleed settings, clipping paths, etc.
    The only way to successfully edit a PDF without the fonts is to place it, flatten transparency with fonts outlined, and then work on it.
    So back to the point. After the window pops up, is there a way to simulate a keystroke like Enter? Or maybe app.executeMenuCommand('Flatten Transparency'); takes arguments, like:
    app.executeMenuCommand('Flatten Transparency',preset_name);...?
    Anyone? Adobe SDK team? :-)
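    For the 100K-to-rich-black pass mentioned above, here is a minimal Illustrator ExtendScript sketch (the 60/40/40/100 mix is an assumed house value, and this does nothing about the flattening dialog problem):

        // Replace plain 100K fills with a rich black -- a sketch.
        var doc = app.activeDocument;
        var rich = new CMYKColor();
        rich.cyan = 60; rich.magenta = 40; rich.yellow = 40; rich.black = 100;
        for (var i = 0; i < doc.pathItems.length; i++) {
            var item = doc.pathItems[i];
            if (item.filled && item.fillColor.typename == "CMYKColor") {
                var c = item.fillColor;
                if (c.black == 100 && c.cyan == 0 && c.magenta == 0 && c.yellow == 0) {
                    item.fillColor = rich;   // swap 100K for the rich-black mix
                }
            }
        }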

  • After Preflighting a PDF, using Convert to CMYK, Flatten Transparency and Prepress Profile Convert to CMYK only the resultant PDF has a grubby halo along the edge of some white type sitting on an image. The type is part of the image.

    I am using a 27" iMac 3.2 GHz Intel Core 5, 8 GB Memory, running Yosemite 10.10.1. 
    The version of Acrobat that I am using is: Acrobat XI Version 11.0.10
    After Preflighting a PDF using Convert to CMYK, Flatten Transparency (high resolution) and the Prepress Profile "Convert to CMYK only", the resultant PDF has a grubby halo along the edge of some white type sitting on an image. The type is part of the image, which is 300 dpi.
    It is as if the image isn't really 300 dpi but has been artificially boosted to that resolution to avoid being flagged by Preflight, yet when Preflighting the file it knows the original resolution.
    I have screen grabs which illustrate the problem perfectly but do not know how to post them, if indeed they can be.
    Any help or comments gratefully received.

    Without the files and possibly screen prints, it is virtually impossible to assist you.
              - Dov

  • Large .jpeg compression/artifacts?

    I'm struggling with Muse's jpg compression, which I can't seem to bypass. I'm uploading very large images (2560px wide) for full-screen slideshows which have been optimized for the web in Photoshop. These images have large light backgrounds and hence Muse's jpg compression produces noticeably artifacted areas. I want to make sure it is indeed Muse that's the problem, and not Chrome. I'm previewing these locally. Thanks.

    First, I have to say the delta between the screenshots is extremely small. I've had multiple people drop into my office for other reasons and none could see differences without me pointing them out using a pixel magnifying tool.
    That said, here are some thoughts regarding this specific case.
    Given this appears to be a photograph of black and white line art, it's a very problematic case for JPEG compression. To get a high quality result for this specific use case you'd want to start with a Camera RAW image from the camera (to avoid the camera introducing JPEG compression artifacts) and then go directly to a lossless image format such as PNG or GIF, rather than JPEG. For this specific subject matter going from Camera RAW directly to PNG/GIF would provide the best result, but at the cost of page load speed since the PNG/GIF image will likely be several times larger than a JPEG.
    I expect what's occurred in this case is that the original image was a JPEG from a camera that was resized smaller and then re-encoded as JPEG.
    The encoding as JPEG in the camera would introduce some artifacts, but due to the very high-resolution image the artifacts would be very small. Then the image was resized smaller. Resizing alters the image by using one of any number of algorithms to combine/average a set of pixels into a single pixel. The most common high-quality approach is bicubic resampling. When resizing smaller, this has the side effect of softening any hard edges within an image, resulting in a final image that's sometimes considered ever so slightly blurry or "softer" than the original. I see this in the format.com example, in that it looks ever so slightly soft or blurry compared to the PS and Muse examples. The algorithm available in PS and used by Muse when resizing smaller is bicubic sharper. This approach combines bicubic resampling with a very small amount of sharpening to counteract the blurring/softening effect of the resizing. For the specific subject matter in your image and the JPEG artifacts that were likely introduced before the image was resized, the sharpening makes the edges of the JPEG artifacts more noticeable (along with making all the edges in the image crisper).
    Without the URL for the webpage and the original image file (and probably the .muse file), I can only speculate on exactly what's being generated and why, but hopefully the above information is helpful.
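    If you want to take the resizing step out of Muse's hands, a minimal Photoshop ExtendScript sketch of resize-then-save (the target width, output path and quality are placeholders); ResampleMethod.BICUBIC avoids the extra edge crisping described above, while BICUBICSHARPER reproduces it:

        // Resize smaller with an explicit resample method, then save -- a sketch.
        app.preferences.rulerUnits = Units.PIXELS;
        var doc = app.activeDocument;
        doc.resizeImage(UnitValue(2560, "px"), undefined, undefined,
                        ResampleMethod.BICUBIC);   // or ResampleMethod.BICUBICSHARPER
        var opts = new JPEGSaveOptions();
        opts.quality = 11;                         // near-max to protect flat light areas
        doc.saveAs(new File("~/Desktop/slide.jpg"), opts, true, Extension.LOWERCASE);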

  • I can't save tif with jpeg compression method

    Hello, I'm having trouble saving an image in TIFF format with the JPEG compression method.
    With the same image, this option is sometimes unavailable, seemingly at random; I don't know why.
    After some experimenting I found that if I open an image previously saved as TIFF with JPEG compression, the option turns on for the next save.
    I've got both CS5 and CS4, and the odd thing is this occurs in both.
    Does anyone have any tricks?
    Thanks.
    Luca

    I know very well the difference between a background layer and a transparent layer...
    I'm writing about a bug, or interference from something, that appears when I try to save an image in TIFF format with JPEG compression.
    I'm asking if someone has had a similar problem and knows a solution and/or trick to solve this!!!
    I hope I've made myself clear.

  • "convert colors" causes jpeg compression?

    I recently had to re-install Acrobat, and since doing so, whenever I run my preflight profile (which is set to convert spot colors to CMYK), it also apparently increases the JPEG compression. Everything seems to be gaining bad JPEG artifacts after the color conversion (even when the colors being converted have nothing to do with the images).
    It may be unrelated, but it also seems to be creating all sorts of ICC profile non-cmyk colors in the process. I.e., before "convert to cmyk for digital printing" I run a preflight that simply checks for non-cmyk colors. This profile warns me that there is a Pantone color in the ad. I check the separation preview and find only one or two things with Pantone colors. I then run the convert fix. The Pantone colors go away, but now there are sometimes dozens of items that are showing up as ICC profile colors, and this seems unfixable. I can't create a profile to convert them.
    What's going on? Are there settings somewhere that can select the degree of (or lack of) jpeg compression? Why is it compressing the file at all? I have no such setting selected in the preflight profile. I tried creating a profile that does absolutely nothing at all but convert to cmyk, and it's still causing these problems.
    This is using Acrobat 8. It's the same version, same disks, we used before, but these problems are new. I may have a setting wrong somewhere.

    You are not making sense, and your methodology is fundamentally flawed.
    Hitting Command+S without editing the file is NOT "saving".  It's not doing anything at all.  The program just idles.
    You can verify this by observing that the file's modification date does not change.
    Even your limited but flawed methodology will show the degradation if you change even a single pixel before hitting Command+S.  Then you will be degrading the image.  Just make a one-pixel change, for instance with the pencil tool, then save.  That is the same as doing a Save As.
    Note that in your original query you were indeed changing a file by converting it to a different color space.  THAT is a change.
    Independently from the above, your methodology of comparing two layers blended in Difference mode has the inherent limitation of the monitor's performance in displaying the shadows. Your monitor, no matter how high-end, will NOT show you a difference between a 0,0,0 pixel (R,G,B) and 0,0,1 or 1,0,0 or 1,0,1, for instance. Same goes for 1,0,2 etc., until you reach the lower threshold of your particular monitor.
    There are two preferred methods of comparing two layers to see if they're identical.
    Comparing allegedly identical images in Photoshop
    The first one, championed by the late, lamented author and guru Bruce Fraser, is as follows [direct quote by copy and paste]:
    A better way of comparing images with identical pixel dimensions is to use [Image menu >] Apply Image… > Subtract with an Offset of 128.
    Difference only shows pixels that are lighter in the source than in the target (or maybe it's the other way around—I forget) where Subtract with Offset 128 shows differences in both directions.
    Pixels that are identical in both images come in as RGB 128 gray, those that are different come in at a value that exactly reflects how different they are.
    It also makes it much easier to spot subtle differences…
    === ===
    The second one was suggested by someone in the Color Management and Photoshop Windows forums, and follows:
    (NOTE: only the methodology is of interest and pertinent, not the questionable context in which it was brought up and used.)
    * 1) Open the two images to be compared in Photoshop
    * 2) Move one image as a layer over the other one
    * 3) select "Difference" as blending mode in the layers palette
    * 4) now the whole image should appear seemingly black on the monitor
    [So far this is the traditional, "time honored" method.]
    * 5) select the magic wand tool with these settings: Tolerance: 0/ Anti-alias: no/ Contiguous: no/ Sample All Layers: yes
    * 6) click somewhere into the formerly gray area
    [This refers to an image of a Color-Checker type of card that had a wide gray border around it. The test, therefore, requires a pure gray area in the image, something highly unlikely to change, in order for the magic wand to select all the pure-black pixels (0,0,0). Such a border can easily be created around an image by increasing the canvas size and filling the newly created space with pure gray (128,128,128).]
        Explanation: you just selected all completely black pixels (0,0,0) i.e. all pixels that are identical in both layers.
    * 7) you should see "marching ants" forming rectangular patterns
    * 8) invert the selection (Shift Command I)
       Explanation: the selection now covers all the other pixels, i.e. all pixels which are different between both layers.
    * 9) create a new empty layer and select it in the layers palette
    * 10) set the foreground color to white
    * 11) fill the selection with white (Alt+Backspace on Windows, Option+Delete on Mac)
    * 12) set the blending modes of all layers back to normal
        Explanation: you now see all identical pixels in their respective color and all different pixels in white.
    This method is a lot more sensitive than the traditional one which stops at step #4 above.
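    A minimal Photoshop ExtendScript sketch that automates the Difference check and reads the result from the channel histograms instead of the monitor (sidestepping the shadow-threshold limitation above); it assumes exactly two open, flattened documents with identical pixel dimensions:

        // Stack doc B over doc A in Difference mode, flatten, and verify
        // that every channel's histogram puts all pixels at level 0.
        app.preferences.rulerUnits = Units.PIXELS;
        var a = app.documents[0];
        var b = app.documents[1];
        app.activeDocument = b;
        var diff = b.activeLayer.duplicate(a, ElementPlacement.PLACEATBEGINNING);
        app.activeDocument = a;
        diff.blendMode = BlendMode.DIFFERENCE;
        a.flatten();
        var total = a.width.value * a.height.value;
        var identical = true;
        for (var i = 0; i < a.channels.length; i++) {
            if (a.channels[i].histogram[0] != total) identical = false;
        }
        alert(identical ? "Images are identical." : "Images differ.");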
    Finally:
    jfraze wrote:
    Wow, the level of hostility is amazing on these adobe forums…
    Only because people like you come in here itching for a fight, rather than to seek help. It's just the way you choose to react—and to interact with others.
    Wo Tai Lao Le
    我太老了

  • Illustrator CC 2014: Save As PDF, Flatten Transparency Issues

    Hi All,
    We created a PDF presentation in Illustrator that's about 40 pages and had some display issues with previews in Box. Upon inspecting the bug, we realized that Box's preview function does not display transparency properly (or at least seems to have issues displaying transparency if it hasn't been flattened). We tried flattening the transparency of our Illustrator file, but when we do this the PDF has faint box outlines around objects.
    Can anyone recommend a solution to this issue?
    Thanks!
    P.S.  I'm new to figuring out which PDF Presets to use to optimize my presentation for desktop/iphone/ipad presentation.  Right now we are using Press Quality with Preserve Illustrator Editing Capabilities unchecked and Optimized for Fast Web View. 
    P.P.S. We tried Acrobat 5-8 compatibility for Box; none worked. Acrobat 4 with flattened transparency displayed properly, but had the problematic outlines.

    Anyone have any input?
    Thanks!

  • LR JPEG compression vs. Photoshop JPEG compression

    I haven't found any documentation of the meaning of the 0-100% JPEG compression value in LR's (v1 or v2) Export File window, and the default value of 100% is overkill and results in huge files. I am at least familiar with Photoshop's 0-12 JPEG quality scale with its associated quality names: Low, Medium, High, and Maximum.
    Via trial and error, I have found that LR has the same 13 quality levels as Photoshop and gives the same results, they are just mapped on a 0 - 100% scale. This also means that changing a few percent may not make any change at all, since a quality change only happens about every 7 percent.
    For those who might find it useful, here is a table of the mappings:
    The first column is the Photoshop compression number and name; the second column in the range of Lightroom percentages that will give the same results.
    0-Low 0-7%
    1-Low 8-15%
    2-Low 16-23%
    3-Low 24-30%
    4-Low 31-38%
    5-Med 39-46%
    6-Med 47-53%
    7-Med 54-61%
    8-High 62-69%
    9-High 70-76%
    10-Max 77-84%
    11-Max 85-91%
    12-Max 92-100%
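    In code form the table collapses to a simple lookup (plain JavaScript; the breakpoints are the empirical ones above, so treat them as approximate):

        // Map a Lightroom 0-100 quality percentage to the equivalent
        // Photoshop 0-12 level, using the breakpoints from the table.
        function lrToPsQuality(pct) {
            var tops = [7, 15, 23, 30, 38, 46, 53, 61, 69, 76, 84, 91, 100];
            for (var level = 0; level < tops.length; level++) {
                if (pct <= tops[level]) return level;
            }
            return 12;
        }
        // lrToPsQuality(60) -> 7 (Medium); lrToPsQuality(85) -> 11 (Maximum)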

    I looked at this again using PS's 'Baseline Standard' JPEG format option instead of 'Baseline Optimized'. LR does not provide the format options Standard, Optimized, and Progressive, but appears to use 'Baseline Standard'. The equivalent-compression-level LR file size is within 16 KB of PS's file size, which is probably due to slight differences in the file metadata.
    This pretty much confirms that LR and PS use the same 'Baseline Standard' JPEG compression algorithm. The quality change at PS level 7 is also seen at LR's 54-61 JPEG Quality setting. Jeffrey Friedl mentions this in his analysis of LR's JPEG Quality settings, with a reply from Brian Tao:
    http://regex.info/blog/lightroom-goodies/jpeg-quality
    Jeffrey Friedl's comment:
    One thing I find interesting (but don't understand) is that in the first example, the difference in file size between the 47〜53 quality and 54〜61 quality is considerable (49k to 66k bytes), while in the second example, the same two levels of quality produce essentially the same file size. There seems to be some kind of switch in compression algorithm once Lightroom is at a quality setting of 54 or above that puts the emphasis on encoding the easily-discernible smooth gradients of the sunset example, and if they are lacking in the image, as with the reed-window-shade example, the attempt at extra quality fails, and the file size does not increase. That's my guess, but it's just a guess.
    Brian Tao's Reply:
    This is due to the downsampling (basically, a reduction in resolution) of one or more of the image channels before passing it to the actual compression routine.  Human vision is much more sensitive to changes in luminance (brightness) than chrominance (colour).  JPEG takes advantage of this by reducing the amount of colour information stored in the image in order to achieve higher compression ratios.  Because it is colour and not brightness that is sacrificed, this is called “chroma subsampling”.  Look up that term in Wikipedia for a far better and more detailed description than I can provide here.
    In a nutshell, Adobe products will use either a 4:4:4 subsampling (which is no subsampling at all, and thus full resolution) or 4:2:0 subsampling (both red and blue channels are reduced to one-quarter resolution before compression).  There is no switch to specify the amount of subsampling to use.  In Photoshop, the change from 4:2:0 to 4:4:4 happens between quality 6 and 7.  In Photoshop’s Save For Web, it happens between quality 50 and 51.  In Lightroom, you already noticed that something unexpected happens between 47-53 quality and 54-61 quality.  Guess what levels those correspond to in Photoshop?  6 and 7… exactly as expected.
    You can very easily demonstrate this by creating a worst-case scenario of JPEG chroma subsampling.  Create a small image in Photoshop with a pure blue (RGB = 0,0,255) background.  Now type in some pure red text (RGB = 255,0,0).  For maximum effect, turn off anti-aliasing, so each pixel is either full on red or full on blue. Zoom in to 500% or so for a clear view of the pixels.  Now save the image as a JPEG.  With the JPEG quality dialog visible, you will see a real-time preview of the effects of JPEG compression.  Start at 12, and work your way down to 0, one step at a time.  Watch what happens when you go from 7 to 6.  You can do the same with Save For Web and with Lightroom to confirm where they switch from 4:4:4 to 4:2:0.
    The file size discrepancy is more noticeable in the sunset shot because most of the information (relatively speaking) is needed to encode the gradual change in chrominance values.  There is virtually no luminance detail to worry about, except around the silhouette of the bird.  But in the photo of the reed window shades, the fine detail and texture and lack of colour result in practically no difference going from 4:4:4 and 4:2:0.
    Because of this hidden (and inaccessible) switch, I have been recommending that, to be safe, one should never go below quality 7 in Photoshop, or 51 in Save For Web. In Lightroom, this corresponds to quality 54.
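    The red-on-blue test described above is easy to script; a minimal Photoshop ExtendScript sketch (the canvas size and output path are placeholders, and a hard-edged red square stands in for the non-anti-aliased text) that builds the worst case and saves it at every quality level, so the jump between 6 and 7 is easy to see:

        // Worst case for chroma subsampling: pure red on a pure blue field.
        app.preferences.rulerUnits = Units.PIXELS;
        var doc = app.documents.add(256, 256, 72, "subsample-test");
        var blue = new SolidColor();
        blue.rgb.red = 0; blue.rgb.green = 0; blue.rgb.blue = 255;
        doc.selection.selectAll();
        doc.selection.fill(blue);
        var red = new SolidColor();
        red.rgb.red = 255; red.rgb.green = 0; red.rgb.blue = 0;
        doc.selection.select([[64, 64], [192, 64], [192, 192], [64, 192]]);
        doc.selection.fill(red);               // hard red/blue edges, no anti-aliasing
        doc.selection.deselect();
        for (var q = 0; q <= 12; q++) {
            var opts = new JPEGSaveOptions();
            opts.quality = q;                  // watch the edges change between 6 and 7
            doc.saveAs(new File("~/Desktop/subsample_q" + q + ".jpg"),
                       opts, true, Extension.LOWERCASE);
        }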
    Hope this helps.

  • What jpeg compression does image capture use to save an image

    I am about to scan colour positive slides from many years ago, using Image Capture and a scanner (Epson 2450 Photo). Can anyone tell me what JPEG compression is used when the scan is saved to disk? Further, is there any way to alter the quality of such compression, from, say, medium to highest?

    You might be able to find it when you export it from Image Capture.

  • Some of Photo (JPEG)-compressed images by Flash Pro are not shown in AIR app (3.7/3.8)

    Does anyone see this issue happening? In Flash Pro it's OK, but in AIR, it's broken.
    https://bugbase.adobe.com/index.cfm?event=bug&id=3558175
    Problem Description:
    Some JPEG-compressed images in an swc produced by Flash Pro CS6 are not shown in AIR.
    Steps to Reproduce:
    1. Create a fla with Flash Pro CS6
    2. Put a png image in it and open the property of the image to make sure its compression option is Photo (JPEG)
    3. Produce an swc out of the fla
    4. Create an AIR app that shows the contents in the swc
    Actual Result:
    Some of the images are not shown (nothing is shown where they are supposed to be)
    Expected Result:
    All images are shown
    Any Workarounds:
    Use Lossless (PNG/GIF) for all images

    I was able to get it to work from a suggestion in another thread: write a JSFL script that goes through all your bitmaps and makes sure they do not use the default compression of the document, but instead use custom compression (which can match the default, however). This worked for me.
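    In JSFL terms the workaround looks roughly like this (a sketch; the quality value is an assumption), run as a command in Flash Pro with the FLA open:

        // Force every library bitmap onto explicit Photo (JPEG) compression
        // instead of inheriting the document default, per the workaround above.
        var items = fl.getDocumentDOM().library.items;
        var count = 0;
        for (var i = 0; i < items.length; i++) {
            if (items[i].itemType == "bitmap") {
                items[i].compressionType = "photo";       // Photo (JPEG)
                items[i].useImportedJPEGQuality = false;  // custom, not document default
                items[i].quality = 80;                    // assumed value; match your default if you like
                count++;
            }
        }
        fl.trace("Updated " + count + " bitmap items.");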
