Changing bit depth

Is there any way of changing bit depth in Aperture 3? I scan 48-bit and 16-bit TIFFs and would like to drop certain pics to 24-bit/8-bit (for storage reasons). I'd obviously prefer not to have to export to Photoshop if possible. Thanks.

I don't know the technical trade-offs of _how_ the reduction in bit depth is done, but there must be a simple, probably automate-able and even batch-able way to effect the conversion.  As an example, you could set Aperture "External Editor File Format" to "TIFF (8-bit)" and simply run "Edit with {external editor}", save the file in the external editor, and then back in Aperture delete your 16-bit originals.
The point -- which you've already understood -- is that you are creating new, different files and adding them to your Aperture Library alongside the originals.  For your workflow this makes sense and seems a good use of Aperture.  You may find, though, that the storage savings from lowering the bit depth are not worth the administrative cost of creating these replacement files and deleting the others.
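
Purely for reference, the reduction itself is just a per-channel requantisation. Here is a minimal stand-alone sketch in plain C++ of what any 16-bit-to-8-bit drop boils down to (hypothetical sample values, not Aperture's actual code):

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Reduce 16-bit samples to 8-bit by keeping the high byte of each value
    // (i.e. integer-dividing by 256).
    std::vector<uint8_t> to8Bit(const std::vector<uint16_t>& samples16)
    {
        std::vector<uint8_t> samples8;
        samples8.reserve(samples16.size());
        for (uint16_t s : samples16)
            samples8.push_back(static_cast<uint8_t>(s >> 8));
        return samples8;
    }

    int main()
    {
        std::vector<uint16_t> pixels = { 0, 257, 32896, 65535 }; // hypothetical 16-bit channel values
        for (uint8_t v : to8Bit(pixels))
            std::cout << int(v) << ' ';  // prints 0 1 128 255
        std::cout << '\n';
    }

Whatever Aperture or the external editor does internally will be more sophisticated (dithering, colour management), but the storage saving is exactly this: half the bytes per channel.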

Similar Messages

  • Changing bit depth/greyscale palette in Photoshop CS4

    Hi,
I have Photoshop CS4 and I am trying to reduce a 256-level greyscale palette to a 16-level greyscale palette for a greyscale GIF image. It is set to Indexed Color.
    How do I do this?
    I have tried to search for this information online, but I have had no success.
    Thank you!

    Hi,
    Could you give me a step by step how to do that?
    Do I go to Image -> Mode?
    I cannot find the palette in Photoshop to change from 256 to 16.
    I am a beginner in Photoshop, thank you!
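
    Not a Photoshop answer, but to make concrete what the reduction does numerically, here is a small plain-C++ sketch (hypothetical pixel values) that maps 256 grey levels onto 16 evenly spaced ones; Photoshop's Indexed Color conversion with an adaptive palette picks the 16 levels more cleverly, but the idea is the same:

        #include <cstdint>
        #include <iostream>
        #include <vector>

        // Map 256 grey levels (0-255) down to 16 evenly spaced levels.
        uint8_t to16Levels(uint8_t grey)
        {
            uint8_t index = grey / 16;                 // which of the 16 palette entries (0-15)
            return static_cast<uint8_t>(index * 17);   // spread the entries back over 0-255: 0, 17, ..., 255
        }

        int main()
        {
            std::vector<uint8_t> pixels = { 0, 30, 128, 200, 255 }; // hypothetical greyscale values
            for (uint8_t p : pixels)
                std::cout << int(to16Levels(p)) << ' ';  // prints 0 17 136 204 255
            std::cout << '\n';
        }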

  • Photoshop changing bit depth?

If I click Open Image on a 16-bit image in Camera Raw, Photoshop opens it as 16-bit. But if I open it from within Photoshop or from Bridge, it opens as 8-bit (even though it was a 16-bit TIFF image).

    When encountering an issue with Raw 16 bit images in Bridge opening in Photoshop CS4 as 8 bit images, check the following:
    Go to:
    Adobe Bridge>Edit>Camera Raw Preferences
    In the Camera Raw Preferences window, ensure that the first field, "Save image settings in:", is set to: "Camera Raw database".
Please note that selecting the "Sidecar '.xmp' files" option can result in 16-bit images selected in Bridge opening in Adobe Photoshop CS4 as 8-bit images.

  • PS-CC 2014 (latest) - changing 32-bit depth file to 16-bit

I opened a file that happened to be 32-bit depth (I don't have too many, and I'm not even sure how that one got to be 32-bit), and because a lot of filters don't work with that, I changed it to 16-bit. But when I did, the HDR Toning dialogue appeared, and I HAD to click OK on it to get the image to convert to 16-bit (Cancel on the dialogue leaves it at 32-bit). So you have to choose a method in that dialogue that makes no change to the image. Weird & wrong . . .

Sorry, no solution to the problem, but a confirmation. I have the same problem (Intuos Pro and Pen & Touch) with ACR 8.5 and Photoshop (CC 2014, CC and CS6, all updated with ACR 8.5). No such problem before ACR 8.5, and no problem with LR 5.5 (which also contains ACR 8.5).
I hope there will be a solution from Adobe soon, since it seems to be caused by the ACR update.
    Windows 8.1 in my case and latest Wacom Intuos driver installed.
    Thomas
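
    For what it's worth, the conversion you are after is conceptually trivial: clamp the 32-bit float values to the 0.0-1.0 range and scale them to 16-bit integers, which (as far as I can tell) is roughly what the Exposure and Gamma method with default settings does. A stand-alone sketch in plain C++ (hypothetical values, not Adobe's code):

        #include <algorithm>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        // Convert 32-bit float samples (nominal 0.0-1.0 range) to 16-bit integers
        // without any tone mapping: clamp, scale, round.
        uint16_t floatTo16(float v)
        {
            float clamped = std::min(1.0f, std::max(0.0f, v)); // HDR values above 1.0 are simply clipped
            return static_cast<uint16_t>(clamped * 65535.0f + 0.5f);
        }

        int main()
        {
            std::vector<float> hdr = { -0.2f, 0.0f, 0.5f, 1.0f, 3.7f }; // hypothetical 32-bit channel values
            for (float v : hdr)
                std::cout << floatTo16(v) << ' ';  // prints 0 0 32768 65535 65535
            std::cout << '\n';
        }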

  • Can I change the bit depth on images in pdf files?

    I have a lot of pdf files that were scanned in 24 bit colour. I'd like to convert some of them to greyscale or black and white, and reduce the resolution to make them smaller.
    I can see how to reduce the resolution with Save As Other/Optimized PDF, but there are no options there to reduce bit depth. Is there any way to do this?

    Thanks, I think I've worked out how to use them. I found a fixup called "Convert color to B/W", but it seems to convert to greyscale, not black and white.
    I found this page describing how to convert to both greyscale and monochrome. It says the only way to do monochrome is to convert to tiff first:
    http://blogs.adobe.com/acrolaw/2009/10/converting-color-pdf-to-greyscale-pdf-an-update/
    If that's the case then Acrobat Pro isn't going to help me, but that was written in 2009. Does anyone know if true black and white conversion has been made available since then?
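
    In case it helps to see what the greyscale step itself amounts to, here is a minimal plain-C++ sketch (Rec. 601 luma weights, hypothetical pixel values; Acrobat's "Convert color to B/W" fixup is of course colour-managed rather than a fixed formula):

        #include <cstdint>
        #include <iostream>

        // Collapse a 24-bit RGB pixel to a single 8-bit grey value using the
        // common Rec. 601 luma weights.
        uint8_t rgbToGrey(uint8_t r, uint8_t g, uint8_t b)
        {
            return static_cast<uint8_t>(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
        }

        int main()
        {
            // A few hypothetical scanned-page colours.
            std::cout << int(rgbToGrey(255, 255, 255)) << ' '   // white paper  -> 255
                      << int(rgbToGrey(40, 40, 40))    << ' '   // dark ink     -> 40
                      << int(rgbToGrey(200, 60, 60))   << '\n'; // reddish mark -> 102
        }

    Going from greyscale to true black and white (1 bit per pixel) is a separate thresholding or dithering step, which is presumably why the blog post routes it through TIFF.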

  • Filter plugin. Problem after change image depth.

    Hi All !
I have already written a filter plugin. It works fine, but only at 8-bit image depth; when I change the image depth to 16 or 32 bits I get an error message box from Photoshop.
I have tried changing it to 'destination.colBits = 8', to 'destination.colBits = pChannel->depth', and to '(pChannel->bounds.bottom - pChannel->bounds.top) * pChannel->depth;', but the result is always the same.
    PixelMemoryDesc destination;
    destination.data = data; //*pixel
    destination.depth = pChannel->depth;
    destination.rowBits = (pChannel->bounds.right - pChannel->bounds.left) * pChannel->depth;
    destination.colBits = 8;
    destination.bitOffset = 0 ;
Can someone please help?
Many thanks in advance!
    All code below:
    //  Gauss.cpp
    //  gauss
    //  Created by Dmitry Volkov on 30.12.14.
    //  Copyright (c) 2014 Automatic System Metering. All rights reserved.
    #include "Gauss.h"
    #include "GaussUI.h"
    #include "FilterBigDocument.h"
    #include <fstream>
    using namespace std;
    SPBasicSuite* sSPBasic = NULL;
    FilterRecord* gFilterRecord = NULL;
    PSChannelPortsSuite1* sPSChannelPortsSuite = NULL;
    PSBufferSuite2* sPSBufferSuite64 = NULL;
    int16* gResult = NULL;
    void DoParameters ();
    void DoPrepare ();
    void DoStart ();
    void DoFinish ();
    void DoEffect();
    void GaussianBlurEffect(ReadChannelDesc* pChannel, char* data);
    void ReadLayerData(ReadChannelDesc* pChannel, char* pLayerData);
    void WriteLayerData(ReadChannelDesc* pChannel, char* pLayerData);
    DLLExport MACPASCAL void PluginMain(const int16 selector,
                                        FilterRecordPtr filterRecord,
                                        intptr_t * data,
                                        int16 * result)
    {
        sSPBasic = filterRecord->sSPBasic;
        gFilterRecord = filterRecord;
        gResult = result;
        try
        {
            if (sSPBasic->AcquireSuite(kPSChannelPortsSuite,
                                       kPSChannelPortsSuiteVersion3,
                                       (const void **)&sPSChannelPortsSuite))
                *gResult = errPlugInHostInsufficient;
            if (sSPBasic->AcquireSuite(kPSBufferSuite,
                                       kPSBufferSuiteVersion2,
                                       (const void **)&sPSBufferSuite64))
                *gResult = errPlugInHostInsufficient;
            if (sPSChannelPortsSuite == NULL || sPSBufferSuite64 == NULL)
            {
                *result = errPlugInHostInsufficient;
                return;
            }
            switch (selector)
            {
                case filterSelectorParameters:
                    DoParameters();
                    break;
                case filterSelectorPrepare:
                    DoPrepare();
                    break;
                case filterSelectorStart:
                    DoStart();
                    break;
                case filterSelectorFinish:
                    DoFinish();
                    break;
            }
        }
        catch (...)
        {
            if (NULL != result)
                *result = -1;
        }
    }
    void DoParameters () {}
    void DoPrepare () {}
    void DoStart ()
    {
        if (*gResult == noErr)
            if (doUi())
                DoEffect();
    }
    void DoFinish () {}
    #define defColBits 8
    void DoEffect()
    {
        // Start with the first target composite channel
        ReadChannelDesc *pChannel = gFilterRecord->documentInfo->targetCompositeChannels;
        // Calculate the width and height of our filter window
        int32 width = pChannel->bounds.right - pChannel->bounds.left;
        int32 height = pChannel->bounds.bottom - pChannel->bounds.top;
        fstream logFile ("/Volumes/Macintosh Media/GaussLogFile.txt", ios::out);
        logFile << endl << "top " << pChannel->bounds.top;
        logFile << endl << "bottom " << pChannel->bounds.bottom;
        logFile << endl << "left " << pChannel->bounds.left;
        logFile << endl << "right " << pChannel->bounds.right;
        logFile << endl << "depth " << pChannel->depth;
        logFile << endl << "vRes " << gFilterRecord->documentInfo->vResolution;
        logFile << endl << "hRes " << gFilterRecord->documentInfo->hResolution;
        // Get a buffer to hold each channel as we process. Note: we could use standard
        // malloc(size_t) or operator new(size_t), but Adobe recommends sPSBufferSuite64->New()
        // for memory allocation.
        char *pLayerData = sPSBufferSuite64->New(NULL, width*height*pChannel->depth/8);
        if (pLayerData == NULL)
            return;
        // we may have a multichannel document
        if (pChannel == NULL)
            pChannel = gFilterRecord->documentInfo->alphaChannels;
        // Loop through each of the channels
        while (pChannel != NULL && *gResult == noErr)
        {
            ReadLayerData(pChannel, pLayerData);
            GaussianBlurEffect(pChannel, pLayerData);
            WriteLayerData(pChannel, pLayerData);
            // off to the next channel
            pChannel = pChannel->next;
        }
        pChannel = gFilterRecord->documentInfo->targetTransparency;
        // Delete pLayerData
        sPSBufferSuite64->Dispose((char**)&pLayerData);
    }
    void GaussianBlurEffect(ReadChannelDesc* pChannel, char *data)
    {
        // Make sure Photoshop supports the Gaussian Blur operation
        Boolean supported;
        if (sPSChannelPortsSuite->SupportsOperation(PSChannelPortGaussianBlurFilter,
                                                    &supported))
            return;
        if (!supported)
            return;
        // Set up a local rect for the size of our port
        VRect writeRect = pChannel->bounds;
        PIChannelPort inPort, outPort;
        // Photoshop will make us a new port and manage the memory for us
        if (sPSChannelPortsSuite->New(&inPort,
                                      &writeRect,
                                      pChannel->depth,
                                      true))
            return;
        if (sPSChannelPortsSuite->New(&outPort,
                                      &writeRect,
                                      pChannel->depth,
                                      true))
            return;
        // Set up a PixelMemoryDesc to tell how our channel data is laid out
        PixelMemoryDesc destination;
        destination.data = data; //*pixel
        destination.depth = pChannel->depth;
        destination.rowBits = (pChannel->bounds.right - pChannel->bounds.left) * pChannel->depth;
        destination.colBits = defColBits;
        destination.bitOffset = 0;
        // Write the current effect we have into this port
        if (sPSChannelPortsSuite->WritePixelsToBaseLevel(inPort,
                                                         &writeRect,
                                                         &destination))
            return;
        // Set up the parameters for the Gaussian Blur
        PSGaussianBlurParameters gbp;
        int inRadius = 1;
        Fixed what = inRadius << 16;
        gbp.radius = what;
        gbp.padding = -1;
        sPSChannelPortsSuite->ApplyOperation(PSChannelPortGaussianBlurFilter,
                                             inPort,
                                             outPort,
                                             NULL,
                                             (void*)&gbp,
                                             &writeRect);
        if (sPSChannelPortsSuite->ReadPixelsFromLevel(outPort,
                                                      0,
                                                      &writeRect,
                                                      &destination))
            return;
        // Delete the temp ports in use
        sPSChannelPortsSuite->Dispose(&inPort);
        sPSChannelPortsSuite->Dispose(&outPort);
    }
    void ReadLayerData(ReadChannelDesc *pChannel, char *pLayerData)
    {
        // Make sure there is something for me to read from
        Boolean canRead;
        if (pChannel == NULL)
            canRead = false;
        else if (pChannel->port == NULL)
            canRead = false;
        else if (sPSChannelPortsSuite->CanRead(pChannel->port, &canRead))
        {
            // this function should not error, tell the host accordingly
            *gResult = errPlugInHostInsufficient;
            return;
        }
        // if everything is still ok we will continue
        if (!canRead || pLayerData == NULL)
            return;
        // some local variables to play with
        VRect readRect = pChannel->bounds;
        PixelMemoryDesc destination;
        // set up the PixelMemoryDesc
        destination.data = pLayerData;
        destination.depth = pChannel->depth;
        destination.rowBits = pChannel->depth * (readRect.right - readRect.left);
        destination.colBits = defColBits;
        destination.bitOffset = 0;
        // Read this data into our buffer, you could check the read_rect to see if
        // you got everything you desired
        if (sPSChannelPortsSuite->ReadPixelsFromLevel(
                                                      pChannel->port,
                                                      0,
                                                      &readRect,
                                                      &destination))
        {
            *gResult = errPlugInHostInsufficient;
            return;
        }
    }
    void WriteLayerData(ReadChannelDesc *pChannel, char *pLayerData)
    {
        Boolean canWrite = true;
        if (pChannel == NULL || pLayerData == NULL)
            canWrite = false;
        else if (pChannel->writePort == NULL)
            canWrite = false;
        else if (sPSChannelPortsSuite->CanWrite(pChannel->writePort, &canWrite))
        {
            *gResult = errPlugInHostInsufficient;
            return;
        }
        if (!canWrite)
            return;
        VRect writeRect = pChannel->bounds;
        PixelMemoryDesc destination;
        destination.data = pLayerData;
        destination.depth = pChannel->depth;
        destination.rowBits = pChannel->depth * (writeRect.right - writeRect.left); //HSIZE * pChannel->depth * gXFactor*2;
        destination.colBits = defColBits;
        destination.bitOffset = 0;
        if (sPSChannelPortsSuite->WritePixelsToBaseLevel(
                                                         pChannel->writePort,
                                                         &writeRect,
                                                         &destination))
        {
            *gResult = errPlugInHostInsufficient;
            return;
        }
    }
    Have you reviewed your code vs the Dissolve example? It is enabled for other bit depths as well.
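
    If it helps to narrow the comparison down: in the code above every PixelMemoryDesc hard-codes defColBits (8) for colBits, while rowBits already follows pChannel->depth, so the described memory layout only matches 8-bit documents. Here is a sketch of the kind of adjustment to look for in the Dissolve sample, using only the fields already shown above (not verified against the SDK headers, and it would need to be applied consistently in GaussianBlurEffect, ReadLayerData and WriteLayerData):

        // Describe the scratch buffer so its layout matches the channel's actual depth.
        // colBits is the per-pixel (column) stride in bits, so at 16- or 32-bit depth
        // it should track pChannel->depth the same way rowBits does.
        PixelMemoryDesc destination;
        destination.data      = pLayerData;
        destination.depth     = pChannel->depth;
        destination.rowBits   = (pChannel->bounds.right - pChannel->bounds.left) * pChannel->depth;
        destination.colBits   = pChannel->depth;   // was: defColBits (8)
        destination.bitOffset = 0;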

  • Maximum Bit Depth /Maximum Render Quality  Questions

    Maximum Bit Depth
    If my project contains high-bit-depth assets generated by high-definition camcorders, I was told to select Maximum Bit Depth because Adobe Premiere Pro uses all the color information in these assets when processing effects or generating preview files. I'm capturing HDV using the Matrox RTX-2 Hardware in Matrox AVI format.
    When I finally export my project using Adobe Media Encoder CS4, will selecting Maximum Bit Depth provide better color resolution once I post to Blu-ray format?
    Maximum Render Quality
I was told that Maximum Render Quality maintains sharp detail when scaling from large formats to smaller formats, or from high-definition to standard-definition formats, and that it also maximizes the quality of motion in rendered clips and sequences, rendering moving assets more sharply. It's my understanding that at maximum quality, rendering takes more time and uses more RAM than at the default normal quality. I'm running Vista 64-bit with 8 GB of RAM, so I'm hoping to take advantage of this feature.
Will this also help improve resolution when I finally export my project using Adobe Media Encoder CS4 and post to Blu-ray format?
    Does it look like I have the specs to handle Maximum Bit Depth and Maximum Render Quality when creating a new HDV project with the support of the Matrox RTX 2 Hardware capturing in Matrox AVI format? See Below Specs.
    System Specs
    Case: Coolmaster-830
Op System: Vista Ultimate 64 Bit
    Edit Suite: Adobe Creative Suite 4 Production Premium Line Upgrade
    Adobe Premiere Pro CS 4.0.1 update before installing RT.X2 Card and 4.0 tools
    Performed updates on all Adobe Production Premium Products as of 03/01/2009
    Matrox RTX2 4.0 Tools
    Main Display: Dell 3007 30"
    DVI Monitor: Dell 2408WFP 24"
    MB: ASUS P5E3 Deluxe/WiFi-AP LGA 775 Intel X38
    Display Card: SAPPHIRE Radeon HD 4870 512MB GDDR5 Toxic ver.
    PS: Corsair|CMPSU-1000HX 1000W
    CPU: INTEL Quad Core Q9650 3G
    MEM: 2Gx4|Corsair TW3X4G1333C9DHXR DDR3 (8 Gigs Total)
    1 Sys Drive: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB
    Cache SATA 3.0Gb/s
2 Raid 0: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB Cache SATA 3.0Gb/s using Intel's integrated RAID controller on MB

Just some details that I find useful on Maximum Bit Depth:
You really need it even with 8-bit source files when using heavy grading/multiple curves/vignettes. If after grading you see banding, go to Sequence > Sequence Settings from the top menu and check "Maximum Bit Depth" (ignore the performance popup), then check your preview again (it will change in a second) to see if banding is still present in 32-bit mode. If there is no banding, check the option when exporting; if banding is still there, change your grading, then uncheck it to continue editing.
    Unfortunately Maximum bit depth exporting is extremely time-consuming, but can really SAVE YOUR DAY when facing artifacts after heavy grading, by completely or almost completely eliminating banding and other unwanted color distortions.
    Use it only for either small previews or the really final output.
    Best Regards.
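
    A quick way to see why the 32-bit path helps with banding: push a gradient through a strong adjustment at 8-bit integer precision and the rounding collapses neighbouring values onto the same output level; do the same maths in float and the levels stay distinct until the final export. A small stand-alone sketch in plain C++ (a simple gamma stands in for a real grade, values are hypothetical):

        #include <cmath>
        #include <cstdint>
        #include <iostream>

        int main()
        {
            const double gamma = 4.0; // stand-in for a heavy grade (curves, vignette, etc.)
            for (int v = 120; v <= 126; ++v) {
                // 8-bit pipeline: the result is rounded to an integer level after the grade.
                uint8_t eightBit = static_cast<uint8_t>(std::round(255.0 * std::pow(v / 255.0, gamma)));
                // 32-bit float pipeline: full precision is kept until the final output.
                double  floatVal = 255.0 * std::pow(v / 255.0, gamma);
                std::cout << v << " -> 8-bit " << int(eightBit) << ", float " << floatVal << '\n';
            }
            // Inputs 120-122 all collapse onto the same 8-bit level (that is the banding),
            // while the float values remain distinct and can still be dithered at export.
        }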

  • Final cut pro millions of colours + bit depth question

    Hello
I am working in Final Cut Pro 7 and I wanted to know the maximum bit depth I can export using the ProRes codec. All I see in the compression settings for rendering my timeline with ProRes 4444 are the options 'Millions of Colors' and 'Millions of Colors +'. I was under the impression that Millions of Colors referred to 8-bit. Does the alpha channel mean I can get 10-bit? Can the alpha channel hold 2 more bits per channel, or something? Or is there no way to export a 10-bit file using the ProRes codec within FCP 7 -- is it all just 8-bit? Also, when I select 422 HQ there are no advanced options for Millions of Colors; what does that mean? Is the only way to get 10-bit out of FCP 7 to render with the 10-bit uncompressed codec? And if so, can I render the timeline in ProRes while I'm working with it, then delete all the renders and change the render codec to 10-bit uncompressed? Will this now properly give me 10-bit from the original 4444 12-bit files I imported in the beginning?
    Any help is much appreciated

ProRes is 10-bit. Every ProRes codec is 10-bit: LT, 422, HQ. Not one of them is 8-bit. Except for ProRes 4444 -- that's 12-bit.

  • Bit depth question

    Hello,
    My audio device can sample up to 24-bit. When I create a 32-bit file in Audition 2.0 and record material using the audio device, Audition tells me the file is 32-bit, and it is indeed twice the size of an equivalent 16-bit file. But is it really a 32-bit file, and could there be any issues with the file? It seems fine in every way.
    Thank you.

    No audio hardware actually samples at greater than 24-bit, because there's absolutely no point - even 24-bit depth isn't actually usable in full; a system like this could in theory digitise a noise floor way lower than can be physically achieved by any mic and preamp system available - you'd need at least liquid nitrogen cooling of all the components before you even started to look at the rest of the problems!
So why does Audition record in 32-bit? Well, 32-bit Floating Point digitising is a bit different. The actual 24-bit signal is recorded quite faithfully (although not quite in the form of an integer signal) and the other 8 bits are essentially left as zeros during recording. What they actually are is scaling bits. And this comes in seriously useful when processing. What it means is that your original signal can be scaled up and down without loss. In an integer engine, if you decided to throw away 30dB of a signal, saved the result and reopened the file and amplified it again, you'd find that your 24-bit signal was effectively 19-bit. In Audition, if you did exactly the same thing with a 32-bit Floating Point signal, you wouldn't lose any bit depth at all. No it's not magic - it's just the effect of storing the original data in a form that inherently doesn't get modified when an amplitude change is asked for - it's only the scaling system that does, and this doesn't contain audio data.
    So yes it's a real 32-bit signal - but not all of those 32 bits are used until you do some processing.
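
    The "throw away 30 dB and bring it back" comparison is easy to reproduce. A minimal plain-C++ sketch with one hypothetical 24-bit sample, using a factor of 32 (roughly 30 dB) so the float path is exactly reversible:

        #include <cmath>
        #include <cstdint>
        #include <iostream>

        int main()
        {
            const int32_t sample24 = 8372215;   // hypothetical 24-bit sample (full scale is +/-8388607)

            // Integer pipeline: attenuate, store as a 24-bit integer, amplify again.
            int32_t storedInt   = static_cast<int32_t>(std::lround(sample24 / 32.0)); // 261632, low 5 bits gone
            int32_t restoredInt = storedInt * 32;                                     // 8372224, not the original

            // 32-bit float pipeline: the mantissa still carries the full 24-bit value,
            // only the exponent (the "scaling bits") changes, so the round trip is exact.
            float storedFloat   = static_cast<float>(sample24) / 32.0f;
            float restoredFloat = storedFloat * 32.0f;                                // 8372215 again

            std::cout << "original " << sample24 << ", integer round trip " << restoredInt
                      << ", float round trip " << static_cast<int32_t>(restoredFloat) << '\n';
        }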

  • Reduce bit depth or convert color profile first? (best practices question)

For making final deliverable files from working files, is it best to convert to a new color profile before reducing bit depth? Or vice versa?
    Our working files are 16 bit with the ProPhoto color space. Our deliverable files are 8 bit AdobeRGB tiffs and sRGB jpegs. We convert using relative colorimetric with black point compensation. Does it make a difference which order these changes are made in?
    Thanks in advance for your help!

    A profile conversion recalculates RGB values, so yes, it should be done in 16 bit depth.
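
    A quick numerical illustration of why the order matters (a simple gamma re-encode stands in for the profile conversion; a real ICC conversion also mixes channels, but the rounding argument is the same):

        #include <cmath>
        #include <cstdint>
        #include <iostream>

        // Stand-in for a colour-space conversion: re-encode gamma 1.8 material to gamma 2.2.
        double convert(double x) { return std::pow(x, 1.8 / 2.2); }

        int main()
        {
            int mismatches = 0;
            for (int v = 0; v <= 65535; ++v) {
                // Path A: convert while still at high (16-bit) precision, reduce to 8-bit once at the end.
                uint8_t a = static_cast<uint8_t>(std::round(convert(v / 65535.0) * 255.0));
                // Path B: reduce to 8-bit first, then convert the already-quantised value.
                uint8_t q = static_cast<uint8_t>(std::round(v / 65535.0 * 255.0));
                uint8_t b = static_cast<uint8_t>(std::round(convert(q / 255.0) * 255.0));
                if (a != b) ++mismatches;
            }
            // Path B lands on a slightly different 8-bit value for a noticeable share of
            // inputs, because the conversion is fed data that has already been rounded.
            std::cout << mismatches << " of 65536 values end up different\n";
        }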

  • How to view resolution (ppi/dpi) and bit depth of an image

    Hello,
    how can I check the native resolution (ppi/dpi) and bit depth of my image files (jpeg, dng and pef)?
    If it is not possible in lighroom, is there a free app for Mac that makes this possible?
    Thank you in advance!

I have used several different cameras, which probably have different native bit depths. I assume that Lr converts all RAW files to 16 bits, but the original/native bit depth still affects the quality, right? Therefore, it would be nice to be able to check the native bit depth of an image and, e.g., compare it to an image with a different native bit depth.
I know a little bit of detective work would solve the issue, but it would be more convenient to be able to view native bit depth in Lightroom, especially when dealing with multiple cameras, some of which might have the option to use different bit depths, which would make the matter significantly harder.
This issue is certainly not critical and doesn't fit into my actual workflow. As I stated in a previous post, I am simply curious and want to learn, and I believe that being able to compare images with different bit depths conveniently would be beneficial to my learning process.
Anyway, I was simply checking if somebody happened to know a way to view bit depth in Lr4, but I take it that it is not possible, and I can certainly live with that.

    Check the specifications of your camera to know at what bit depth it writes Raw files. If you have a camera in which the Raw bit depth can be changed the setting will probably be recorded in a section of the metadata called the Maker Notes (I don't believe the EXIF standard includes a field for this information). At any rate, LR displays only a small percentage of the EXIF data (only the most relevant fields) and none of the Maker Notes. To see a fuller elucidation of the metadata you will need a comprehensive EXIF reader like ExifTool.
    However, the choices nowadays are usually 12 bit or 14 bit. I can assure you that you cannot visually see any difference between them, because both depths provide a multiplicity of possible tonal levels that is far beyond the limits of human vision - 4,096 levels for 12 bit and 16,384 for 14 bit. Even an 8 bit image with its (seemingly) paltry 256 possible levels is beyond the roughly 200 levels the eye can perceive. And as has been said, LR's internal calculations are done to 16 bit precision no matter what the input depth (although your monitor is probably not displaying the previews at more than 8 bit depth) and at export the RGB image can be written to a tiff or psd in 16 bit notation. The greater depth of 14 bit Raws can possibly (although not necessarily) act as a vehicle for greater DR which might be discerned as less noise in the darkest shadows, but this is not guaranteed and applies to only a few cameras.

  • Maximum bit depth-maximum render quality when dynamic linking

    Hi
    A bit confused by the use of Maximum bit depth and Maximum render quality as used both in Sequence Settings and also as options when rendering in AME.
    1 Do you need to explicitly enable these switches in the sequence for best quality or, do you simply need to switch them on in AME when you render in Media Encoder?
    2 When dynamic linking to After Effects, when should you use an 8 bit vs 16 or 32 bit working space, and, how does this bit depth interact with the maximum bit depth, maximum render quality in PPro?

    Hi jbach2,
I understand your confusion.  I'm like that most of the time I'm working. *chuckle*  The two settings you mentioned are two completely different parameters affecting (or is it effecting) your video. You do not need to enable them within the sequence itself unless you want to preview video on your program monitor at the highest quality.  I personally don't recommend it, as it's a tremendous resource hog (the program even warns you when you try to click them) and unnecessary for improving final output.  Again, do not enable these options in your sequence settings if you only want a high-quality export. Doing so will greatly reduce your editing performance unless you have a high-end system, and even then I don't think it's worth it unless you're editing on a huge screen with a Director who wants to see everything at maximum quality during the edit process.
    Keeping it simple...
Resizing your final output video? Use Maximum Render Quality.
Starting from or working with high bit-depth sources? Use Maximum Bit Depth.
    When/where do I enable these? In the AME only. ^_^
    Why?:
    Enabling the Max bit and Max render only needs to be done when you are exporting.  They both serve different functions. 
    Max Render aids in the scaling/conversion process only.  My understanding is that you never need to enable the Max Render Quality (MRQ) unless you are exporting in a format/pixel ratio different from your original video.  For example, when rendering a 1080p timeline out to a 480p file format, you'll want to use MRQ to ensure the best scaling with the least amount of artifacts and aliasing.  If you're exporting at the same size you're working with, DON'T enable MRQ.  It will just cost you time and CPU. Its only function is to do a high quality resizing of your work.
    Maximum bit depth increases the color depth that your video is working with and rendering to.  If you're working with video that has low color depth, then I don't believe it will matter.  However, if you're working with 32 bit color on your timeline in PPro and/or After Effects, using lots of graphics, high contrast values, or color gradients, you may want to enable this option. It ultimately depends on the color depth of your source material.
    The same applies to After Effects.
Create something in AE like a nice color gradient.  Now switch the same project between 8, 16, and 32 bit depth, and you will see a noticeable difference in how the bit depth affects your colors and the smoothness of the gradient.
Bit depth affects how different plugins/effects change your overall image.  Higher depth means more colors to work with (and, incidentally, more CPU you need).
    Just remember that "DEPTH" determines how many colors you can "fill your bucket with" and "QUALITY" is just that, the quality of your "resize".
    http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html
    Check out this adobe blog for more info on color depth ^_^  Hope that helps!
    ----a lil excerpt from the blog i linked to above---
    Now, 8-bit, 10-bit, and 12-bit color are the industry standards for recording color in a device. The vast majority of cameras use 8-bits for color. If your camera doesn’t mention the color bit depth, it’s using 8-bits per channel. Higher-end cameras use 10-bit, and they make a big deal about using “10-bit precision” in their literature. Only a select few cameras use 12-bits, like the digital cinema camera, the RED ONE.
    Software like After Effects and Premiere Pro processes color images using color precision of 8-bits, 16-bits, and a special color bit depth called 32-bit floating point. You’ve probably seen these color modes in After Effects, and you’ve seen the new “32″ icons on some of the effects in Premiere Pro CS5.
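
    If it helps, the rule of thumb above can be written down as a tiny decision sketch (purely illustrative, not anything Adobe ships):

        #include <iostream>

        // Encode the rule of thumb: Maximum Render Quality is about resizing,
        // Maximum Bit Depth is about colour precision. Both are export-time choices.
        struct ExportJob {
            bool resizedOrRescaled;   // output frame size / pixel aspect differs from the source
            bool highBitDepthSource;  // 10/12-bit sources, 32-bit effects, gradients, heavy grading
        };

        void recommend(const ExportJob& job)
        {
            std::cout << "Maximum Render Quality: " << (job.resizedOrRescaled ? "on" : "off") << '\n'
                      << "Maximum Bit Depth:      " << (job.highBitDepthSource ? "on" : "off") << '\n';
        }

        int main()
        {
            recommend({ true,  false }); // e.g. 1080p timeline delivered at 480p
            recommend({ false, true  }); // e.g. same-size export of heavily graded footage
        }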

  • Changing color depth in illustrator

OK, my problem is that I am trying to upload a file that I made in Illustrator, but the site will not accept the JPG because it is "32 bit depth". I am using the Export feature to preserve the quality of the image. It works if I Save for Web, but it does not look good at all, and I would like to print this image. Is there any way to make Illustrator save 24-bit depth instead of 32?

    You need to export as RGB instead of CMYK. Illustrator exports 8 bits per channel, so CMYK means four channels or a total bit-depth of 32 bits. Save for web only exports RGB so of course there’s no issue with bit depth.

  • Bit Depth 8, 16 or 32???

I have been working in a bit depth of 8. I mostly do titles for wedding videos in Motion. Is something going to improve if I change this setting? What is it for?
    Thanks in advance

Mostly it helps if you are working with files like EXR files that contain float information - you can avoid clipping, banding and the like. It doesn't have too much impact for stuff you create directly in Motion, but can help for things like extensive blurs or where you see posterization occurring. As Patrick states, though, big hit in render time. IMHO you have no need for it for what you are doing.
    Mark

  • How do I reduce the bit depth of images to 1-bit within Acrobat 9?

    I am hoping a simple solution exists within Acrobat 9 for reducing the bit-depth of images to 1-bit.
    I know of two methods that both seem more like workarounds. One, edit the image using Photoshop. Two, without Photoshop, export the page as a 1-bit PNG and recreate the page in Acrobat. It seems like one of the preflight fixups should be able to get it done with the right settings. But, it's a labyrinth of unfamiliarity.

    There's no predefined 1-bit conversion in Preflight because it doesn't make sense. Preflight will not dither bitmaps, so most images will become black squares. Extreme color conversion is only intended for text/vector objects.
    If you want to try it anyway, you can create a custom Fixup if you have a  1-bit ICC profile.
    Preflight > Single Fixups
    Options menu > Create new Preflight Fixup
    Name it something like "Convert all to 1-bit BW"
    Search for "Convert colors" in the type of fixup box and add it
    Destination tab > Destination > your ICC profile for 1-bit black
    Uncheck "use destination from Output Intent"
    Keep everything else as default, though I'd suggest using "Embed as output intent for PDF/X" if you're making PDF/X documents
    Conversion Settings tab > All Objects + Any Color (except spot) + Convert to destination + Use rendering intent
    Press the + key to duplicate this step, and change the second copy to "Spot Color(s)"
    Press + again and change the third copy to "Registration color"
    Save the fixup and run it.
    In case you don't have a 1-bit  ICC profile installed, I've attached one.
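
    As a footnote on the "black squares" warning: a straight 1-bit conversion simply thresholds each pixel, so continuous-tone areas all fall on the same side of the cut-off, whereas error-diffusion dithering (Floyd-Steinberg and friends, which Photoshop offers and Preflight does not) spreads the rounding error around to preserve the average tone. A tiny plain-C++ sketch on one hypothetical greyscale scanline:

        #include <iostream>
        #include <vector>

        int main()
        {
            // One row of mid-grey pixels from a hypothetical scanned photo (8-bit values).
            std::vector<int> row = { 96, 100, 104, 108, 112, 116, 120, 124, 128, 132 };

            // Plain threshold: everything below mid-grey goes black, the rest goes white.
            std::cout << "threshold: ";
            for (int v : row)
                std::cout << (v < 128 ? 0 : 1) << ' ';
            std::cout << '\n';

            // 1-D error diffusion (the idea behind Floyd-Steinberg): carry the rounding
            // error to the next pixel so the average tone survives the 1-bit conversion.
            std::cout << "dithered:  ";
            double error = 0.0;
            for (int v : row) {
                double value = v + error;
                int    bit   = value < 128 ? 0 : 1;
                error = value - bit * 255;
                std::cout << bit << ' ';
            }
            std::cout << '\n';
        }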
