RGB or 8-bit YUV??

Hi there,
I have a Sony SR12 and have ingested my material as Apple ProRes. I'm trying to apply some basic effects, but I get 'Effect "" failed to render' errors.
I went to Sequence Settings and it was on 10-bit YUV. I changed this to RGB, and it seems to work fine. But I also had a go with 8-bit YUV, and that works fine too...
Which is the best setting and why? (Layman's terms preferred!)
Thanks!

Thanks philb. I've taken it as a given that my system - even though it's an iMac running OS X 10.5.7 with an ATI HD 2600 Pro graphics card - can't cope with the most basic of effects at 10-bit.
I'm not sure why. It seems 8-bit is enough though, based on some tests. Many thanks!

Similar Messages

  • File Import Error / distorted image CS3 3.2.0 - Blackmagic Uncompressed 10-bit YUV NTSC 4:3

    Hi All-
    I have been capturing some old VHS tapes using Blackmagic Media Express via a Decklink HD Extreme 2 card (composite in, RCA audio in) and then subsequently editing the files in Premiere Pro CS3 ver. 3.2.0.  But I'm running into a strange problem with the image being very distorted upon import into Premiere.  
    The capture settings in Blackmagic Media Express are as follows: NTSC, Uncompressed 10-bit YUV, 29.97, drop frame.  After capture and before editing in Premiere, I watch the capture file in Windows media player and it looks and sounds fine.
    When I create a new Premiere project to edit the clip, I use the following preset: Blackmagic Design, NTSC, 10-bit YUV, 4 x 3. 
    When I import the .avi file created during the media express capture into the new Premiere project, the image becomes extremely distorted.  However, if I choose any other preset in Premiere, the image looks OK, which seems strange because the other presets do not match the import file settings.  I should note that when I render the distorted looking file out using the Premiere media encoder, the file looks OK.
    Any suggestions?  I cannot figure out why the video is not importing or showing correctly in Premiere. Help!
    Thanks a bunch,
    Rex

    Has this process EVER produced non-distorted video when imported into Premiere?
    If this is a NEW problem, then what has changed since the process last worked properly?
    If this is a FIRST TIME process and problem, my guess is a video driver that is not 100% compatible with what you are doing... you provide no other information, so it could be you need a newer (or, sometimes, older) video driver
    Windows updates have been known to cause problems
    Work through all of the steps (ideas) listed at http://ppro.wikia.com/wiki/Troubleshooting
    If your problem isn't fixed after you follow all of the steps, report back with ALL OF THE DETAILS asked for in the FINALLY section, the questions at the end of the troubleshooting link... most especially the codec used... see Question 1

  • FCP6 any problem using "Render in 8-bit YUV" instead of "Render 10-bit material in high-precision YUV" video processing

    I have a long and complex 1080p FCP6 project using ProRes 422. It is made up of mostly high-resolution stills and some 1280i video clips. Rendering has always been a nightmare: it takes extremely long and makes frequent mistakes which have to be re-rendered. Just today, I discovered the option of selecting "Render in 8-bit YUV" instead of "Render 10-bit material in high-precision YUV" video processing. The rendering time is cut down to a fraction, and even on a large HD monitor I can tell no difference in quality. I am getting ready to re-render the entire project in 8-bit and just wanted to check whether changing to 8-bit would pose problems and/or limitations that I'm not aware of. This is not a broadcast or Hollywood film thing, but it does represent my artwork and I burn it to Blu-ray, so I do want it to look good. Like I said, I can tell no difference between the 8-bit and 10-bit color depth with the naked eye. Thank you all for all the help you have always given me with my many questions in the past.

    Unless you have a 10-bit monitor (rare and very expensive) you cannot see the difference, as your monitor is 8-bit.
    10-bit is useful for compositing and color grading. Otherwise, 8-bit is fine for everything else.

  • Render in 8-bit YUV or Render all YUV material in high-precision YUV

    Friends,
    I'm a wedding videographer and I work with MiniDV cameras (PD150). On one particular job I will have a lot (I mean a LOT) of color correction to do, so should I work my sequence with the setting Render in 8-bit YUV or Render all YUV material in high-precision YUV?
    Thanks again!!!

    I read your commentary:
    "Selecting this option does not add quality to clips captured at 8-bit resolution when output back to video; it simply improves the quality of rendered effects that support 10-bit precision."
    So, is the 3-Way Color Corrector an effect that supports 10-bit precision? If I use "Render all YUV material in high-precision YUV", will the video look nicer?
    Thanks again

  • 10-bit RGB really 8-bit?

    When I capture 10-bit RGB files in Final Cut Pro they show up as 8-bit files when I bring them into Shake. Any ideas as to why this happens?

    Interesting. I wasn't sure if Final Cut Pro was capturing 10-bit files properly or not. I've been reading that FCP can only render RGB in 8-bit so I wasn't sure if it was really capturing 10 bit files as 8-bit. I am able to create 10-bit RGB files in After Effects and Shake will default to working in 16-bits when I import them.
    Do you have experience with Glue Tools? Does it allow one to export all the 10-bit data as DPX files?

  • How to convert color image(24 bit) YUV 4:2:2 to gray scale 8 bit image

    I am using a Sony DFW-X700 color camera for one of our vision applications. Does the NI Compact Vision System (CVS) support the YUV 4:2:2 format (8 bits each)? I want to do grayscale processing, so I need to convert the YUV color into grayscale (8-bit) in software (like LabVIEW). Please suggest how to do this conversion for the best grayscale image clarity from color.

    In the YUV color space, Y represents the grayscale value; in the RGB color space, R=G=B represents gray. You can simply set R=G=B=Y to convert YUV to grayscale RGB. If the original color depth is 24-bit, then the result is 24-bit too.
    You can create a grayscale color table like this:
    array size = 256;
    [0] = 0x000000;
    [1] = 0x010101;
    [2] = 0x020202;
    ...
    [255] = 0xFFFFFF;
    To convert 24-bit grayscale to 8-bit, check every pixel in the 24-bit image to find the array index according to the color table, and replace the pixel with the array index.
    George Zou
    http://gtoolbox.yeah.net
    http://webspace.webring.com/people/og/gtoolbox
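    If it's useful, here is a minimal stand-alone sketch of that idea in C++ (the equivalent plane extraction is usually available directly in the vision library). It assumes the camera delivers packed UYVY-ordered 4:2:2 with an even width; that byte order is an assumption, so adjust the offsets if the buffer is actually YUY2. Since the Y samples are already 8-bit luma, no 24-bit intermediate image or color table is needed:

    #include <cstdint>
    #include <vector>

    // Convert a packed YUV 4:2:2 buffer to an 8-bit grayscale image by
    // keeping only the luma (Y) samples. Assumes UYVY byte order
    // (U0 Y0 V0 Y1 ...), i.e. 4 bytes per 2 pixels, and an even width.
    std::vector<uint8_t> yuv422ToGray(const uint8_t* yuv, int width, int height) {
        std::vector<uint8_t> gray(static_cast<size_t>(width) * height);
        for (int row = 0; row < height; ++row) {
            const uint8_t* src = yuv + static_cast<size_t>(row) * width * 2;  // 2 bytes per pixel
            uint8_t* dst = gray.data() + static_cast<size_t>(row) * width;
            for (int x = 0; x < width; x += 2) {
                dst[x]     = src[2 * x + 1];  // Y0 (UYVY group: U, Y0, V, Y1)
                dst[x + 1] = src[2 * x + 3];  // Y1
            }
        }
        return gray;
    }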

  • Colours washed out - YUV or RGB?

    I just outputted my sequence to DVD and when I play it on the TV, the colours are washed out (on the computer looks fine). My yellow subtitles are almost white (subtitles made in FCP).
    It's been about six months since my last video and I suspect some sequence settings have been changed. I use a Sony MiniDV camera, import via FireWire and export via Compressor to DVD. Should my render colour space be RGB or 8-bit YUV?
    Or am I barking up the wrong tree?
    Cheers,

    Hey,
    I had the same problem when I started to author DVDs...
    DV has a 0 setup (black level) and broadcast TV is 7.5 Setup. Most DVD players have two settings in the DVD player setup menu - one for richer black (0 setup) or regular (7.5 setup). They come with regular black as default. Most people don't even know about this setting on their players.
    But when you're looking at your DV sequences (even on an NTSC monitor), you're looking at the DV footage at 0 setup, with richer blacks.
    Then, when you encode, burn and watch it on your DVD player, you're getting an image with 7.5 setup.
    So, you will always get a discrepancy, unless you set your DVD player and your clients' players to richer black, or add about 10 to 15 points of black using a Color Corrector filter inside FCP just before you encode!
    I've decided to adopt the latter, since I've asked around and discovered that pretty much no one messes with that black setting in their DVD players!

  • Render preferences. RGB or YUV?

    I'm a little confused here. I finished my FCP training last week, so I'm a newbie. Yesterday I was studying FCP 7 and I have a question.
    In Sequence Settings (Video Processing tab) there is some options to choose.
    1- Always Render in RGB
    2- Render in 8-bit YUV
    3- ...
    4-...
    When I edit videos for web/multimedia I render in RGB, and for broadcast, DVD, etc. I render in YUV? Is that correct?
    please help.
    thanks

    If I could afford FCP 7, I would definitely use ProRes 4444.
    Sigh.
    Meanwhile back in my world, I didn't quite understand
    what Luca wrote:
    gogiangigo wrote:
    you may want to offline your edit using Apple ProRes
    (use composite modes if required) and when you are done...
    online everything for the master.
    If I use ProRes 422 for my offline codec, how do I access
    the alpha channel? Seems impossible.
    I was using a bizarre workflow, which I may use again.
    Whenever I had a Shake render with alpha, I would render
    out 2 DVCPRO HD files: One with the RGB data, and one
    B&W with the alpha data. Then I would place them
    both on the FCP timeline, with the RGB track using
    the B&W track in Travel Matte Luma mode.
    That technique works OK, but you've got to remember
    not to premultiply, since with non-alpha clips,
    you don't have the option of setting the alpha to Black.
    Anyway, Luca, if you could explain your idea in a little
    more detail, that would be cool.
    Anyone else out there doing complex layering in FCP?
    FCP 6, that is.

  • RGB vs YUV rendering

    We have been making spots for the web out of DVCPRO HD and DVCPRO 50 material, compressing to h264 for final delivery. One of my editors changed the render setting for our final sequence from RGB to YUV, saying this would help color matching when delivering to the web. I didn't really understand his explanation, so I came here. For DVCPRO HD or DVCPRO 50 material, should I have my rendering set to RGB or YUV? Most of our stuff goes out as SD DVD or Quicktime files for the web.

    Hmmm...the default for DVCPRO HD is 8-bit YUV...so it should always render in that setting. How did it get switched to RGB?
    Shane

  • Photoshop Elements 16 bit/ProPhoto RGB  ?

    At this point in time I don't believe PSE10 or indeed 11 supports 16-bit working or ProPhoto RGB. This is an issue when transferring images from Lightroom for editing in layers etc. I currently use Photoshop CC for this purpose, which is great but, to be honest, complete overkill for my purposes. PSE would be absolutely spot on if it could handle editing in 16-bit and/or ProPhoto RGB.
    Can anyone shed any light on future releases of PSE and if/when this functionality will be included.
    I know these facilities aren't there at the moment.
    Thanks.

    LyndonPshop wrote:
    At this point in time I don't believe PSE10 or indeed 11 supports 16-bit working or ProPhoto RGB.
    16 bits : PSE10 and PSE11 support 16 bits, except for layers and local tools.
    Both can handle native Prophoto files. Just try it with Prophoto files from Lightroom. You are limited to sRGB or aRGB if you are working from your camera's raws or jpegs directly in PSE, but since you have Lightroom, you don't miss anything.
    But the real answer is that advanced Elements users without LR do use the ACR module, which works in 16 bits and Prophoto (from raws or jpegs). Nearly all adjustments where 16 bits is important to avoid posterization can be done in ACR. Anyway all your output devices (display or printer) will require 8 bits and sRGB or aRGB.

  • How to Work in 32-Bit in a Premiere Pro Plug-In

    As I understand it, Adobe Premiere Pro doesn't support "Smart Render" mode as in After Effects.
    Yet I don't understand how it supports 32-bit-per-channel input.
    I'm looking at the SDK sample called SDK Noise, at the following code:
    PrPixelFormat destinationPixelFormat = PrPixelFormat_BGRA_4444_8u;
    if (pixelFormatSuite) {
        (*pixelFormatSuite->GetPixelFormat)(output, &destinationPixelFormat);
        if (destinationPixelFormat == PrPixelFormat_BGRA_4444_8u) {
            ERR(suites.Iterate8Suite1()->iterate(in_dataP,
                                                 0,                          // progress base
                                                 linesL,                     // progress final
                                                 &params[NOISE_INPUT]->u.ld, // src
                                                 NULL,                       // area - null for all pixels
                                                 (void*)&niP,                // refcon - your custom data pointer
                                                 FilterImageBGRA_8u,         // pixel function pointer
                                                 output));
        } else if (destinationPixelFormat == PrPixelFormat_VUYA_4444_8u) {
            ERR(suites.Iterate8Suite1()->iterate(in_dataP,
                                                 0,                          // progress base
                                                 linesL,                     // progress final
                                                 &params[NOISE_INPUT]->u.ld, // src
                                                 NULL,                       // area - null for all pixels
                                                 (void*)&niP,                // refcon - your custom data pointer
                                                 FilterImageVUYA_8u,         // pixel function pointer
                                                 output));
        } else if (destinationPixelFormat == PrPixelFormat_BGRA_4444_32f) {
            // Premiere doesn't support IterateFloatSuite1, so we've rolled our own
            IterateFloat(in_dataP,
                         0,                          // progress base
                         linesL,                     // progress final
                         &params[NOISE_INPUT]->u.ld, // src
                         (void*)&niP,                // refcon - your custom data pointer
                         FilterImageBGRA_32f,        // pixel function pointer
                         output);
        } else if (destinationPixelFormat == PrPixelFormat_VUYA_4444_32f) {
            // Premiere doesn't support IterateFloatSuite1, so we've rolled our own
            IterateFloat(in_dataP,
                         0,                          // progress base
                         linesL,                     // progress final
                         &params[NOISE_INPUT]->u.ld, // src
                         (void*)&niP,                // refcon - your custom data pointer
                         FilterImageVUYA_32f,        // pixel function pointer
                         output);
        } else {
            // Return error, because we don't know how to handle the specified pixel type
            return PF_Err_UNRECOGNIZED_PARAM_TYPE;
        }
        err = AEFX_ReleaseSuite(in_dataP,
                                out_data,
                                kPFPixelFormatSuite,
                                kPFPixelFormatSuiteVersion1,
                                NULL);
    }
    I removed some code related to errors.
    Yet even if I set both the render and the sequence (preview) settings to 32-bit (Max Bit Depth), it still always selects the 8-bit format.
    I don't understand how the pipeline should work in order to enable support for 32-bit processing.
    Any assistance?
    How exactly does it work?

    Hi Royi,
    The pixel format requested depends on the action that triggered the render.  I went back to CS6 and I'm seeing the 32-bit YUV path taken, although for draft renders while scrubbing around in the timeline the 8-bit RGB path is used.
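    One thing worth checking, with the caveat that this is a rough, untested sketch rather than verified SDK code: an AE-style effect is only offered the 32f formats if it has registered them with the host. The suite and function names below (PF_PixelFormatSuite1, ClearSupportedPixelFormats, AddSupportedPixelFormat) are how I remember the SDK samples doing it during PF_Cmd_GLOBAL_SETUP, so treat them as assumptions and confirm against your SDK headers.

    // Sketch: register the pixel formats this effect can process, so that
    // Premiere is allowed to hand it 32-bit float frames at all. If only
    // the 8-bit formats are registered, the 8u branches above are the only
    // ones that will ever run, regardless of the sequence bit-depth setting.
    static PF_Err RegisterPixelFormats(PF_InData *in_dataP, PF_OutData *out_dataP)
    {
        PF_Err err = PF_Err_NONE;
        PF_PixelFormatSuite1 *pfS = NULL;  // assumed suite name; check your SDK headers

        err = AEFX_AcquireSuite(in_dataP, out_dataP,
                                kPFPixelFormatSuite, kPFPixelFormatSuiteVersion1,
                                NULL, (void**)&pfS);
        if (!err && pfS) {
            // The SDK samples typically guard this with a "running in Premiere" check.
            pfS->ClearSupportedPixelFormats(in_dataP->effect_ref);
            pfS->AddSupportedPixelFormat(in_dataP->effect_ref, PrPixelFormat_VUYA_4444_32f);
            pfS->AddSupportedPixelFormat(in_dataP->effect_ref, PrPixelFormat_BGRA_4444_32f);
            pfS->AddSupportedPixelFormat(in_dataP->effect_ref, PrPixelFormat_VUYA_4444_8u);
            pfS->AddSupportedPixelFormat(in_dataP->effect_ref, PrPixelFormat_BGRA_4444_8u);
            AEFX_ReleaseSuite(in_dataP, out_dataP,
                              kPFPixelFormatSuite, kPFPixelFormatSuiteVersion1, NULL);
        }
        return err;
    }

    If something like this is never called from the effect's PF_Cmd_GLOBAL_SETUP handler, the GetPixelFormat call in the render code will keep reporting the 8u formats no matter how the sequence is configured.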

  • CS6 crashes every time I open a RAW file as 16-bit

    Greetings all,
    I'm feeling very frustrated and I hope you can help. Every time I open a RAW file in CS6 (at 16-bit depth) it crashes, so I am unable to open RAW files after editing in Camera Raw. The RAW files open OK in Camera Raw and I can edit them, but then opening them from Camera Raw into Photoshop (by clicking on Open Image) causes a crash.
    Further background: I also had CS3 (the entire suite concurrently) installed. Thinking that might be a problem, I removed all of CS3 using the Adobe Uninstaller tool to see if that would help. Then I re-installed Photoshop CS6 from scratch and then downloaded and installed all updates.
    No help at all. CS6 still crashes every time I open a RAW file if I open it as 16-bit. If I open a .DNG file directly from Photoshop (without using Camera Raw), that crashes too. I'm opening them into Photoshop using Adobe RGB (1998), 16-bit, 3456 x 5184 (17.9 MP), 300 DPI.
    Thinking it might be a memory issue, I checked free RAM using Activity Monitor and I see it has 3.4 GB free with both CS6 and Bridge open, which should be plenty. However I do see by looking in the crash report from the system log (portion included below), that it appears the crash may have resulted from a failed MALLOC (memory allocation).
    Here's the kicker: oddly, when I open the file up in Photoshop at 8-bit depth, it works fine. What the heck is going on?
    Please help!
    Thanks.
    Details:
    Software and Hardware versions:
    Photoshop CS6 Extended (13.0.4 x64)
    Camera Raw 7.3
    Bridge CS6 (5.0.2.4)
    Mac OS X 10.8.2 (Mountain Lion)
    8 GB RAM 27" iMac with 2.66 Ghz Intel Core i5
    ATI Radeon HD 4850 graphics card with 512 MB RAM
    1 Terabyte HD with 750GB free
    Canon EOS Rebel T3i
    Canon EF75-300mm f/4-5.6 Lens
    Top portion of System Log file with the crash data in it:
    Process:         Adobe Photoshop CS6 [4110]
    Path:            /Applications/Adobe Photoshop CS6/Adobe Photoshop CS6.app/Contents/MacOS/Adobe Photoshop CS6
    Identifier:      com.adobe.Photoshop
    Version:         13.0.4 (13.0.4.28)
    Code Type:       X86-64 (Native)
    Parent Process:  launchd [147]
    User ID:         501
    Date/Time:       2013-03-08 20:28:24.471 -0800
    OS Version:      Mac OS X 10.8.2 (12C60)
    Report Version:  10
    Crashed Thread:  0  Dispatch queue: com.apple.main-thread
    Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes: KERN_INVALID_ADDRESS at 0x0000000131df7164
    VM Regions Near 0x131df7164:
        mapped file            0000000131dc9000-0000000131df7000 [  184K] rw-/rwx SM=COW  /private/var/folders/*
    -->
        MALLOC_TINY            0000000131e00000-0000000131f00000 [ 1024K] rw-/rwx SM=COW 

    Chris, OK the complete crash log is at http://pastebin.com/tZ7wummp
    Thanks.

  • Why does Lightroom (and Photoshop) use AdobeRGB and/or ProPhoto RGB as default color spaces, when most monitors are standard gamut (sRGB) and cannot display the benefits of those wider gamuts?

    I've asked this in a couple other places online as I try to wrap my head around color management, but the answer continues to elude me. That, or I've had it explained and I just didn't comprehend. So I continue.
    My confusion is this: everywhere it seems, experts and gurus and teachers and generally good, kind people of knowledge claim the benefits (in most instances, though not all) of working in AdobeRGB and ProPhoto RGB. And yet nobody seems to mention that the majority of people - including presumably many of those championing the wider gamut color spaces - are working on standard gamut displays. And to my mind, this is a huge oversight.
    What it means is, at best, those working this way are seeing nothing different than photos edited/output in sRGB, because [fortunately] the photos they took didn't include colors that exceeded sRGB's real estate. But at worst, they're editing blind, and probably messing up their work. That landscape they shot with all those lush greens that sRGB can't handle? Well, if they're working in AdobeRGB on a standard gamut display, they can't see those greens either. So, as I understand it, the color managed software is going to algorithmically rein in that wild green and bring it down to sRGB's turf (and this I believe is where relative and perceptual rendering intents come into play), and give them the best approximation within the display's gamut capabilities.
    But now this person is editing thinking they're in AdobeRGB, thinking that green is AdobeRGB's green, but it's not. So any changes they make to this image, they're making to an image that's displaying to their eyes as sRGB, even if the color space is, technically, AdobeRGB. So they save and output this image as an AdobeRGB file, unaware that they altered it seeing inaccurate color. The person who opens this file on a wide gamut monitor, in the appropriate (wide gamut) color space, is now going to see this image "accurately" for the first time. Only it was edited by someone who hadn't seen it accurately. So who knows what it looks like. And if the person who edited it is there, they'd be like, "wait, that's not what I sent you!"
    Am I wrong? I feel like I'm in the Twilight Zone. I shoot everything RAW, and I someday would love to see these photos opened up in a nice, big color space. And since they're RAW, I will, and probably not too far in the future. But right now I export everything to sRGB, because - internet standards aside - I don't know anybody who I'd share my photos with, who has a wide gamut monitor. I mean, as far as I know, most standard gamut monitors can't even display 100% sRGB! I just bought a really nice QHD display marketed toward design and photography professionals, and I don't think it's 100. I thought of getting the wide gamut version, but was advised to stay away because so much of my day-to-day usage would be with things that didn't utilize those gamuts, and generally speaking, my colors would be off. So I went with the standard gamut, like 99% of everybody else.
    So what should I do? As it is, I have my Photoshop color space set to sRGB. I just read that Lightroom as its default uses ProPhoto in the Develop module, and AdobeRGB in the Library (for previews and such).
    Thanks for any help!
    Michael

    Okay. Going bigger is better, do so when you can (in 16-bit). Darn, those TIFs are big though. So, ideally, one really doesn't want to take the picture to Photoshop until one has to, right? Because as long as it's in LR, it's going to be a comparatively small file (a dozen or two MBs vs say 150 as a TIF). And doesn't LR's develop module use the same 'engine' or something, as ACR plug-in? So if your adjustments are basic, able to be done in either LR Develop, or PS ACR, all things being equal, choose to stay in LR?
    ssprengel Apr 28, 2015 9:40 PM
    PS RGB Workspace:  ProPhotoRGB and I convert any 8-bit documents to 16-bit before doing any adjustments.
    Why does one convert 8-bit pics to 16-bit? Not sure if this is an apt comparison, but it seems to me that that's kind of like upscaling, in video. Which I've always taken to mean adding redundant information to a file so that it 'fits' the larger canvas, but to no material improvement. In the case of video, I think I'd rather watch a 1080p movie on an HD (1080) screen (here I go again with my pixel-to-pixel prejudice), than watch a 1080p movie on a 4K TV, upscaled. But I'm ready to be wrong here, too. Maybe there would be no discernible difference? Maybe even though the source material were 1080p, I could still sit closer to the 4K TV, because of the smaller and more densely packed array of pixels. Or maybe I only get that benefit when it's a 4K picture on a 4K screen? Anyway, this is probably a different can of worms. I'm assuming that in the case of photo editing, converting from 8 to 16-bit allows one more room to work before bad things start to happen?
    I'm recent to Lightroom and still in the process of organizing from Aperture. Being forced to "this is your life" through all the years (I don't recommend!), I realize probably all of my pictures older than 7 years ago are jpeg, and probably low-fi at that. I'm wondering how I should handle them, if and when I do. I'm noting your settings, ssprengel.
    ssprengel Apr 28, 2015 9:40 PM
    I save my PS intermediate or final master copy of my work as a 16-bit TIF still in the ProPhotoRGB, and only when I'm ready to share the image do I convert to sRGB then 8-bits, in that order, then do File / Save As: Format=JPG.
    Part of the same question, I guess - why convert back to 8-bits? Is it for the recipient?  Do some machines not read 16-bit? Something else?
    For those of you working in these larger color spaces and not working with a wide gamut display, I'd love to know if there are any reasons you choose not to. Because I guess my biggest concern in all of this has been tied to what we're potentially losing by not seeing the breadth of the color space we work in represented while making value adjustments to our images. Based on what several have said here, it seems that the instances when our displays are unable to represent something as intended are infrequent, and when they do arise, they're usually not extreme.
    Simon G E Garrett Apr 29, 2015 4:57 AM
    With 8 bits, there are 256 possible values.  If you use those 8 bits to cover a wider range of colours, then the difference between two adjacent values - between 100 and 101, say - is a larger difference in colour.  With ProPhoto RGB in 8-bits there is a chance that this is visible, so a smooth colour wedge might look like a staircase.  Hence ProPhoto RGB files might need to be kept as 16-bit TIFs, which of course are much, much bigger than 8-bit jpegs.
    Over the course of my 'studies' I came across a side-by-side comparison of either two color spaces and how they handled value gradations, or 8-bit vs 16-bit in the same color space. One was a very smooth gradient, and the other was more like a series of columns, or as you say, a staircase. Maybe it was comparing sRGB with AdobeRGB, both as 8-bit. And how they handled the same "section" of value change. They're both working with 256 choices, right? So there might be some instances where, in 8-bit, the (numerically) same segment of values is smoother in sRGB than in AdobeRGB, no? Because of the example Simon illustrated above?
    Oh, also -- in my Lumix LX100 the options for color space are sRGB or AdobeRGB. Am I correct to say that when I'm shooting RAW, these are irrelevant or ignored? I know there are instances (certain camera effects) where the camera forces the shot as a jpeg, and usually in that instance I believe it will be forced sRGB.
    Thanks again. I think it's time to change some settings..
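    Simon's step-size point is easy to see with a little arithmetic. The sketch below is purely illustrative (the 0.65 "coverage" figure is a made-up stand-in for how much of a wide space an sRGB-sized image might occupy, not a real gamut ratio); it just shows that the same bit depth yields fewer usable steps when the encoding range is wider than the colours you actually have, which is exactly the staircase/banding risk with 8-bit ProPhoto.

    #include <cmath>
    #include <cstdio>

    // How many distinct code values land inside the portion of the
    // encoding range that the image actually uses?
    int codesAvailable(int bits, double coverage) {
        const int total = 1 << bits;               // e.g. 256 for 8-bit
        return (int)std::round(total * coverage);  // codes inside the used range
    }

    int main() {
        const double srgbShareOfWideSpace = 0.65;  // hypothetical fraction, for illustration only

        std::printf("8-bit, space sized to the image:  %d steps\n", codesAvailable(8, 1.0));
        std::printf("8-bit, wider space, same image:   %d steps\n", codesAvailable(8, srgbShareOfWideSpace));
        std::printf("16-bit, wider space, same image:  %d steps\n", codesAvailable(16, srgbShareOfWideSpace));
        return 0;
    }

    Fewer steps across the same visible range means bigger jumps between adjacent values, which is why 16-bit is the usual companion to the wider working spaces.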

  • How to keep color when switching from RGB to Indexed Color Mode?

    I'm working in Photoshop CS6
    In Photoshop I've designed a custom crop screen for use in Magic Lantern (a program that runs with Canon cameras). The project is in 8-bit RGB, and to be able to use it in Magic Lantern I have to change it to Indexed Color (using an .act file provided by them) and then save it as a .bmp. When I change from RGB to Indexed Color (loading the .act file), the color in my Photoshop project turns to grey.
    I haven't worked with Indexed colors or changing the color mode etc.
    My question is... how can I keep the color (or a color similar, compatible color) to what I created in RGB when I convert it to Indexed Color using the .act file?
    Any suggestions?
    Thanks

    Attached are 4 jpgs:
    - The Crop Screen as it appears in the PSD file (RGB color/8-bit)
    - The Index Color .act file that I'm loading
    - The settings after I have loaded the act file
    - the Crop Screen with the blue color missing
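    For background on why this happens: Indexed Color keeps only the colours listed in the palette (an .act file is essentially 256 RGB entries), and every pixel gets remapped to the nearest entry, so if the Magic Lantern palette contains no saturated blue, your blue snaps to the closest thing it does have, which can easily be a grey. The sketch below is a hypothetical illustration of that nearest-entry mapping, not Photoshop's actual conversion code (which also offers dithering and smarter matching).

    #include <array>
    #include <cstdint>
    #include <cstdio>

    struct RGB { uint8_t r, g, b; };

    // Return the index of the palette entry closest to p (simple squared
    // Euclidean distance in RGB). This is the basic operation behind an
    // indexed-colour conversion: colours missing from the palette are
    // replaced by whatever entry happens to be nearest.
    uint8_t nearestIndex(const std::array<RGB, 256>& palette, RGB p) {
        int best = 0;
        long bestDist = -1;
        for (int i = 0; i < 256; ++i) {
            long dr = (long)palette[i].r - p.r;
            long dg = (long)palette[i].g - p.g;
            long db = (long)palette[i].b - p.b;
            long d = dr * dr + dg * dg + db * db;
            if (bestDist < 0 || d < bestDist) { bestDist = d; best = i; }
        }
        return (uint8_t)best;
    }

    int main() {
        // Stand-in palette of 256 greys, mimicking a mostly-neutral .act file.
        std::array<RGB, 256> palette{};
        for (int i = 0; i < 256; ++i)
            palette[i] = { (uint8_t)i, (uint8_t)i, (uint8_t)i };

        RGB blue{ 40, 80, 220 };
        // With no blue in the palette, the saturated blue lands on a mid grey.
        std::printf("blue maps to palette index %d\n", nearestIndex(palette, blue));
        return 0;
    }

    So the practical fix is usually to make sure the colours you need are actually present in the .act palette you load, or to build the artwork from the palette's own colours in the first place.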

  • Rendering as 10-bit/12-bit color (JPEG 2000?)

    I'm not trying to create a digital cinema package (DCP) per se, but I've got a few questions related to rendering in bit depths greater than 8-bit.
    NOTE:  The digital cinema standard (PDF link, see page 31) for package distribution is JPEG 2000 files with 12-bit color (in XYZ color space)...packaged into the MXF container format.
    1.  I'm wondering if it's possible to render to any 10-bit or 12-bit color format within After Effects.  Let's assume my source is a 16bpc or 32bpc comp, and I render it to the QuickTime container and select JPEG2000 or one of the other variants.  None of them seems to go above "millions of colors", or 8-bit.  (The one that has the option of "millions of colors plus" still only renders to planar 8-bit [YUV 4:2:0] when I inspect its streams and metadata.)
    2.  In the QuickTime container list, what are "Motion JPEG-A" and "Motion JPEG-B"?  These aren't standards with which I'm familiar, and I can't seem to find any detail in the documentation as to what these actually are.  (In all my tests, they're limited to 8-bit color.)
    3.  Is the JPEG 2000 codec that's available via QuickTime the same JPEG 2000 codec that's literally the full ISO/IEC 15444 or SMPTE 429-4 standard, or some crippled bits-and-pieces version?
    Obviously, I can render to TIFF or OpenEXR sequences in 16bpc or full float...I was just wondering if it was possible to get 10-bit or 12-bit color in a standard container via After Effects CC or Premiere Pro CC (via Media Encoder CC). 
    I see the "render at maximum bit depth" option in Premiere Pro, but I've never found a format/container that would output anything higher than 8-bit color...even with 16bpc or 32bpc input media.
    Thanks for your help and insight.

    If you want higher bit depth J2K, you have to render to image sequences. The baseline QT implementation is from the stone age. Perhaps there's some commercial third-party codec out there, or if you have such hardware you could use Blackmagic's, but off the bat there is nothing usable in QT as far as I know.
    Mylenium
