10 bit render

I've just completed a 09'00" film made up of still jpeg images (zooming, panning etc) and graphics, which is destined to be put onto a DVD for showing on a domestic plasma TV.
Throughout the edit I have been working on a DV PAL timeline, but now, because of the high-res jpeg source images, I have been advised to upgrade the sequence to 10 bit 4.2.2 to ensure as high-quality a source QT film as possible to go into Compressor to make the MPEG-2 for the DVD.
I have done this successfully for another, shorter (02'30", if that is at all relevant) film made up of similar jpegs and graphics, but whenever I try to re-render the DV sequence for the longer film up to 10 bit 4.2.2, it stops about half way through with the error message:
"Codec not found. You may be using a compression type without the corresponding hardware card"
If anyone could suggest what's going on and how to fix it, I would be grateful.
Cheers,
Hamish
Intel iMac FCP 5.1.2   Mac OS X (10.4.8)  

Dear Jerry,
Thank you for your reply.
But.. can I not just change the Seq Settings from DV PAL to 10 bit 4.2.2, then blow the renders by turning off the view icon on the timeline? And then when I re-render, it will all be 10 bit?
Also, I've just looked into the source stills a bit more, and the one the render stops on is a .tif, not a .jpeg. Would that make a difference, do you think? Should all be well if I convert the .tif to .jpeg in GraphicConverter or Photoshop and then reconnect the media to the offending still?
Cheers,
Hamish

Similar Messages

  • How do you get 32-bit float effects to work with an alpha channel?

    I make a few bugs and lower thirds, but I'm having a lot of trouble getting 32-bit float effects to correctly export with transparency.
    When I make semitransparent animations with bright lighting effects, they look great in AE but I can't seem to find any way to bring them over to Final Cut and get them to look like they should.
    I've attached this example of a bug with a screenshot from AE on the left and from FCP on the right. The colors dim and the vibrant blue gradients get flattened into a dull greenish color.
    So far as I can tell, all my alpha and color management settings are correct. I've tried endless settings options and codec options to try to fix this, to no avail.
    You can create an extreme case as an example by creating a semitransparent shape, cranking the color way up to a saturated superwhite color, and moving it around with motion blur. The render looks pretty much just like an 8/16-bit render, and nothing like it did in the canvas.
    I have a feeling the way AE multiplies the superwhite color values with the alpha channel gets "lost in translation" when it is rendered to four 8-bit channels. Is there any way to get it to look the way it does in the canvas?
    Thanks,
    Clint

    Thanks so much for your ideas. I'd already tried experimenting with a number of color management options, including 'Preserve RGB' and turning off 'Convert to Linear Light' but they didn't fix it. I had also already tried using the "None" option from the codec menu.
    I pulled it back into AE, and it looks the same as it did in FCP.
    I'm starting to think the issue has something to do with AE rendering to a file differently than the way it renders to the canvas. For example, here's a fast moving 50% opaque superwhite shape in the AE canvas:
    I rendered this using Animation + Alpha using Best settings and 32-bit float, and reimported it into AE:
    I think the answer may lie in the fact that the first image looks solid white in some places, even though it is 50% transparent. It's almost like the excessive RGB values "bleed" over to the alpha channel. The rendered image (probably correctly) looks grey because it represents white at 50% opacity.
    What I want to do though, is find a way to make my render look like my canvas. So it seems I need to do two things to the image:
    1. Find a way to increase opacity in areas of excessive white, so that they are not grey in the render.
    2. Find a way to "bake" the superwhite colors to a level below 100% white so that they show up in the render.
    I tried the "Alpha from Max Color" effect and this seems to get kind of close, but unfortunately it discards other parts of the alpha channel (like drop shadows).
    It might also be helpful to know that when I have AE output to an external NTSC monitor (via Intensity card) the monitor shows the exact same artifacts as the render.
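Clint's "lost in translation" theory can be sketched numerically. This is a minimal toy model (assuming premultiplied float compositing and a naive 8-bit quantizer; AE's real pipeline is more involved):

```python
# Toy model (assumption: premultiplied-alpha compositing with naive
# 8-bit quantization; not AE's actual pipeline). A 50%-opaque
# "superwhite" pixel survives in 32-bit float but clips once the
# frame is written to 8-bit channels.

def to_8bit(x):
    """Quantize a 0.0-1.0 float channel to an 8-bit code value."""
    return max(0, min(255, round(x * 255)))

alpha = 0.5
superwhite = 2.0                    # 200% white: legal in 32-bit float
premultiplied = superwhite * alpha  # 1.0 -> reads as solid white on canvas
assert premultiplied == 1.0

# An 8-bit file clamps the color to 1.0 BEFORE alpha is applied
# downstream, so the composite sees mid grey instead of solid white:
clamped = min(superwhite, 1.0)
recomposited = to_8bit(clamped * alpha)
print(recomposited)                 # 128: mid grey, not 255
```

That matches the symptom described: the canvas (float, premultiplied) shows solid white, while the 8-bit render can only show 50% grey.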

  • Pixelated FCP Render of linked .motn file using LiveFont

    I have a clip that I've used the "Send To" feature to open in Motion. While there, I added two bits of text using LiveFonts (blueprint and timer).
    It looks crisp as can be in Motion. I save it, Apple-Tab back into FCP and get the momentary "Media Offline" while it updates.
    In the canvas, the frames look every bit as good and sharp as in Motion. I then render the sequence and the results are hideous... but only on the "timer" font. I had the same results using the "flip counter" LiveFont. The "blueprint" font, however, looks fine before and after the render.
    The same fonts rendered in LiveType look gorgeous. I can overlay them in Motion just fine. I can use them directly in Motion and export a lossless quicktime (Animation format) and they look fine.
    If I use Send To, though, to take advantage of the convenient workflow, I'm required to render in FCP and the results are hideous.
    Does anyone know if this is a common problem or if there is some kind of workaround?
    Here is a link to a sample JPEG showing the canvas window and the rendered output in FCP:
    http://www.halfpress.com/images/font-problems.jpg
    Keep in mind that these two qualities are visible within FCP side by side using the same data. I choose a frame in my crappy rendered output, hit F to match-frame, and am presented with pristine on the left and hideous on the right... but again, only for one of the fonts.
    Appreciate any help!
    - Aaron

    Thanks. Actually, that seemed to do it... or very close. I also tried the Animation codec and it appears to be an even more precise match to the Motion source than the uncompressed 8-bit.
    I had ruled that out initially because exports I did from Motion itself to the DV format looked semi-cruddy, but nowhere NEAR as bad as what FCP was rendering. That made me start thinking FCP was internally hosing the source when it renders.
    I will say that FCP is rendering the drop shadows differently from the Motion source than Motion is previewing and rendering internally.
    So is this procedure of running the finished sequence through a new timeline with a better codec pretty standard?
    Another thing I had tried before posting my original question was exporting my working DV sequence with Compressor to a better, higher-quality format to see if it improved. I was doing this to see if only the editing process looked bad, and maybe it would internally re-render through Compressor to a higher quality. It seems it just renders from the nasty DV source even when going through Compressor.
    So now I assume I need to drop it in a new sequence within FCP using Animation or Uncompressed 8-bit, render, and THEN export to Compressor?
    Trying to ascertain if this is a hack/workaround or legitimately "the way things should be".
    Thanks!
    - aaron

  • Color key doesn't work after render

    After rendering, the color key is off. It's perfect in the canvas before rendering, but as soon as I hit render, it goes back to the un-keyed blue screen.
    What is wrong?

    If you're rendering in 10-bit, try changing it to 8-bit and see if that works. There are some bugs in the 10-bit render.
    RT is RGB, so you might not get the error then...
    Patrick

  • Using Windows Server 2008 with After Effects as Render Slave?

    Hi guys,
         We have a fair number of servers that run Windows Server 2008 64bit.  We wanted to further utilize these boxes and wanted to know if it's possible to use them as render slaves with After Effects?  Anybody doing this?  We couldn't find anything for system requirements on Windows Server 2008.
    Thanks for everyone's input.
    Jason

    Hi Todd,
         Another question for ya. Does it matter if we have some machines still running 32-bit render slaves for After Effects while implementing a 64-bit CS5 After Effects master? Can you mix it up, or do all the machines have to be upgraded to the same version, even though they are just render slaves? We are just doing some thinking on reconfiguration and what to do so it's right the first time.
    Thanks for the input.
    Jason

  • Help Understanding Codecs for V

    My general problem is that when using the Avaya VoIP client, my outgoing voice becomes garbled to the end user after a few minutes of conversation. My CPU, an Intel Pentium 4 (3 GHz), also kicks up to 50% utilization during a conversation, although when the conversation starts, it usually sits at about 7%. I have a high-speed internet line: 5M download, 52 K upload. My sound card is an Audigy 2 ZS. I have eliminated the obvious, like ensuring other programs are not taking up CPU. I also have 2 GB of high-speed RAM. I have also verified that the utilization comes from the IP phone client's process in Task Manager.
    I am wondering since the CPU is being hammered, if I am using a software codec for the VoIP communication link. My device manager shows 3 available audio codecs. I assume that a sound card has its own embedded digital signal processing capabilities that should be capable of doing audio compression. So, my question for any hardware sound blaster guru out there is: Does the sound card do G.729 or G.7 audio compression, or is audio compression for VoIP done by a software codec?
    Any comments regarding how audio codecs work in regards to VoIP will be appreciated.
    Thanks,
    gerryj
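A back-of-envelope on the bandwidth side shows why the codec choice matters on that 52 K uplink. This is generic RTP arithmetic under stated assumptions (20 ms packets, 40 bytes of RTP/UDP/IP headers), not anything Avaya-specific:

```python
# Back-of-envelope per-call bandwidth (assumptions: 20 ms packets and
# 40 bytes of RTP/UDP/IP headers per packet; no Ethernet overhead).
# Generic VoIP arithmetic, not specific to the Avaya client.

def voip_bandwidth_kbps(codec_kbps, packet_ms=20, header_bytes=40):
    """IP-level bandwidth of one voice stream, in kbps."""
    packets_per_sec = 1000 // packet_ms
    payload_bytes = codec_kbps * packet_ms // 8   # codec bits in one packet
    return packets_per_sec * (payload_bytes + header_bytes) * 8 / 1000

print(voip_bandwidth_kbps(64))  # 80.0 kbps for G.711 (uncompressed PCM)
print(voip_bandwidth_kbps(8))   # 24.0 kbps for G.729 (compressed)
```

On a 52 kbps uplink a G.711 stream simply does not fit, so the client would plausibly fall back to a compressed codec like G.729, which is done in software on the CPU when the sound card doesn't offer it in hardware.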

    Fuzzy Barsik wrote:
    With more than high probability, common people won't be able to play back anything in an MXF container.
    So which format would YOU recommend I hand them for a playable safe-keeping format? (Some are on PC and some are on Mac, if that makes any difference.)
    Fuzzy Barsik wrote:
    If you properly graded your 8-bit footage, i.e. in a 32-bit working space (in terms of PrPro that means enabling 'Maximum Bit Depth' in Sequence Settings), 10-bit render (for which you need to check 'Maximum Bit Depth' in the Export Settings dialog, unless you render into DNxHD in a MOV container) preserves quality better than 8-bit. However, if rendering to 8-bit doesn't result in colour banding or something like that, common people will hardly see any difference.
    That's interesting. I've never checked that Max Bit Depth box in Sequence Settings. I don't do much grading. Color correction, yes, but nothing too crazy or with tons of layers. I figured checking that would slow things down, and speed is paramount. On occasion I'll notice some color banding in my footage (like in the gradient of the sky), but not often, and no one ever complains. I used to check Max Bit Depth in the export settings not knowing what I was doing, but it sounded important so I did... but then I stopped when I had THIS (http://forums.adobe.com/message/4773556) issue with one of my exports. I changed many settings to get it resolved, and it only went away when Max Bit Depth was unchecked (granted, now that you mention it, I didn't have Max Bit Depth checked during editing in the sequence settings... hmmm).

  • Putting a HDV video onto DVD to watch on a TV.

    I filmed some friends performing at a pub. My camera is a Sony HDR-HC9 and I set it to HDV 1080i.
    To edit I have an iMac, Intel Core i3, Mem 4GB, ATI Radeon HD 5670, 1TB HDD and I have FCP7.
    I used DVD Studio Pro 4 to copy it onto a DVD, 7 times, but I cannot get the right configuration so that it plays on the TV. What am I doing wrong?

    Are you reasonably familiar with FCP? I will assume so for now. Ingest the HDV as ProRes 422 and create a sequence to match the clip properties, then place that onto the timeline. I will leave out all the editing; let us assume you are content that you have done all you need to do (cut out or added any bits). Render (not really necessary if the sequence matches the clip, but good practice), then go File > Export (Command-E) and, using current settings, save the file out.
    Ctrl- (right-) click the file and open it with Compressor. In Compressor, use the default Create DVD template. Change the two output files (.m2v and .ac3) to the same name, e.g. pub_night.m2v and pub_night.ac3, and submit. It should then create a DVD, as that is part of the template's scripting. If not, you will have the two files (video .m2v and audio .ac3); open DVDSP and build a basic DVD in there.
    An 'old' but still relevant alternative: similar, but without converting the HDV into ProRes. (The conversion into ProRes is really there for a number of reasons, but basically ProRes is designed for editing, whereas HDV is very resource-intensive to edit natively.)
    http://www.kenstone.net/fcphomepage/hdv_to_sddvd.html
    Message was edited by: JimKells - found the Ken Stone page

  • Firefox 6 running under Windows 7 64-bit does not render fonts correctly.

    The default sans-serif font for Firefox is Arial. Running Firefox 6 under Windows 7 32-bit, the font renders properly, but under 64-bit Win7, the font is too heavy, and bold or strong makes no difference. I run Win7 64-bit on a laptop computer and use Firefox 5 on it, and the fonts render properly. Possibly a Firefox 6/64-bit problem?

    Hi strangerland, don't give up hope -- I'm using Firefox 36.0.4 on Windows 7 right now.
    That said, I understand troubleshooting fatigue and when you find time to think about Firefox again, and have the opportunity to shut down and restart Windows, you might try testing in Windows Safe Mode with Networking, a mode that blocks some external programs that can interfere with Firefox.
    * [http://windows.microsoft.com/en-us/windows/advanced-startup-options-including-safe-mode#1TC=windows-7 Advanced startup options (including safe mode) - Windows Help]
    * [http://windows.microsoft.com/en-us/windows/start-computer-safe-mode#start-computer-safe-mode=windows-7 Start your computer in safe mode - Windows Help]

  • FCP6: any problem using "Render in 8-bit YUV" instead of "Render 10-bit material in high-precision YUV" video processing?

    I have a long and complex 1080p FCP6 project using ProRes 422. It is made up of mostly high-resolution stills and some 1280i video clips. Rendering has always been a nightmare: it takes extremely long and makes frequent mistakes which have to be re-rendered. Just today, I discovered the option of selecting "Render in 8-bit YUV" instead of "Render 10-bit material in high-precision YUV" video processing. The rendering time is cut down to a fraction, and even on a large HD monitor I can tell no difference in quality. I am getting ready to re-render the entire project in 8-bit and just wanted to check whether changing to 8-bit would pose some problems and/or limitations that I'm not aware of. This is not a broadcast or Hollywood film thing, but it does represent my artwork and I burn it to Blu-ray, so I do want it to look good. Like I said, I can tell no difference between the 8-bit and 10-bit color depth with the naked eye. Thank you all for all the help you have always given me with my many questions in the past.

    Unless you have a 10-bit monitor (rare and very expensive), you cannot see the difference, as your monitor is 8-bit.
    10-bit is useful for compositing and color grading. Otherwise, 8-bit is fine for everything else.
    x
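The arithmetic behind that reply, as a quick sketch (plain bit-depth math, nothing FCP-specific):

```python
# Tonal steps per channel at common bit depths: a 10-bit pipeline has
# 4x the steps of 8-bit, which matters during grading/compositing even
# though an 8-bit display quantizes the result on playback.

def levels(bits):
    """Distinct code values per channel at a given bit depth."""
    return 2 ** bits

for bits in (8, 10, 12):
    print(f"{bits}-bit: {levels(bits)} levels/channel, "
          f"{levels(bits) ** 3:,} RGB colors")
# 8-bit: 256 levels/channel, 16,777,216 RGB colors
# 10-bit: 1024 levels/channel, 1,073,741,824 RGB colors
# 12-bit: 4096 levels/channel, 68,719,476,736 RGB colors
```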

  • Maximum Bit Depth /Maximum Render Quality  Questions

    Maximum Bit Depth
    I was told that if my project contains high-bit-depth assets generated by high-definition camcorders, I should select Maximum Bit Depth, because Adobe Premiere Pro uses all the color information in these assets when processing effects or generating preview files. I'm capturing HDV using the Matrox RTX-2 hardware in Matrox AVI format.
    When I finally export my project using Adobe Media Encoder CS4, will selecting Maximum Bit Depth provide better color resolution once I post to Blu-ray format?
    Maximum Render Quality
    I was told that by using Maximum Render Quality, I maintain sharp detail when scaling from large formats to smaller formats, or from high-definition to standard-definition formats, and that it also maximizes the quality of motion in rendered clips and sequences, rendering moving assets more sharply. It's my understanding that at maximum quality, rendering takes more time and uses more RAM than at the default normal quality. I'm running Vista 64-bit with 8 GB of RAM, so I'm hoping to take advantage of this feature.
    Will this also help improve resolution when I finally export my project using Adobe Media Encoder CS4 and post to Blu-ray format?
    Does it look like I have the specs to handle Maximum Bit Depth and Maximum Render Quality when creating a new HDV project with the support of the Matrox RTX 2 Hardware capturing in Matrox AVI format? See Below Specs.
    System Specs
    Case: Coolmaster-830
    Op System: Vista Ultimate 64 Bit
    Edit Suite: Adobe Creative Suite 4 Production Premium Line Upgrade
    Adobe Premiere Pro CS 4.0.1 update before installing RT.X2 Card and 4.0 tools
    Performed updates on all Adobe Production Premium Products as of 03/01/2009
    Matrox RTX2 4.0 Tools
    Main Display: Dell 3007 30"
    DVI Monitor: Dell 2408WFP 24"
    MB: ASUS P5E3 Deluxe/WiFi-AP LGA 775 Intel X38
    Display Card: SAPPHIRE Radeon HD 4870 512MB GDDR5 Toxic ver.
    PS: Corsair|CMPSU-1000HX 1000W
    CPU: INTEL Quad Core Q9650 3G
    MEM: 2Gx4|Corsair TW3X4G1333C9DHXR DDR3 (8 Gigs Total)
    1 Sys Drive: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB
    Cache SATA 3.0Gb/s
    2 Raid 0: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB Cache SATA 3.0Gb/s using Intel's integrated RAID controller on MB

    Just some details that I find useful on maximum bit depth:
    You really need it, even with 8-bit source files, when using heavy grading/multiple curves/vignettes. If after grading you see banding, go to Sequence > Sequence Settings from the top menu and check "Maximum Bit Depth" (ignore the performance popup), then check your preview again (it will change in a second) to see if banding is still present in 32-bit mode. If there is no banding, you must check it when exporting; if banding is still there, change your grading, then uncheck it to continue with editing.
    Unfortunately, Maximum Bit Depth exporting is extremely time-consuming, but it can really SAVE YOUR DAY when facing artifacts after heavy grading, by completely or almost completely eliminating banding and other unwanted color distortions.
    Use it only for either small previews or the really final output.
    Best Regards.
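The banding check described above can be simulated in a few lines. This is a sketch under stated assumptions (a gamma curve standing in for "heavy grading", simple rounding for 8-bit quantization), not Premiere's actual processing:

```python
# Why heavy grading in 8-bit shows banding: a shadow lift followed by a
# contrast crush is a near-identity in float, but with an 8-bit
# intermediate frame it merges code values. Fewer distinct levels over
# the same range is exactly what reads as bands in a sky gradient.

def quantize(x):
    """Round one 0..1 sample to the 8-bit grid."""
    return round(x * 255) / 255

lift  = lambda x: x ** 0.4    # shadow lift
crush = lambda x: x ** 2.5    # contrast crush (exact inverse: 0.4*2.5 = 1)

ramp = [i / 255 for i in range(256)]            # a full 8-bit gradient

eight_bit = {quantize(crush(quantize(lift(x)))) for x in ramp}
float_32  = {quantize(crush(lift(x))) for x in ramp}

print(len(float_32), len(eight_bit))  # float keeps all 256 levels;
                                      # the 8-bit intermediate loses some
```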

  • PE 4 crashes on render, export and preview with Vista 64 bit

    I just installed my PE 4 on a new Windows Vista 64-bit machine with the i7 920 processor and a 512 MB ATI Radeon video card. PE 4 worked fine on my XP machine, but I cannot get it to render, export, or even let me preview on this new machine. I have followed the advice of updating all drivers and software, and uninstalling and reinstalling with nothing running, and I get the same error repeatedly: it will render and progress for about 5 seconds and then I get "Adobe Premiere Elements.exe has stopped working" and it closes. The video clips are from a flash memory camera and the presets are set for widescreen - flash memory. Everything works normally until I try to view or render the files. Any suggestions?

    You say this is from a flash memory camcorder? Can  you be more specific?
    Webbie camcorders, like the Flip, may shoot in hi-def, but they also record to a non-standard codec that can cause some of the problems you're describing.

  • Bit Depth and Render Quality

    When you finally export media to some sort of media format via the encoder, do the project's preview Bit Depth and Render Quality settings affect the output file?
    I know there is "Use Preview files" setting in the media exporter dialogue but I just want to be sure of what I am doing.

    Jeff's response is my perspective, as well, which is both backed up by my own tests and the official Adobe word.
    Exhibit A: My Tests
    That is DV footage with a title superimposed over it in a DV sequence, with a Gaussian blur effect (the Premiere accelerated one) applied to the title; all samples are from that sequence exported back to DV. This was to show the relative differences of processing between software and hardware MPE, Premiere export and AME queueing, and the effect of the Maximum Bit Depth and Maximum Render Quality options on export (not the sequence settings; those have no bearing on export).
    The "blooming" evident in the GPU exports is due to hardware MPE's linear color processing. I think it's ugly, but that's not the point here. Further down the line, you can see the effect of Maximum Bit Depth (and MRQ) on both software MPE and hardware MPE. I assume you can see the difference between the Maximum Bit Depth-enabled export and the one without. Bear in mind that this is 8-bit DV footage composited and "effected" and exported back to 8-bit DV. I don't understand what your "padding with zeroes" and larger file size argument is motivated by--my source files and destination files are the same size due to the DV codec--but it's plainly clear that Maximum Bit Depth has a significant impact on output quality. Similar results would likely be evident if I used any of the other 32-bit enabled effects; many of the color correction filters are 32-bit, and should exhibit less banding, even on something 8-bit like DV.
    Exhibit B: The Adobe Word
    This is extracted from Karl Soule's blog post, Understanding Color Processing: 8-bit, 10-bit, 32-bit, and more. This section comes from Adobe engineer Steve Hoeg:
    1. A DV file with a blur and a color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit.
    2. A DV file with a blur and a color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than the previous example.
    3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be still higher quality because the final output format supports greater precision.
    4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data.
    5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full precision through the whole pipeline.
    6. A title with a gradient and a blur on an 8-bit monitor. This will display in 8-bit, and may show banding.
    7. A title with a gradient and a blur on a 10-bit monitor (with hardware acceleration enabled). This will render the blur in 32-bit, then display at 10-bit. The gradient should be smooth.
    Bullet #2 is pretty much what my tests reveal.
    I think the Premiere Pro Help Docs get this wrong, however:
    High-bit-depth effects
    Premiere Pro includes some video effects and transitions that support high-bit-depth processing. When applied to high-bit-depth assets, such as v210-format video and 16-bit-per-channel (bpc) Photoshop files, these effects can be rendered with 32-bpc pixels. The result is better color resolution and smoother color gradients with these assets than would be possible with the earlier standard 8-bit-per-channel pixels. A 32-bpc badge appears to the right of the effect name in the Effects panel for each high-bit-depth effect.
    I added the emphasis; it should be obvious after my tests and the quote from Steve Hoeg that this is clearly not the case. These 32-bit effects can be added to 8-bit assets, and if the Maximum Bit Depth flag is checked on export, those 32-bit effects are processed as 32-bit, regardless of the destination format of the export. Rendering and export/compression are two different processes altogether, and that's why using the Maximum Bit Depth option has far more impact than "padding with zeroes." You've made this claim repeatedly, and I believe it to be false.
    Your witness...
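The difference between Hoeg's cases 1 and 2 can also be sketched numerically. This is a toy model (a 3-tap box blur and a gain standing in for the real effects; the function names are illustrative, not Premiere's API):

```python
# Numeric sketch of cases 1 and 2 above, NOT Premiere's actual code:
# the same effect chain with an 8-bit intermediate frame (case 1) vs.
# a float intermediate (case 2), both written to 8-bit at the end,
# measured against a full-precision reference.

def q8(v):
    """Quantize one 0..1 sample to the nearest 8-bit code value."""
    return round(v * 255) / 255

def blur(px):
    """Simple 3-tap box blur with clamped edges."""
    n = len(px)
    return [(px[max(i - 1, 0)] + px[i] + px[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def correct(px, gain=1.8):
    """Stand-in 'color corrector': a gain with clipping at white."""
    return [min(1.0, v * gain) for v in px]

src = [q8(i / 40) for i in range(24)]            # an 8-bit source ramp

case1 = [q8(v) for v in correct([q8(v) for v in blur(src)])]  # 8-bit chain
case2 = [q8(v) for v in correct(blur(src))]                   # float chain
ref   = correct(blur(src))                                    # no quantizing

err1 = max(abs(a - b) for a, b in zip(case1, ref))
err2 = max(abs(a - b) for a, b in zip(case2, ref))
print(err1 > err2)   # True: the mid-chain quantize adds error that the
                     # float intermediate never incurs
```

Even on an 8-bit-in, 8-bit-out chain like this, the float intermediate lands closer to the reference, which is the whole argument for the Maximum Bit Depth flag.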

  • Turning on Render at Maximum Bit Depth and Maximum Render Quality crashes render every time

    I've tried a few times to render an H.264 version of my RED media project with Maximum Bit Depth and Maximum Render Quality. Premiere crashes every time. I have GPUs enabled. Are people using these settings with RED media and rendering successfully?

    To answer your specific question did you see the tooltip?
    I believe it allows for 32-bit processing (16-bit if unchecked), per the project settings help file at http://helpx.adobe.com/premiere-elements/using/project-settings-presets.html
    Maximum Bit Depth
    Allows Premiere Elements to use up to 32‑bit processing, even if the project uses a lower bit depth. Selecting this option increases precision but decreases performance.
    The help file for export is somewhat less informative about what it actually does but does point out that it is the color bit depth - http://helpx.adobe.com/media-encoder/using/encode-export-video-audio.html
    (Optional) Select Use Maximum Render Quality or Render At Maximum Bit Depth. Note:  Rendering at a higher color bit depth requires more RAM and slows rendering substantially.
    In practice the simplest suggestion is to export twice - once with / once without the setting and compare the time taken and perceived quality.
    Cheers,
    Neale
    Insanity is hereditary, you get it from your children

  • Maximum bit depth-maximum render quality when dynamic linking

    Hi
    A bit confused by the use of Maximum bit depth and Maximum render quality as used both in Sequence Settings and also as options when rendering in AME.
    1 Do you need to explicitly enable these switches in the sequence for best quality or, do you simply need to switch them on in AME when you render in Media Encoder?
    2 When dynamic linking to After Effects, when should you use an 8 bit vs 16 or 32 bit working space, and, how does this bit depth interact with the maximum bit depth, maximum render quality in PPro?

    Hi jbach2,
    I understand your confusion. I'm like that most of the time I'm working. *chuckle* The two settings you mentioned are two completely different parameters affecting (or is it effecting?) your video. You do not need to enable them within the sequence itself unless you want to preview video on your program monitor at the highest quality. I personally don't recommend it, as it's a tremendous resource hog (the program even warns you when you try to click them) and unnecessary for improving final output. Again, do not enable these options in your sequence settings if you only want a high-quality export. Doing so will greatly reduce your editing performance unless you have a high-end system... and even then I don't think it's worth it unless you're editing on a huge screen with a Director who wants to see everything at maximum quality during the edit process.
    Keeping it simple...
    Resizing your final output video? Use Maximum Render Quality.
    Starting from or working with high-bit-depth sources? Use Max Bit Depth.
    When/where do I enable these? In the AME only. ^_^
    Why?:
    Enabling the Max bit and Max render only needs to be done when you are exporting.  They both serve different functions. 
    Max Render aids in the scaling/conversion process only.  My understanding is that you never need to enable the Max Render Quality (MRQ) unless you are exporting in a format/pixel ratio different from your original video.  For example, when rendering a 1080p timeline out to a 480p file format, you'll want to use MRQ to ensure the best scaling with the least amount of artifacts and aliasing.  If you're exporting at the same size you're working with, DON'T enable MRQ.  It will just cost you time and CPU. Its only function is to do a high quality resizing of your work.
    Maximum bit depth increases the color depth that your video is working with and rendering to.  If you're working with video that has low color depth, then I don't believe it will matter.  However, if you're working with 32 bit color on your timeline in PPro and/or After Effects, using lots of graphics, high contrast values, or color gradients, you may want to enable this option. It ultimately depends on the color depth of your source material.
    The same applies to After Effects.
    Create something in AE like a nice color gradient. Now switch the same project between 8, 16, and 32-bit depth, and you will see a noticeable difference in how the bit depth affects your colors and the smoothness of the gradient.
    Bit depth affects how different plugins/effects change your overall image. Higher depth means more colors to work with (and, incidentally, more CPU needed).
    Just remember that "DEPTH" determines how many colors you can "fill your bucket with" and "QUALITY" is just that, the quality of your "resize".
    http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html
    Check out this adobe blog for more info on color depth ^_^  Hope that helps!
    ---- a lil excerpt from the blog I linked to above ----
    Now, 8-bit, 10-bit, and 12-bit color are the industry standards for recording color in a device. The vast majority of cameras use 8-bits for color. If your camera doesn’t mention the color bit depth, it’s using 8-bits per channel. Higher-end cameras use 10-bit, and they make a big deal about using “10-bit precision” in their literature. Only a select few cameras use 12-bits, like the digital cinema camera, the RED ONE.
    Software like After Effects and Premiere Pro processes color images using color precision of 8-bits, 16-bits, and a special color bit depth called 32-bit floating point. You’ve probably seen these color modes in After Effects, and you’ve seen the new “32″ icons on some of the effects in Premiere Pro CS5.
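The "quality of your resize" point can be illustrated with a generic resampling sketch (nearest-neighbor vs. a box filter; Premiere's actual MRQ resampler is more sophisticated, so treat this only as an analogy):

```python
# Downscaling a thin-striped row 4:1: point sampling drops most source
# pixels entirely and aliases, while area averaging accounts for every
# source pixel -- the kind of difference a high-quality scaler is for.

row = [255 if i % 2 else 0 for i in range(32)]   # 1-px black/white stripes

nearest = [row[i * 4] for i in range(8)]                   # point sampling
area    = [sum(row[i*4:(i+1)*4]) // 4 for i in range(8)]   # box filter

print(nearest)  # [0, 0, 0, 0, 0, 0, 0, 0] - stripes alias to solid black
print(area)     # [127, 127, ...] - averages to the true mid grey
```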

  • 16 bit linear footage bleached out after render

    Hey there,
    I've got a linear 16-bit multipass-layered PSD sequence which I imported into After Effects.
    The AE project is set to a 16-bpc working space, "sRGB", with "linearize workflow" on.
    It looks great in After Effects. I even got rid of extreme banding in RAM preview (Preferences > Previews > Color Management set to "More Accurate").
    But when I render my tweaked comp to QuickTime, it appears extremely washed out.
    I tried the Animation codec as well as H.264, both at 100% quality.
    Only when I checked "Preserve RGB" in the color management of the output module was it not washed out. Instead it looked somehow too dark: much better than the bleached version, but still not right.
    What am I doing wrong?
    TYVM!!
    Edit: sometimes only the first few frames are "washed out" and the rest is fine... o_O
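The washed-out/too-dark pair of symptoms is consistent with a gamma mismatch, which can be sketched with the standard sRGB curve (an illustration using IEC 61966-2-1 math, not AE's internal color pipeline):

```python
# Rough model of the two symptoms: mid grey is ~0.18 in linear light and
# ~0.46 once sRGB-encoded. Encoding an already-encoded value a second
# time pushes it to ~0.71 ("washed out"), while skipping the encode
# entirely (Preserve RGB) shows the raw 0.18 ("too dark").

def srgb_encode(x):
    """Linear-light 0..1 -> sRGB-encoded 0..1."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

mid_grey_linear = 0.18
once  = srgb_encode(mid_grey_linear)   # ~0.46: the correct display value
twice = srgb_encode(once)              # ~0.71: double-encoded, washed out

print(round(once, 2), round(twice, 2))
```

So the fix generally lies in making the output module apply the linear-to-display transform exactly once, which is what the color management settings discussed above control.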

    Footage looks and sounds great (sync is good) in iMovie 6 ... such as trying the same footage freshly imported into Final Cut Express just to see if that would affect the outcome... nada!
    Since you posted in the iMovie '08 area, specifically mention importing to both iMovie '06 and Final Cut, and specifically state the iMovie '06 sync was "good" when imported/captured, my initial assumption would be that your problem is not one of source "sampling." Basically, if the DV content is "in sync" on the camera and "in sync" when captured by one or more editors, then it would appear that the camcorder and import modules are both working correctly, as this is when you should first see any sampling problem show up.
    Since you indicate the final product -- the DVD -- is out of sync, I would look at the intermediate files next to see if they are out of sync. If so, then you have a problem in the file-output phase of your workflow; if not, then the problem is likely in the conversion/muxing of content in iDVD, which could be the result of a bad install, update, or settings in applications or the core system. Depending on your workflow, I would try opening either the "Media Browser", "Shared iDVD", or exported file in the QT Player and see if it displays the same synchronization problem as the final DVD. If not, I would normally assume the data being received by iDVD is good which, as I indicated above, would point to a problem in converting the data to MPEG-2 and "muxing" it properly with the linear PCM (AIFF) converted audio. Have you tried inspecting the VOB file info in an application like VLC or MPEG Streamclip to see if anything is obviously askew, or tried burning the DVD with an alternative application like Toast?
