Bit Depth and Render Quality

When you finally export media to a delivery format via the encoder, do the project's preview Bit Depth and Render Quality settings affect the output file?
I know there is a "Use Preview Files" setting in the export dialog, but I just want to be sure of what I am doing.

Jeff's response reflects my perspective as well, and it is backed up both by my own tests and by the official Adobe word.
Exhibit A: My Tests
That is DV footage with a title superimposed over it in a DV sequence, with a Gaussian Blur effect (the Premiere-accelerated one) applied to the title; all samples are from that sequence exported back to DV. The point was to show the relative differences in processing between software and hardware MPE, between exporting from Premiere and queueing to AME, and the effect of the Maximum Bit Depth and Maximum Render Quality options on export (not the sequence settings; those have no bearing on export).
The "blooming" evident in the GPU exports is due to hardware MPE's linear color processing. I think it's ugly, but that's not the point here. Further down the line, you can see the effect of Maximum Bit Depth (and MRQ) on both software MPE and hardware MPE. I assume you can see the difference between the Maximum Bit Depth-enabled export and the one without. Bear in mind that this is 8-bit DV footage composited and "effected" and exported back to 8-bit DV. I don't understand what your "padding with zeroes" and larger file size argument is motivated by--my source files and destination files are the same size due to the DV codec--but it's plainly clear that Maximum Bit Depth has a significant impact on output quality. Similar results would likely be evident if I used any of the other 32-bit enabled effects; many of the color correction filters are 32-bit, and should exhibit less banding, even on something 8-bit like DV.
Exhibit B: The Adobe Word
This is extracted from Karl Soule's blog post, Understanding Color Processing: 8-bit, 10-bit, 32-bit, and more. This section comes from Adobe engineer Steve Hoeg:
1. A DV file with a blur and a color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit.
2. A DV file with a blur and a color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than the previous example.
3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be still higher quality because the final output format supports greater precision.
4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data.
5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full precision through the whole pipeline.
6. A title with a gradient and a blur on an 8-bit monitor. This will display in 8-bit and may show banding.
7. A title with a gradient and a blur on a 10-bit monitor (with hardware acceleration enabled). This will render the blur in 32-bit, then display at 10-bit. The gradient should be smooth.
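To make the difference between example #1 and example #2 concrete, here is a minimal NumPy sketch of my own (not Premiere's actual code): two stand-in "effects" are chained over a dark gradient, once with 8-bit frames between steps and once with float frames, and both results are compared against a full-precision reference. The rounding between steps in the 8-bit chain is exactly what shows up as banding.

```python
import numpy as np

def quantize_8bit(x):
    """Round a 0..1 float frame to 8-bit code values and back."""
    return np.round(np.clip(x, 0.0, 1.0) * 255.0) / 255.0

def blur(x):
    """Stand-in for a blur effect: a simple 9-tap box filter."""
    return np.convolve(x, np.ones(9) / 9.0, mode="same")

def color_correct(x):
    """Stand-in for a color corrector: a gamma tweak that stretches shadows."""
    return np.clip(x, 0.0, 1.0) ** 0.6

ramp = np.linspace(0.0, 0.1, 4096)   # a dark, shallow gradient (banding-prone)
src = quantize_8bit(ramp)            # the source is 8-bit, like DV

reference = color_correct(blur(src))                                 # full-precision result
low_depth = quantize_8bit(color_correct(quantize_8bit(blur(src))))   # 8-bit frame between effects
max_depth = quantize_8bit(color_correct(blur(src)))                  # float between effects, 8-bit only at the end

for name, result in [("8-bit intermediates", low_depth),
                     ("float intermediates", max_depth)]:
    err = np.abs(result - reference).max() * 255.0
    print(f"{name}: max deviation from reference = {err:.2f} 8-bit code values")
```

The float chain deviates from the reference only by the final rounding, while the 8-bit chain carries several code values of extra error into the shadows, which is where banding becomes visible.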
Bullet #2 is pretty much what my tests reveal.
I think the Premiere Pro Help Docs get this wrong, however:
High-bit-depth effects
Premiere Pro includes some video effects and transitions that support high-bit-depth processing. When applied to high-bit-depth assets, such as v210-format video and 16-bit-per-channel (bpc) Photoshop files, these effects can be rendered with 32-bpc pixels. The result is better color resolution and smoother color gradients with these assets than would be possible with the earlier standard 8-bit-per-channel pixels. A 32-bpc badge appears to the right of the effect name in the Effects panel for each high-bit-depth effect.
I added the emphasis; after my tests and the quote from Steve Hoeg, it should be obvious that this is not the case. These 32-bit effects can be applied to 8-bit assets, and if the Maximum Bit Depth flag is checked on export, those 32-bit effects are processed at 32-bit regardless of the destination format of the export. Rendering and export/compression are two different processes altogether, and that's why using the Maximum Bit Depth option has far more impact than "padding with zeroes." You've made that claim repeatedly, and I believe it to be false.
Your witness...

Similar Messages

  • Maximum bit depth-maximum render quality when dynamic linking

    Hi
    A bit confused by the use of Maximum bit depth and Maximum render quality as used both in Sequence Settings and also as options when rendering in AME.
    1 Do you need to explicitly enable these switches in the sequence for best quality or, do you simply need to switch them on in AME when you render in Media Encoder?
    2 When dynamic linking to After Effects, when should you use an 8 bit vs 16 or 32 bit working space, and, how does this bit depth interact with the maximum bit depth, maximum render quality in PPro?

    Hi jbach2,
I understand your confusion.  I'm like that most of the time I'm working. *chuckle*  The two settings you mentioned are two completely different parameters affecting (or is it effecting) your video. You do not need to enable them within the sequence itself unless you want to preview video on your program monitor at the highest quality.  I personally don't recommend it, as it's a tremendous resource hog (the program even warns you when you try to click them) and unnecessary for improving final output.  Again, do not enable these options in your sequence settings if you only want a high-quality export. Doing so will greatly reduce your editing performance unless you have a high-end system. ...and even then I don't think it's worth it unless you're editing on a huge screen with a Director who wants to see everything at maximum quality during the edit process.
    Keeping it simple...
Resizing your final output video? Use Maximum Render Quality.
Starting from or working with high-bit-depth sources? Use Maximum Bit Depth.
    When/where do I enable these? In the AME only. ^_^
    Why?:
    Enabling the Max bit and Max render only needs to be done when you are exporting.  They both serve different functions. 
Max Render Quality aids in the scaling/conversion process only.  My understanding is that you never need to enable Maximum Render Quality (MRQ) unless you are exporting to a frame size or pixel aspect ratio different from your original video.  For example, when rendering a 1080p timeline out to a 480p file, you'll want to use MRQ to ensure the best scaling with the fewest artifacts and the least aliasing.  If you're exporting at the same size you're working with, DON'T enable MRQ.  It will just cost you time and CPU. Its only function is to do a high-quality resizing of your work.
    Maximum bit depth increases the color depth that your video is working with and rendering to.  If you're working with video that has low color depth, then I don't believe it will matter.  However, if you're working with 32 bit color on your timeline in PPro and/or After Effects, using lots of graphics, high contrast values, or color gradients, you may want to enable this option. It ultimately depends on the color depth of your source material.
    The same applies to After Effects.
Create something in AE like a nice color gradient.  Now switch the same project between 8-, 16-, and 32-bit depth, and you will see a noticeable difference in how the bit depth affects your colors and the smoothness of the gradient.
Bit depth affects how different plugins/effects change your overall image.  Higher depth means more colors to work with (and, incidentally, more CPU needed).
    Just remember that "DEPTH" determines how many colors you can "fill your bucket with" and "QUALITY" is just that, the quality of your "resize".
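As a rough illustration of why the resampling method matters when downscaling (which is what MRQ is about), here is a small NumPy sketch of my own; it is not Premiere's actual scaler, just a comparison of dropping pixels versus filtering them first:

```python
import numpy as np

full_res = np.tile([1.0, 0.5, 0.0, 0.5], 480)   # 1920 px of fine 4-pixel stripes
factor = 4                                      # e.g. roughly HD width down to SD width

decimated = full_res[::factor]                          # "cheap" scaling: just drop pixels
averaged = full_res.reshape(-1, factor).mean(axis=1)    # filtered scaling: average each group

print("mean of original      :", full_res.mean())   # 0.5
print("mean after decimation :", decimated.mean())  # 1.0 - only one phase of the stripe survives (aliasing)
print("mean after averaging  :", averaged.mean())   # 0.5 - the fine detail is folded in smoothly
```

The decimated result lands on the wrong value entirely because the fine stripes alias; the filtered downscale keeps the correct overall level, which is the kind of artifact-free scaling MRQ is meant to provide.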
    http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html
    Check out this adobe blog for more info on color depth ^_^  Hope that helps!
---- a little excerpt from the blog I linked to above ----
    Now, 8-bit, 10-bit, and 12-bit color are the industry standards for recording color in a device. The vast majority of cameras use 8-bits for color. If your camera doesn’t mention the color bit depth, it’s using 8-bits per channel. Higher-end cameras use 10-bit, and they make a big deal about using “10-bit precision” in their literature. Only a select few cameras use 12-bits, like the digital cinema camera, the RED ONE.
Software like After Effects and Premiere Pro processes color images using color precision of 8-bit, 16-bit, and a special color bit depth called 32-bit floating point. You’ve probably seen these color modes in After Effects, and you’ve seen the new "32" icons on some of the effects in Premiere Pro CS5.

• Maximum Bit Depth / Maximum Render Quality Questions

    Maximum Bit Depth
    If my project contains high-bit-depth assets generated by high-definition camcorders, I was told to select Maximum Bit Depth because Adobe Premiere Pro uses all the color information in these assets when processing effects or generating preview files. I'm capturing HDV using the Matrox RTX-2 Hardware in Matrox AVI format.
    When I finally export my project using Adobe Media Encoder CS4, will selecting Maximum Bit Depth provide better color resolution once I post to Blu-ray format?
    Maximum Render Quality
I was told that by using Maximum Render Quality, I maintain sharp detail when scaling from large formats to smaller formats, or from high-definition to standard-definition formats, and that it maximizes the quality of motion in rendered clips and sequences, rendering moving assets more sharply. It's my understanding that at maximum quality, rendering takes more time and uses more RAM than at the default normal quality. I'm running Vista 64-bit with 8 GB of RAM, so I'm hoping to take advantage of this feature.
    Will this also help to improve better resolution when I finally export my project using Adobe Media Encoder CS4 and post to Blu-ray format?
    Does it look like I have the specs to handle Maximum Bit Depth and Maximum Render Quality when creating a new HDV project with the support of the Matrox RTX 2 Hardware capturing in Matrox AVI format? See Below Specs.
    System Specs
    Case: Coolmaster-830
Op System: Vista Ultimate 64 Bit
    Edit Suite: Adobe Creative Suite 4 Production Premium Line Upgrade
    Adobe Premiere Pro CS 4.0.1 update before installing RT.X2 Card and 4.0 tools
    Performed updates on all Adobe Production Premium Products as of 03/01/2009
    Matrox RTX2 4.0 Tools
    Main Display: Dell 3007 30"
    DVI Monitor: Dell 2408WFP 24"
    MB: ASUS P5E3 Deluxe/WiFi-AP LGA 775 Intel X38
    Display Card: SAPPHIRE Radeon HD 4870 512MB GDDR5 Toxic ver.
    PS: Corsair|CMPSU-1000HX 1000W
    CPU: INTEL Quad Core Q9650 3G
    MEM: 2Gx4|Corsair TW3X4G1333C9DHXR DDR3 (8 Gigs Total)
    1 Sys Drive: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB
    Cache SATA 3.0Gb/s
2 Raid 0: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB Cache SATA 3.0Gb/s using Intel's integrated RAID controller on MB

Just some details that I find useful on Maximum Bit Depth:
You really need it even with 8-bit source files when using heavy grading, multiple curves, or vignettes. If after grading you see banding, go to Sequence > Sequence Settings from the top menu and check "Maximum Bit Depth" (ignore the performance popup), then check your preview again (it will change in a second) to see if banding is still present in 32-bit mode. If there is no banding, check it when exporting; if banding is still there, change your grading. Then uncheck it to continue editing.
Unfortunately, Maximum Bit Depth exporting is extremely time-consuming, but it can really SAVE YOUR DAY when facing artifacts after heavy grading, by completely or almost completely eliminating banding and other unwanted color distortions.
Use it only for small previews or the truly final output.
    Best Regards.

  • Turning on Render at Maximum Bit Depth and Maximum Render Quality crashes render every time

    I've tried a few times to render an H264 version of my Red media project with Maximum Bit Depth and Maximum Render Quality.  Premiere crashes every time.  I have GPUs enabled. Are people using these settings with Red media and successfully rendering?

To answer your specific question, did you see the tooltip?
I believe it allows for 32-bit processing (16-bit if unchecked). Per the project settings help file at http://helpx.adobe.com/premiere-elements/using/project-settings-presets.html
    Maximum Bit Depth
    Allows Premiere Elements to use up to 32‑bit processing, even if the project uses a lower bit depth. Selecting this option increases precision but decreases performance.
    The help file for export is somewhat less informative about what it actually does but does point out that it is the color bit depth - http://helpx.adobe.com/media-encoder/using/encode-export-video-audio.html
    (Optional) Select Use Maximum Render Quality or Render At Maximum Bit Depth. Note:  Rendering at a higher color bit depth requires more RAM and slows rendering substantially.
    In practice the simplest suggestion is to export twice - once with / once without the setting and compare the time taken and perceived quality.
    Cheers,
    Neale
    Insanity is hereditary, you get it from your children

  • Bit Depth and Bit Rate

I have a pre-recorded MP3 VO. I placed it into a track bed in GB. The client wants a compressed audio file with a bit depth of 16 bits and a bit rate of 128 kbps max, but recommends 96 kbps. If I need to adjust the bit depth and bit rate, can I do it in GB? And if so, where? Thanks for any help.

    Please be aware that Bit Depth and Bit Rate are two completely different things!
They belong to a group of buzzwords from the field of Digital Audio, which is what we are dealing with when using GarageBand or any other DAW. Some of those terms pop up even in iTunes.
    Digital Audio
    To better understand what they are and what they mean, here is a little background information.
Whenever dealing with Digital Audio, you have to be aware of two steps that convert an analog audio signal into a digital audio signal. These magic black boxes are called the ADC (Analog-to-Digital Converter) and, “on the way back”, the DAC (Digital-to-Analog Converter).
    Step One: Sampling
The analog audio (in the form of an electric signal, like from an electric guitar) is represented by a waveform. The electric signal (voltage) changes up and down in a specific shape that represents the “sound” of the audio signal. While the audio signal is “playing”, the converter measures the voltage at regular intervals. These are like “snapshots” or samples, taken at specific times. Those time intervals are determined by a “Rate”, which tells you how often per second something happens. The unit is Hertz [Hz], defined as “how often per second” or “1/s”. A Sample Rate of 48kHz means that the converter takes 48,000 Samples per second.
    Step Two: Quantize (or digitize)
All these Samples are still analog values, for example 1.6 Volts, -0.3 Volts, etc. But each analog value now has to be converted into a digital form of 1s and 0s. This is done similarly to quantizing a note in GarageBand. The value (i.e. the note) cannot sit at just any position; it has to be placed on a grid with specific values (i.e. 1/16 notes). The converter does a similar thing. It provides a grid of available numbers that the original measured Sample has to be rounded to (like when a note gets shifted in GarageBand by the quantize command). This grid, the amount of available numbers, is called the Bit Depth. Other terms like Resolution or Sample Size are also used. A Bit Depth of 16 bits allows for 65,536 possible values.
So the two parameters that describe the quality of a Digital Audio Signal are the Sample Rate (“how often”) and the Bit Depth (“how fine a resolution”). The very simplified rule of thumb is: the higher the Sample Rate, the higher the possible frequency range, and the higher the Bit Depth, the greater the possible dynamic range.
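As a small illustration of the two steps above, here is a NumPy sketch (my own example, not GarageBand's internals) that samples a 1 kHz tone at 48 kHz and then rounds each sample onto a 16-bit grid:

```python
import numpy as np

sample_rate = 48_000                          # samples per second (Hz)
bit_depth = 16                                # bits per sample
levels = 2 ** bit_depth                       # 65,536 available grid values

t = np.arange(sample_rate) / sample_rate      # one second of sample times (step one: sampling)
analog = 0.8 * np.sin(2 * np.pi * 1000 * t)   # the "analog" 1 kHz signal, between -1 and +1

step = 2.0 / (levels - 1)                     # spacing of the grid across the -1..+1 range
digital = np.round(analog / step) * step      # step two: snap each sample to the nearest grid value

print("samples taken in one second:", len(digital))
print("largest rounding error:", np.abs(digital - analog).max())   # about half a grid step
```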
    Uncompressed Digital Audio vs. Compressed Digital Audio
So far I haven’t mentioned the “Bit Rate” yet. There is a simple formula that describes the Bit Rate as the product of Sample Rate and Bit Depth (per channel): Sample Rate * Bit Depth = Bit Rate. However, the Bit Rate and how it is used (and often misused and misunderstood) has to do with Compressed Digital Audio.
    Compressed Digital Audio
First of all, this has nothing to do with a compressor plugin that you use in GarageBand. When talking about compressed digital audio, we are talking about data compression. This is a special way of encoding data to make the size of the data set smaller. It is the fascinating field of “perceptual coding”, which uses psychoacoustic models to achieve that data compression. Some smart scientists found out that you can throw away some data in a digital audio signal and you wouldn’t even notice it; the audio would still sound the same (or almost the same). This is similar to a movie set: if you shoot a scene on a street, you only need the facade of the buildings and not necessarily the whole buildings.
Although the Sample Rate is still a parameter of compressed digital audio, the Bit Depth is not. Instead, the Bit Rate is used. The Bit Rate tells the encoder the maximum amount of bits it can produce per second. This determines how much data it has to throw away in order to stay inside that limit. An mp3 file (which is a compressed audio format) with a Bit Rate of 128kbit/s delivers decent audio quality. Raising the Bit Rate to 256kbit/s would increase the sound quality. AAC (which is technically an mp4 format) uses a better encoding algorithm: if this encoder is set to 128kbit/s, it produces better audio quality because it is smarter about which bits to throw away and which to keep.
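Plugging numbers into the formula above (per channel, times the number of channels) shows why compressed formats are so much smaller. A quick sketch, assuming CD-quality stereo and a 128 kbit/s MP3:

```python
sample_rate = 44_100    # Hz (CD quality)
bit_depth = 16          # bits per sample
channels = 2            # stereo

uncompressed = sample_rate * bit_depth * channels        # bits per second
print(uncompressed / 1000, "kbit/s uncompressed")        # 1411.2 kbit/s
print("a 128 kbit/s MP3 is roughly", round(uncompressed / 128_000), "times smaller")
```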
    Conclusion
Whenever you are dealing with uncompressed audio (aiff, wav), the two quality parameters are Sample Rate [kHz] and Bit Depth [bit] (aka Resolution, aka Sample Size)
    Whenever you are dealing with compressed audio (mp3, AAC), the two quality parameters are Sample Rate [kHz] and Bit Rate [kbit/s]
    If you look at the Export Dialog Window in GarageBand, you can see that the Quality popup menu is different for mp3/AAC and AIFF. Hopefully you will now understand why.
    Hope that helps
    Edgar Rothermich
    http://DingDingMusic.com/Manuals/
    'I may receive some form of compensation, financial or otherwise, from my recommendation or link.'

  • Supported bit depth and sampling frequency

    Hi,
    Is it possible to determine at runtime what sampling frequencies and bit depths are available for audio capture -- other than by trial and error? It does not seem that way from the API, but maybe I missed something, so I decided to post a question here. The only thing I was able to find so far is that 8-bit PCM WAV is required to be supported by all devices that support audio at all.
    Thanks in advance for any info!
    Sergei

Thanks for the info. I guess I was looking for something more specific - the exact bit rates and sample rates that Creative claims to support. Would you know where official and comprehensive data can be had? There must be a tech spec somewhere.
It is common these days in business to see a recording of, say, a conference call or seminar presentation at a 32k bit rate/11025Hz, or even 24k bit rate/8000Hz, posted to a company's website for download by those who could not be there, and MP3 players are increasingly used for their replay. Companies use low digitization rates because there is no need for hi-fi and the files are much smaller: less storage, faster download.
I'd be surprised to think that Creative don't have compatibility with the standard range of rates offered by ubiquitous programs like Audacity and dBpower, the latter being one they themselves recommend!
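For a sense of the storage savings being described, a quick back-of-the-envelope sketch (my own figures) of one hour of audio at the bit rates mentioned:

```python
# one hour of audio at each bit rate, in megabytes
for kbps in (24, 32, 128):
    megabytes = kbps * 1000 / 8 * 3600 / 1_000_000
    print(f"{kbps} kbit/s for one hour is about {megabytes:.0f} MB")
```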

  • Volume, bit depth, and quality - for the boffins

    hi y'all
my understanding is that with digital sound, if levels are too low you lose quality, as the signal is not carried using all of the bits.
This is explained clearly in the book 'Mixing With Your Mind' by Michael Paul Stavrou, using a 'photo of a skyscraper' analogy to represent audio clarity (focus) changing with volume (height of the skyscraper).
With digital, the highest part of the skyscraper is most in focus, whereas in analogue the middle part of the skyscraper is most in focus.
    3 questions:
    1) Is this understanding correct?
2) If you have recorded levels that are low, does adding gain or normalizing the files make any difference to the sound quality? - surely detail can't be added
    3) When you add gain to a quiet mix is there a difference in the quality of the outcome between the following approaches.
    1) normalizing the audio files for each track
2) adding gain to each track in the mixer, either by adjusting faders or adding an insert that increases gain
    3) adding gain at the mix output bus or master output fader?
    Would value anyone's views on this.
    best
    tommy banana

my understanding is that with digital sound, if levels are too low you lose quality, as the signal is not carried using all of the bits.
    Not really.
This is explained clearly in the book 'Mixing With Your Mind' by Michael Paul Stavrou, using a 'photo of a skyscraper' analogy to represent audio clarity (focus) changing with volume (height of the skyscraper)
    Unfortunately, this kind of analogy has misrepresented digital audio for a long time, and has caused all kinds of myths, like "digital audio is steppy or discontinuous" and "a higher sample rate gives you more resolution because the steps are smaller" etc etc.
    If you have recorded levels that are low does adding gain or normalizing the files make any difference to the sound quality?
    Yes, it makes it very very slightly worse.
    3) When you add gain to a quiet mix is there a difference in the quality of the outcome
    between the following approaches.
    1) normalizing the audio files for each track
2) adding gain to each track in the mixer, either by adjusting faders or adding an insert that increases gain
    3) adding gain at the mix output bus or master output fader?
    Normalising should really never be used. If all these processes are implemented correctly, there should be no practical difference between adding gain to all individual tracks, versus adding gain at the master fader.
    This wasn't the case some years ago with poorly implemented digital mixers, but in this day and age, pretty much everyone does it right.
Bottom line: if you really want to understand this stuff it can be useful, but honestly, improving your songwriting will have a far more positive effect on your music than worrying about which dither curve works best and whether adding +1dB of gain is going to destroy your audio.
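To put a number on "very very slightly worse," here is a small NumPy sketch of my own (ignoring dither) comparing a healthy recording level with a recording made about 40 dB too low and then normalized; the gain brings the level back but cannot add detail, it just scales up the quantization error baked in at capture time:

```python
import numpy as np

def record_16bit(signal, level):
    """'Record' a -1..+1 signal at the given level onto a 16-bit grid."""
    return np.round(signal * level * 32767) / 32767

t = np.linspace(0, 1, 48_000, endpoint=False)
source = np.sin(2 * np.pi * 440 * t)            # a clean 440 Hz tone

hot = record_16bit(source, 0.9)                 # healthy recording level
quiet = record_16bit(source, 0.01)              # recorded about 40 dB too low

for name, take, level in [("recorded hot", hot, 0.9),
                          ("recorded low, then normalized", quiet, 0.01)]:
    noise = take / level - source               # bring back to full scale and compare
    snr = 10 * np.log10(np.mean(source ** 2) / np.mean(noise ** 2))
    print(f"{name}: about {snr:.0f} dB signal-to-noise")
```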

  • Internal handling of Bit Depth and Sample Rate

I am wondering if iTunes allows accurate import of 48kHz/24-bit files or whether it will convert to 44.1kHz/16-bit by default. There is contradictory information on the Internet.
    Thanks.

    I just experimented on my machine, which is running iTunes 8.1. 48/24 WAV files can be added to iTunes. When adding to the library, it does not change them.
    However, if you use iTunes as a conversion tool, it cannot create 48/24 files. The highest it can go is 48/16. That may be the source of the seemingly contradictory information that you read.
    I did not try syncing the iPod. However, based on past problems with 48/16 files, I would not be surprised if the iPod has a problem with 48/24 files.

  • How to view resolution (ppi/dpi) and bit depth of an image

    Hello,
how can I check the native resolution (ppi/dpi) and bit depth of my image files (JPEG, DNG, and PEF)?
If it is not possible in Lightroom, is there a free app for Mac that makes this possible?
    Thank you in advance!

I have used several different cameras, which probably have different native bit depths. I assume that Lr converts all RAW files to 16 bits, but the original/native bit depth still affects the quality, right? Therefore, it would be nice to be able to check the native bit depth of an image and e.g. compare it to an image with a different native bit depth.
I know a little bit of detective work would solve the issue, but it would be more convenient to be able to view native bit depth in Lightroom, especially when dealing with multiple cameras, some of which might have the option to use different bit depths, which would make the matter significantly harder.
This issue is certainly not critical and doesn't fit into my actual workflow. As I stated in a previous post, I am simply curious and want to learn, and I believe that being able to compare images with different bit depths conveniently would be beneficial to my learning process.
Anyway, I was simply checking if somebody happened to know a way to view bit depth in Lr4, but I take it that it is not possible, and I can certainly live with that.
    Check the specifications of your camera to know at what bit depth it writes Raw files. If you have a camera in which the Raw bit depth can be changed the setting will probably be recorded in a section of the metadata called the Maker Notes (I don't believe the EXIF standard includes a field for this information). At any rate, LR displays only a small percentage of the EXIF data (only the most relevant fields) and none of the Maker Notes. To see a fuller elucidation of the metadata you will need a comprehensive EXIF reader like ExifTool.
    However, the choices nowadays are usually 12 bit or 14 bit. I can assure you that you cannot visually see any difference between them, because both depths provide a multiplicity of possible tonal levels that is far beyond the limits of human vision - 4,096 levels for 12 bit and 16,384 for 14 bit. Even an 8 bit image with its (seemingly) paltry 256 possible levels is beyond the roughly 200 levels the eye can perceive. And as has been said, LR's internal calculations are done to 16 bit precision no matter what the input depth (although your monitor is probably not displaying the previews at more than 8 bit depth) and at export the RGB image can be written to a tiff or psd in 16 bit notation. The greater depth of 14 bit Raws can possibly (although not necessarily) act as a vehicle for greater DR which might be discerned as less noise in the darkest shadows, but this is not guaranteed and applies to only a few cameras.
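For reference, here are the level counts quoted above computed directly; each extra bit doubles the number of possible tonal values per channel:

```python
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} tonal levels per channel")
```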

  • When do I need maximum render quality?

    Help me to understand it right.
    There is "maximum bit depth" and "maximum render quality".
I only use Maximum Render Quality if I downscale a project from HD to SD, to get a better downscale?!
I use Maximum Bit Depth if I want to render effects in 10-bit quality.
I usually cut with XDCAM files, outputting to disc - with these 8-bit files there is no need to render with Maximum Bit Depth, is there?
Because when I output to an XDCAM disc, all files must be encoded to 8-bit XDCAM MXF again, right?

with these 8-bit files there is no need to render with Maximum Bit Depth, is there?
See The Video Road blog post on Understanding Color Processing. At the end of the article, Steve Hoeg presents a detailed explanation of how the 'Maximum Bit Depth' flag works.
    See also this discussion on 'Maximum Render Quality'.
Additionally, I don't understand right now why Premiere is rendering my 50 Mbit XDCAM files as 25 Mbit MPEG files. Is this only preview quality?
If you're talking about preview files, then yes, unless you tick the 'Use Previews' checkbox in the Export Settings dialog. By default PrPro uses MPEG2 for rendering previews. You can change that while you're creating a new sequence: in the New Sequence dialog click the Settings tab, choose 'Custom' from the Editing Mode drop-down list, and then you will be able to set the Preview File Format and Codec in the Video Previews section. Now the question is whether you really want that: rendering to a production codec will take longer, whereas rendering previews happens more often (if at all) than rendering final output...
    Tento wrote:
    No it's not necessary. Unless you want to make a color grading in 10bit, but that would be with a lossless codec like DNxHD.
No, that's a misconception. See The Video Road blog post on Understanding Color Processing that I mentioned earlier in my comment.

  • Creative Audigy 2 NX Bit Depth / Sample Rate Prob

This is my first post to this forum.
    Down to business: I recently purchased a Creative Audigy 2 NX sound card. I am using it on my laptop (an HP Pavilion zd 7000, which has plenty of power to support the card.) I installed it according to the instructions on the manual, but I have been having some problems with it. I can't seem to set the bit depth and sample rate settings to their proper values.
The maximum bit depth available from the drop-down menu in the "Device Control" -> "PCI/USB" tab is 16 bits and the maximum sample rate is 48kHz. I have tried repairing and reinstalling the drivers several times, but it still won't work. The card is connected to my laptop via USB 2.0.
I looked around in the forums and found out that at least one other person has had the same problem, but no solution was posted. If anyone knows of a way to resolve this issue I would appreciate the input!
    Here are my system specs:
    HP Pavilion zd 7000
    Intel Pentium 4 3.06 GHz
1GB Ram
    Windows XP Prof. SP 2
    Thnx.
-cmsleiman

Well, I am new to high-end sound cards, and I may be misinterpreting the terminology, but the sound card is supposed to be a 24-bit/96kHz card.
I am under the impression that one should be able to set the output quality of the card to 24 bits of depth and a 96kHz sample rate, regardless of the speaker setting one may be using, to decode good-quality audio streams (say, an audio CD or the Dolby Digital audio of a DVD movie). I can currently achieve this only on 2.1 speaker systems (or when I set the speaker setting of the card to 2.1). Otherwise, the maximum bit depth/sample rate I can set the card output to is a sample rate of 48kHz and a bit depth of 16 bits.
Am I mistaken in thinking that if I am playing a good-quality audio stream I should be able to raise the output quality of the card to what it is advertised and claims to have?
    Thnx

  • Importing audio - sample rate/bit depth

    Hi forum,
I am working on a project at 44.1K, 24-bit. Audio elements are being sent to me to be added. Some have come in incorrectly, at 48K, 16-bit. I can convert easily, but didn't think I needed to.
I thought that by selecting the "Convert Audio Sample Rate When Importing" option when creating the project, that would all be worked out.
That is what I've done - and the file seems to be the correct pitch -- yet it shows up in the audio window with its original specs (48/16). Also ... will Logic keep it at 16-bit and play all other files at 24-bit?
    I want to be sure about this ... something seems fishy.
    Cheers
    Dee, Ottawa

    Hi,
I am reposting.
Regarding the same project: must everything in the Arrange window of a project be the same sample rate (and bit depth)? My understanding is that there is real-time conversion during playback - that all file types supported by Logic, and all virtual instrument samples, are converted in real time to conform to the selected bit depth and sample rate of the project.
I ask only as I received reference sound files to temporarily place in a mix, to see how the mix will sit when going to post. Two audio files are almost a semitone higher than they are supposed to be (which is odd - so I am pretty sure it was just quickly sung in the wrong key at their end). And one file which was supposed to be timed out is not lining up.
I can work around this on this project. And I can simply convert in the sample editor and re-import to compare.
But again ... I just want to check my understanding for future reference. The manual indicates differing rates etc. should not be a problem (i.e. that Logic allows one to have differing rates and bit depths); conversely, the M Sitter video implies just the opposite.
I just want to be sure of this for future reference. Any advice?
    Thanks in advance.
    Cheers
    Dee
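As a side note on sample-rate conversion: if a file were simply played at the wrong rate instead of being converted, its pitch would shift by the ratio of the two rates. A quick sketch of my own for the common 44.1/48 kHz mismatch (the files in this thread appear to play at the correct pitch, so real-time conversion is evidently happening):

```python
import math

def semitone_shift(file_rate, playback_rate):
    """Pitch shift, in semitones, if a file is played at the wrong sample rate."""
    return 12 * math.log2(playback_rate / file_rate)

print(f"44.1 kHz file forced to 48 kHz: {semitone_shift(44_100, 48_000):+.2f} semitones")
print(f"48 kHz file forced to 44.1 kHz: {semitone_shift(48_000, 44_100):+.2f} semitones")
```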

  • Apple Pro Res 422 HQ export - "render at max depth" and "24 bit or 48 bit depth"?

    I'm exporting my 90 minute feature for DCP using the Apple Pro Res 422 HQ codec. The film is 1920x1080 and that is what the export will be. We used a variety of cameras (Canon 7D, Sony XR160, GoPro, Blackmagic in HD) for the film.
    For the export options:
    Do I check "Render at Maximum Depth"?
    Which do I choose - 24 bit or 48 bit depth? - one has to be chosen even when "Render at Maximum Depth" is unchecked
    When I asked the DCP house, they said that "Render at Maximum Depth doesn't actually do anything when using this codec" and haven't answered the 24 vs. 48 bit question.
    This discussion:
    https://forums.adobe.com/message/4529886#4529886
    says that you "never need to enable the Max Render Quality (MRQ) unless you are exporting in a format/pixel ratio different from your original video."
    This discussion:
    https://forums.adobe.com/message/5619144#5619144
    adds insight into what 24 vs 48 bit depth means, but doesn't answer my specific question
    Thanks for your help.

    For your reading enjoyment -
    http://forums.adobe.com/message/4529886
    http://images.apple.com/finalcutpro/docs/Apple_ProRes_White_Paper_October_2012.pdf
    A question for you - what is your workflow where you think you might need this? Keep in mind that the majority of cameras only record 8-bit color, and also in a very highly compressed format, so you won't necessarily gain anything by going to 4444, as the source video is of limited quality already.
    Basically, if you don't have any high-bit-depth sources in your timeline to preserve the quality of, there may be little or no benefit to enabling "Max Depth".
    Thanks
    Jeff Pulera
    Safe Harbor Computers

  • Maximum audio sample rate and bit depth question

    Anyone worked out what the maximum sample rates and bit depths AppleTV can output are?
    I'm digitising some old LPs and while I suspect I can get away with 48kHz sample rate and 16 bit depth, I'm not sure about 96kHz sample rate or 24bit resolution.
    If I import recordings as AIFFs or WAVs to iTunes it shows the recording parameters in iTunes, but my old Yamaha processor which accepts PCM doesn't show the source data values, though I know it can handle 96kHz 24bit from DVD audio.
    It takes no more time recording at any available sample rates or bit depths, so I might as well maximise an album's recording quality for archiving to DVD/posterity as I only want to do each LP once!
    If AppleTV downsamples however there wouldn't be much point streaming higher rates.
I wonder how many people out there stream uncompressed audio to AppleTV? With external drives that will hold several hundred uncompressed CD albums, is there any good reason not to these days when you are playing back via your hi-fi? (I confess most of my music is in MP3 format just because I haven't got round to ripping it again uncompressed for AppleTV.)
    No doubt there'll be a deluge of comments saying that recording LPs at high quality settings is a waste of time, but some of us still prefer the sound of vinyl over CD...
    AC
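A rough storage sketch behind "several hundred uncompressed CD albums" (my own figures, assuming roughly 50-minute albums and a 500 GB drive):

```python
bytes_per_second = 44_100 * 16 * 2 / 8                     # CD audio: about 176 kB per second
album_mb = bytes_per_second * 50 * 60 / 1_000_000          # a ~50-minute album
print(f"one uncompressed album: about {album_mb:.0f} MB")  # roughly 530 MB
print(f"a 500 GB drive holds roughly {500_000 / album_mb:.0f} such albums")
```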

    I guess the answer to this question relies on someone having an external digital amp/decoder/processor that can display the source sample rate and bit depth during playback, together with some suitable 'demo' files.
    AC

  • Wraptor DCP and missing Maximum Bit Depth option

DCPs are made from 12-bit JPEG 2000 frame files wrapped in an MXF container. But there is no Maximum Bit Depth option in the Wraptor DCP codec for AME 8.
Does this mean:
1. Wraptor DCP has Maximum Bit Depth on by default, so it correctly produces high-bit-depth color renders?
2. OR, Wraptor DCP ignores AME's Maximum Bit Depth, so it always renders in 8 bits and then scales up to 12 bits (which would be a waste of information)?
The following article implies that option 2 is the case, which would be a shame for such a quality-demanding workflow as DCP production.
    The Video Road – Understanding Color Processing: 8-bit, 10-bit, 32-bit, and more

    Wraptor DCP output is not working for me on a feature length film.  Am I missing something?
    Symptom:  simply hangs at various stages of the job.  No message, never crashes... just STOPS on one frame and never resumes. 
    Hardware:
    OSX 10.9.2 on MacPro late 2013
    Going to try it with an older machine.  Any suggestions?
