How is actual bit depth measured

I am analyzing some recordings I made in 24-bit format in Audacity.  Audacity can record true 24-bit integer files which Audition 3.0.1 recognizes as such.
After checking a couple of the files in Audition 3.0.1, I found that the meaning of "Actual bit depth" in the amplitude statistics is not entirely clear.  It does seem to be based on the maximum peak in the file, but how the bit estimate is derived is not obvious.
For example, in one of the files if I select any portion that includes the highest peak and get amplitude statistics, the actual bit depth reported is 24.  Example of a short selection that includes the peak:
Mono
Min Sample Value:    -22003
Max Sample Value:    26329
Peak Amplitude:    -1.9 dB
Possibly Clipped:    0
DC Offset:    -.003
Minimum RMS Power:    -44.45 dB
Maximum RMS Power:    -17.58 dB
Average RMS Power:    -30.18 dB
Total RMS Power:    -25.98 dB
Actual Bit Depth:    24 Bits
Using RMS Window of 50 ms
However, as far as I can tell, any selection in the same file that does not include the highest peak (but may include nearby peaks of similar level) results in an actual bit depth of 16:
Mono
Min Sample Value:    -20082
Max Sample Value:    22172
Peak Amplitude:    -3.39 dB
Possibly Clipped:    0
DC Offset:    -.001
Minimum RMS Power:    -54.14 dB
Maximum RMS Power:    -19.96 dB
Average RMS Power:    -36.26 dB
Total RMS Power:    -32.95 dB
Actual Bit Depth:    16 Bits
Using RMS Window of 50 ms
So it is unclear what level of peak amplitude distinguishes between 24- and 16-bit actual depth.  If the bit-depth analysis is based on most-significant bits being zero, I would expect the trigger for identifying 16-bit actual depth in a 24-bit file to be that the 8 most-significant bits of the 24-bit samples are zero for all samples in the selection.  So for a 24-bit integer file to have an actual bit depth of 16 bits for a selection, the greatest peak would have to be below -48 dBFS.  But in the example above, the distinction seems to hinge on a peak amplitude of around -3.4 dB versus -1.9 dB.
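For what it's worth, a common heuristic for "actual" bit depth (not necessarily what Audition does; its algorithm is undocumented) looks at the least-significant bits instead: 16-bit data padded into a 24-bit container leaves the low 8 bits of every sample zero, regardless of peak level. A rough Python sketch:

```python
def effective_bit_depth(samples, container_bits=24):
    """Estimate effective bit depth of integer samples by counting
    least-significant bits that are zero in every sample."""
    used = 0  # OR of all magnitudes exposes every bit that is ever set
    for s in samples:
        used |= abs(s)
    if used == 0:
        return 0
    trailing_zeros = (used & -used).bit_length() - 1
    return container_bits - trailing_zeros

# 16-bit values shifted into a 24-bit container: low 8 bits stay zero
print(effective_bit_depth([s << 8 for s in (-22003, 26329)]))  # 16
# Same values using the full low-order range: all 24 bits in play
print(effective_bit_depth([-22003, 26329]))                    # 24
```

Under this heuristic the result would not depend on peak level at all, which is why the -3.4 dB versus -1.9 dB behaviour above suggests Audition is doing something different.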

>what actual difference does it make to anything?
Hard to say what difference it makes to anything without knowing what "actual bit depth" actually measures.  It could be important, or could be useless.  In the past I have not paid much attention to it because it is poorly described.  It recently came to my attention because the files from a recent recording in 24-bit integer format were all reported as 16-bit "actual" bit depth.  This is in contrast to some previous recordings made in the same way which were identified as 24-bit "actual".  This implies there might be something different in the data formatting, the communication between the software and driver, between the driver and card, or something else.
It is a bit surprising that no one got Synt. to explain it properly.
>Oh, and the other thing about 24-bit int files is that they can lead you into a very false sense of security. If you decided, for instance, to reduce the amplitude of one by 48dB, then save it, and then decide to increase it again by that 48dB, you'd end up with a 24-bit file with just 16 bits of resolution - simply because it's an integer file. If you did the same thing with Audition's 32-bit floating point system, you'd lose no resolution at all.
In my workflow that produces original recordings in a 24-bit integer file format, the format is an efficient way of storing 24-bit integer data from a 24-bit card.  Processing is another matter.  I use the Audition preference to convert files automatically to 32-bit when opening.
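The quoted 48 dB round trip is easy to simulate (illustrative Python; a single hypothetical full-scale 24-bit sample stands in for a whole file):

```python
# Cut a signal by 48 dB, "save" it, then boost it back by 48 dB.
gain_down = 10 ** (-48 / 20)      # -48 dB as a linear factor, ~1/251
gain_up = 1 / gain_down

sample = 8_388_607                # largest positive 24-bit integer value

# Integer file: saving rounds to a whole number, and that rounding
# error is amplified ~251x on the way back up
attenuated = round(sample * gain_down)
restored_int = round(attenuated * gain_up)

# 32-bit float file: no intermediate rounding to integers
restored_float = (sample * gain_down) * gain_up

print(sample - restored_int)      # nonzero: resolution permanently lost
print(sample - restored_float)    # ~0: the float round trip is transparent
```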

Similar Messages

  • Maximum bit depth-maximum render quality when dynamic linking

    Hi
    A bit confused by the use of Maximum bit depth and Maximum render quality as used both in Sequence Settings and also as options when rendering in AME.
    1 Do you need to explicitly enable these switches in the sequence for best quality or, do you simply need to switch them on in AME when you render in Media Encoder?
    2 When dynamic linking to After Effects, when should you use an 8 bit vs 16 or 32 bit working space, and, how does this bit depth interact with the maximum bit depth, maximum render quality in PPro?

    Hi jbach2,
I understand your confusion.  I'm like that most of the time I'm working. *chuckle*  The two settings you mentioned are two completely different parameters affecting your video. You do not need to enable them within the sequence itself unless you want to preview video on your program monitor at the highest quality.  I personally don't recommend it, as it's a tremendous resource hog (the program even warns you when you try to click them) and unnecessary for improving final output.  Again, do not enable these options in your sequence settings if you only want a high-quality export. Doing so will greatly reduce your editing performance unless you have a high-end system. ...and even then I don't think it's worth it unless you're editing on a huge screen with a Director who wants to see everything at maximum quality during the edit process.
    Keeping it simple...
    Resizing your final output video? Use Maximum Render Quality.
    Starting or working with high bit-depth sources? Use Maximum Bit Depth.
    When/where do I enable these? In the AME only. ^_^
    Why?:
    Enabling the Max bit and Max render only needs to be done when you are exporting.  They both serve different functions. 
    Max Render aids in the scaling/conversion process only.  My understanding is that you never need to enable the Max Render Quality (MRQ) unless you are exporting in a format/pixel ratio different from your original video.  For example, when rendering a 1080p timeline out to a 480p file format, you'll want to use MRQ to ensure the best scaling with the least amount of artifacts and aliasing.  If you're exporting at the same size you're working with, DON'T enable MRQ.  It will just cost you time and CPU. Its only function is to do a high quality resizing of your work.
    Maximum bit depth increases the color depth that your video is working with and rendering to.  If you're working with video that has low color depth, then I don't believe it will matter.  However, if you're working with 32 bit color on your timeline in PPro and/or After Effects, using lots of graphics, high contrast values, or color gradients, you may want to enable this option. It ultimately depends on the color depth of your source material.
    The same applies to After Effects.
    Create something in AE like a nice color gradient.  Now switch the same project between 8, 16, and 32 bit depth, and you will see a noticeable difference in how the bit depth affects your colors and the smoothness of the gradient.
    Bit depth affects how different plugins/effects change your overall image.  Higher depth means more colors to work with (and, incidentally, more CPU needed).
    Just remember that "DEPTH" determines how many colors you can "fill your bucket with" and "QUALITY" is just that, the quality of your "resize".
    http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html
    Check out this adobe blog for more info on color depth ^_^  Hope that helps!
    ----a lil excerpt from the blog i linked to above---
    Now, 8-bit, 10-bit, and 12-bit color are the industry standards for recording color in a device. The vast majority of cameras use 8-bits for color. If your camera doesn’t mention the color bit depth, it’s using 8-bits per channel. Higher-end cameras use 10-bit, and they make a big deal about using “10-bit precision” in their literature. Only a select few cameras use 12-bits, like the digital cinema camera, the RED ONE.
    Software like After Effects and Premiere Pro processes color images using color precision of 8-bits, 16-bits, and a special color bit depth called 32-bit floating point. You’ve probably seen these color modes in After Effects, and you’ve seen the new “32″ icons on some of the effects in Premiere Pro CS5.

  • How to view resolution (ppi/dpi) and bit depth of an image

    Hello,
    how can I check the native resolution (ppi/dpi) and bit depth of my image files (jpeg, dng and pef)?
    If it is not possible in Lightroom, is there a free app for Mac that makes this possible?
    Thank you in advance!

    I have used several different cameras, which probably have different native bit depths. I assume that Lr converts all RAW files to 16 bits, but the original/native bit depth still affects the quality, right? Therefore, it would be nice to be able to check the native bit depth of an image and e.g. compare it to an image with a different native bit depth.....
    I know a little bit of detective work would solve the issue, but it would be more convenient to be able to view native bit depth in Lightroom, especially when dealing with multiple cameras, some of which might have the option to use different bit depths, which would make the matter significantly harder.
    This issue is certainly not critical and doesn't fit into my actual workflow. As I stated in a previous post, I am simply curious and want to learn, and I believe that being able to compare images with different bit depths conveniently would be beneficial to my learning process.
    Anyway, I was simply checking if somebody happened to know a way to view bit depth in Lr4, but I take it that it is not possible, and I can certainly live with that.
    Check the specifications of your camera to know at what bit depth it writes Raw files. If you have a camera in which the Raw bit depth can be changed the setting will probably be recorded in a section of the metadata called the Maker Notes (I don't believe the EXIF standard includes a field for this information). At any rate, LR displays only a small percentage of the EXIF data (only the most relevant fields) and none of the Maker Notes. To see a fuller elucidation of the metadata you will need a comprehensive EXIF reader like ExifTool.
    However, the choices nowadays are usually 12 bit or 14 bit. I can assure you that you cannot visually see any difference between them, because both depths provide a multiplicity of possible tonal levels that is far beyond the limits of human vision - 4,096 levels for 12 bit and 16,384 for 14 bit. Even an 8 bit image with its (seemingly) paltry 256 possible levels is beyond the roughly 200 levels the eye can perceive. And as has been said, LR's internal calculations are done to 16 bit precision no matter what the input depth (although your monitor is probably not displaying the previews at more than 8 bit depth) and at export the RGB image can be written to a tiff or psd in 16 bit notation. The greater depth of 14 bit Raws can possibly (although not necessarily) act as a vehicle for greater DR which might be discerned as less noise in the darkest shadows, but this is not guaranteed and applies to only a few cameras.
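    The level counts quoted above are just powers of two, and each extra bit adds roughly 6 dB of theoretical dynamic range; a quick check in Python:

```python
import math

# Tonal levels per channel and theoretical dynamic range (~6.02 dB/bit)
levels = {bits: 2 ** bits for bits in (8, 12, 14, 16)}
for bits, n in levels.items():
    print(f"{bits} bit: {n} levels, ~{20 * math.log10(n):.1f} dB")
# 8 bit -> 256 levels, 12 bit -> 4096, 14 bit -> 16384, 16 bit -> 65536
```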

  • How to read 32 bit depth .bmp image

    How to read 32 bit depth .bmp image using LabVIEW?
    @nk
    Solved!
    Go to Solution.

    A "standard" image, by which I mean an image type most typically encountered, stores image data in RGB format with 8 bits per colour, making them 24-bit images (24bbp - bits per pixel). A 32-bit image normally includes an additional 8 bits for the alpha channel, but in BMP files this format is complex (see wiki article), and it appears the LabVIEW Read BMP function does not support it.
    Do you have the IMAQ toolkit? I can't test the theory, but perhaps the IMAQ functions for reading image files are more advanced and can read your images?
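    As an aside, you can verify what bit depth a BMP claims without any toolkit by reading two bytes of its header (sketched in Python here, since LabVIEW block diagrams don't paste as text; the fabricated header below is just to exercise the function):

```python
import struct

def bmp_bits_per_pixel(data: bytes) -> int:
    """Read biBitCount from a BMP in memory: the 14-byte file header is
    followed by BITMAPINFOHEADER, whose 2-byte little-endian bit-count
    field sits at file offset 28."""
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    (bit_count,) = struct.unpack_from("<H", data, 28)
    return bit_count

# Minimal fabricated header, enough to reach the bit-count field
fake_bmp = b"BM" + bytes(26) + struct.pack("<H", 32)
print(bmp_bits_per_pixel(fake_bmp))  # 32
```

A value of 32 here confirms the file is in the alpha-bearing format the stock Read BMP VI apparently rejects.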
    Thoric (CLA, CLED, CTD and LabVIEW Champion)

  • How do I reduce the bit depth of images to 1-bit within Acrobat 9?

    I am hoping a simple solution exists within Acrobat 9 for reducing the bit-depth of images to 1-bit.
    I know of two methods that both seem more like workarounds. One, edit the image using Photoshop. Two, without Photoshop, export the page as a 1-bit PNG and recreate the page in Acrobat. It seems like one of the preflight fixups should be able to get it done with the right settings. But, it's a labyrinth of unfamiliarity.

    There's no predefined 1-bit conversion in Preflight because it doesn't make sense. Preflight will not dither bitmaps, so most images will become black squares. Extreme color conversion is only intended for text/vector objects.
    If you want to try it anyway, you can create a custom Fixup if you have a  1-bit ICC profile.
    Preflight > Single Fixups
    Options menu > Create new Preflight Fixup
    Name it something like "Convert all to 1-bit BW"
    Search for "Convert colors" in the type of fixup box and add it
    Destination tab > Destination > your ICC profile for 1-bit black
    Uncheck "use destination from Output Intent"
    Keep everything else as default, though I'd suggest using "Embed as output intent for PDF/X" if you're making PDF/X documents
    Conversion Settings tab > All Objects + Any Color (except spot) + Convert to destination + Use rendering intent
    Press the + key to duplicate this step, and change the second copy to "Spot Color(s)"
    Press + again and change the third copy to "Registration color"
    Save the fixup and run it.
    In case you don't have a 1-bit  ICC profile installed, I've attached one.

  • Why only 10-bit depth dng files from 16-bit Nikon D90 nef files?

    When I convert 16-bit .nef files from my Nikon D90 to DNG I get only 10-bits depth.
    Since the camera should be producing 12-bit depth it seems I am losing information in the conversion, and I don't want that.
    I have installed the 7.1 DNG converter, and I suppose that is what is used when I download from camera memory card through Bridge 5.1 and click dng conversion.
    Same thing if I open the .nef in Photoshop 5.1 , which kicks up CameraRaw converter 6.7.0.339.
    Why is this?
    Can't .dng have more than 10-bit depth?
    Sverk

    Well, according to the user manual and to the review in
    http://www.imaging-resource.com/PRODS/D90/D90A.HTM
    the D90 delivers 12-bit color depth in the .NEF files.
    Of course, I haven't looked at the actual pixel data to find out how finely graded they are.
    What I'm looking at is what Bridge 5.1 (Windows XP) says about the files in the Metadata / Bit depth entry.
    In that, the .NEF files are listed as "16-bit" depth (although they actually hold only 12-bit resolution), but when converted to .DNG it says only "10-bit", and that holds both when the conversion is done automatically during importing from the camera and when converting from .nef files afterwards.
    Archiving pictures in the .dng format seems to be a good idea -- but only if no information is lost in the conversion. Thus, the "10-bit" info showing in Bridge worries me.
    Might it be that the meaning of bit depth is different in the two file formats?
    Might there be something about the de-mosaicing that necessarily consumes two bits of depth?   Whether in the .dng conversion -- or when saved .nef files are later used?
    In other words, for practical purposes, are the formats equivalent in color resolution, or is there indeed a certain loss?
    Maybe a very difficult question, but I'd sure want to have a technically definite answer before I dare switch to using the .DNG format all the way.
    Sverk

  • Color Space and Bit Depth - What Makes Sense?

    I'm constantly confused about which color space and bit depth to choose for various things.
    Examples:
    - Does it make any sense to choose sRGB and 16-bits? (I thought sRGB was 8-bit by nature, no?)
    - Likewise for AdobeRGB - are the upper 8-bits empty if you use 16-bits?
    - What is the relationship between Nikon AdobeWide RGB, and AdobeRGB? - if a software supports one, will it support the other?
    - ProPhoto/8-bits - is there ever a reason?...
    I could go on, but I think you get the idea...
    Any help?
    Rob

    So, it does not really make sense to use ProPhoto/8 for output (or for anything else, I guess(?)), even if it's supported, since it is optimized for an extended gamut, and if your output device does not encompass the gamut, then you've lost something since your bits will be spread thinner in the "most important" colors.
    Correct, you do not want to do prophotoRGB 8-bit anything. It is very easy to get posterization with it. Coincidentally, if you print from Lightroom, let the driver manage, and do not check 16-bit output, Lightroom outputs prophotoRGB 8-bits to the driver. This is rather annoying as it is very easy to get posterized prints this way.
    It seems that AdobeRGB has been optimized more for "important" colors and so if you have to scrunch down into an 8-bit jpeg, then its the best choice if supported - same would hold true for an 8-bit tif I would think (?)
    Correct on both counts. If there is color management and you go 8 bits, adobeRGB is a good choice. This is only really true for print targets though, as adobeRGB encompasses more of a typical CMYK gamut than sRGB. For display targets such as the web you will be better off always using sRGB, as 99% of displays are closer to that and so you don't gain anything. Also, 80% of web browsers are still not color managed.
    On a theoretical note: I still don't understand why if image data is 12 or 14 bits and the image format uses 16 bits, why there has to be a boundary drawn around the gamut representation. But for practical purposes, maybe it doesn't really matter.
    Do realize that the original image in 12 to 14 bits is in linear gamma, as that is how the sensor reacts to light. However, formats for display are always gamma corrected for efficiency, because the human eye reacts non-linearly to light and because typical displays have a gamma power-law response of brightness/darkness. Lightroom internally uses a 16-bit linear space. This is more bits than the 12 or 14 bits simply to avoid aliasing errors and other numeric errors. Similarly, the working space is chosen larger than the gamut cameras can capture in order to have some overhead that allows for flexibility and avoids blowing out in intermediary stages of the processing pipeline. You have to choose something, and so prophotoRGB, one of the widest RGB spaces out there, is used. This is explained quite well here.
    - Is there any reason not to standardize 8-bit tif or jpg files on AdobeRGB and leave sRGB for the rare cases when legacy support is more important than color integrity?
    Actually legacy issues are rampant. Even now, color management is very spotty, even in shops oriented towards professionals. Also, arguably the largest destination for digital file output, the web, is almost not color managed. sRGB remains king unfortunately. It could be so much better if everybody used Safari or Firefox, but that clearly is not the case yet.
    - And standardize 16 bit formats on the widest gamut supported by whatever you're doing with it? - ProPhoto for editing, and maybe whatever gamut is recommended by other software or hardware vendors for special purposes...
    Yes, if you go 16 bits, there is no point not doing prophotoRGB.
    Personally, all my web photos are presented through Flash, which supports AdobeRGB even if the browser proper does not. So I don't have legacy browsers to worry about myself.
    Flash only supports non-sRGB images if you have enabled it yourself. NONE of the included flash templates in Lightroom for example enable it.
    that IE was the last browser to be upgraded for colorspace support (ie9)
    AFAIK (I don't do windows, so I have not tested IE9 myself), IE 9 still is not color managed. The only thing it does is when it encounters a jpeg with a ICC profile different than sRGB is translate it to sRGB and send that to the monitor without using the monitor profile. That is not color management at all. It is rather useless and completely contrary to what Microsoft themselves said many years ago well behaved browsers should do. It is also contrary to all of Windows 7 included utilities for image display. Really weird! Wide gamut displays are becoming more and more prevalent and this is backwards. Even if IE9 does this halfassed color transform, you can still not standardize on adobeRGB as it will take years for IE versions to really switch over. Many people still use IE6 and only recently has my website's access switched over to mostly IE8. Don't hold your breath for this.
    Amazingly, in 2010, the only correctly color managed browser on windows is still Safari as Firefox doesn't support v4 icc monitor profiles and IE9 doesn't color manage at all except for translating between spaces to sRGB which is not very useful. Chrome can be made to color manage on windows apparently with a command line switch. On Macs the situation is better since Safari, Chrome (only correctly on 10.6) and Firefox (only with v2 ICC monitor profiles) all color manage. However, on mobile platforms, not a single browser color manages!

  • Bit Depth and Bit Rate

    I have a pre-recorded mp3 VO. I placed it into a track bed in GB. The client wants a compressed audio file with bit depth: 16-bit and bit rate: 128kbps max, but recommends 96kbps. If I need to adjust the bit depth and bit rate, can I do it in GB? And if so, where? Thanks for any help.

    Please be aware that Bit Depth and Bit Rate are two completely different things!
    They belong to a group of buzz words that belong to Digital Audio and that is the field we are dealing with when using GarageBand or any other DAW. Some of those terms pop up even in iTunes.
    Digital Audio
    To better understand what they are and what they mean, here is a little background information.
    Whenever dealing with Digital Audio, you have to be aware of two steps, that convert an analog audio signal into a digital audio signal. These magic black boxes are called ADC (Analog Digital Converter) and “on the way back”, DAC (Digital Analog Converter).
    Step One: Sampling
    The analog audio (in the form of an electric signal, like from an electric guitar) is represented by a waveform. The electric signal (voltage) changes up and down in a specific shape that represents the “sound” of the audio signal. While the audio signal is “playing”, the converter measures the voltage at regular intervals. These measurements are like “snapshots”, or samples, taken at specific times. The time interval is determined by a “Rate”, which tells you how often per second something happens. The unit is Hertz [Hz], defined as “how often per second”, or “1/s”. A Sample Rate of 48kHz means that the converter takes 48,000 Samples per second.
    Step Two: Quantize (or digitize)
    All these Samples are still analog values, for example 1.6 Volt, -0.3 Volt, etc. But each analog value now has to be converted into a digital form of 1s and 0s. This is done similarly to quantizing a note in GarageBand. The value (i.e. the note) cannot have just any position; it has to be placed on a grid with specific values (i.e. 1/16 notes). The converter does a similar thing. It provides a grid of available numbers that the measured Sample has to be rounded to (like when a note gets shifted in GarageBand by the quantize command). The size of this grid, the amount of available numbers, is called the Bit Depth. Other terms like Resolution or Sample Size are also used. A Bit Depth of 16 bits allows for 65,536 possible values.
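    The two steps can be sketched numerically (the sine source, rate, and depth below are just illustrative values):

```python
import math

SAMPLE_RATE = 48_000                     # step one: snapshots per second
BIT_DEPTH = 16                           # step two: the grid values snap to
FULL_SCALE = 2 ** (BIT_DEPTH - 1) - 1    # 32767 for 16 bit

def record(freq_hz, n_samples):
    """Sample a full-scale sine wave, then round each sample onto the grid."""
    out = []
    for n in range(n_samples):
        analog = math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)  # sampling
        out.append(round(analog * FULL_SCALE))                      # quantizing
    return out

samples = record(440, 4)
print(samples)   # four integers, each somewhere on the 65,536-value grid
```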
    So the two parameters that describe the quality of a Digital Audio Signal are the Sample Rate (“how often”) and the Bit Depth (“how fine a resolution”). The very simplified rule of thumb is: the higher the Sample Rate, the higher the possible frequency, and the higher the Bit Depth, the greater the possible dynamic range.
    Uncompressed Digital Audio vs. Compressed Digital Audio
    So far I haven’t mentioned the “Bit Rate” yet. There is a simple formula that describes the Bit Rate as the product of Sample Rate and Bit Depth: Sample Rate * Bit Depth = Bit Rate. However, the Bit Rate, and how it is used (and often misused and misunderstood), has to do with Compressed Digital Audio.
    Compressed Digital Audio
    First of all, this has nothing to do with a compressor plugin that you use in GarageBand. When talking about compressed digital audio, we talk about data compression. This is a special form how to encode data to make the size of the data set smaller. This is the fascinating field of “perceptual coding” that uses psychoacoustic models to achieve that data compression. Some smart scientists found out that you can throw away some data in a digital audio signal and you wouldn’t even notice it, the audio would still sound the same (or almost the same). This is similar to a movie set. If you shoot a scene on a street, then you only need the facade of the buildings and not necessary the whole building.
    Although the Sample Rate is also a parameter of compressed digital audio, the Bit Depth is not. Instead, the Bit Rate is used here. The Bit Rate tells the encoder the maximum amount of bits it can produce per second. This determines how much data it has to throw away in order to stay inside that limit. An mp3 file (which is a compressed audio format) with a Bit Rate of 128kbit/s delivers decent audio quality. Raising the Bit Rate to 256kbit/s would increase the sound quality. AAC (which is technically an mp4 format) uses a better encoding algorithm. If this encoder is set to 128kbit/s, it produces better audio quality because it is smarter about which bits to throw away and which ones to keep.
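    Plugging numbers into the formula above (with a channel count added, which the simple formula omits) shows why data compression is attractive in the first place:

```python
# Uncompressed bit rate: Sample Rate * Bit Depth * Channels
sample_rate = 44_100             # Hz, CD audio
bit_depth = 16                   # bits per sample
channels = 2                     # stereo

bit_rate = sample_rate * bit_depth * channels
print(bit_rate)                  # 1411200 bit/s, i.e. ~1411 kbit/s
print(round(bit_rate / 128_000, 1))   # 11.0x the data of a 128 kbit/s mp3
```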
    Conclusion
    Whenever you are dealing with uncompressed audio (aiff, wav), the two quality parameters are Sample Rate [kHz] and Bit Depth [bit] (aka Resolution, aka Sample Size)
    Whenever you are dealing with compressed audio (mp3, AAC), the two quality parameters are Sample Rate [kHz] and Bit Rate [kbit/s]
    If you look at the Export Dialog Window in GarageBand, you can see that the Quality popup menu is different for mp3/AAC and AIFF. Hopefully you will now understand why.
    Hope that helps
    Edgar Rothermich
    http://DingDingMusic.com/Manuals/
    'I may receive some form of compensation, financial or otherwise, from my recommendation or link.'

  • Maximum audio sample rate and bit depth question

    Anyone worked out what the maximum sample rates and bit depths AppleTV can output are?
    I'm digitising some old LPs and while I suspect I can get away with a 48kHz sample rate and 16-bit depth, I'm not sure about a 96kHz sample rate or 24-bit resolution.
    If I import recordings as AIFFs or WAVs to iTunes it shows the recording parameters in iTunes, but my old Yamaha processor which accepts PCM doesn't show the source data values, though I know it can handle 96kHz 24bit from DVD audio.
    It takes no more time recording at any available sample rates or bit depths, so I might as well maximise an album's recording quality for archiving to DVD/posterity as I only want to do each LP once!
    If AppleTV downsamples however there wouldn't be much point streaming higher rates.
    I wonder how many people out there stream uncompressed audio to AppleTV? With external drives which will hold several hundred uncompressed CD albums is there any good reason not to these days when you are playing back via your hi-fi? (I confess most of my music is in MP3 format just because i haven't got round to ripping again uncompressed for AppleTV).
    No doubt there'll be a deluge of comments saying that recording LPs at high quality settings is a waste of time, but some of us still prefer the sound of vinyl over CD...
    AC

    I guess the answer to this question relies on someone having an external digital amp/decoder/processor that can display the source sample rate and bit depth during playback, together with some suitable 'demo' files.
    AC

  • Can I change the bit depth on images in pdf files?

    I have a lot of pdf files that were scanned in 24 bit colour. I'd like to convert some of them to greyscale or black and white, and reduce the resolution to make them smaller.
    I can see how to reduce the resolution with Save As Other/Optimized PDF, but there are no options there to reduce bit depth. Is there any way to do this?

    Thanks, I think I've worked out how to use them. I found a fixup called "Convert color to B/W", but it seems to convert to greyscale, not black and white.
    I found this page describing how to convert to both greyscale and monochrome. It says the only way to do monochrome is to convert to tiff first:
    http://blogs.adobe.com/acrolaw/2009/10/converting-color-pdf-to-greyscale-pdf-an-update/
    If that's the case then Acrobat Pro isn't going to help me, but that was written in 2009. Does anyone know if true black and white conversion has been made available since then?

  • PS-CC 2014 (latest) - changing 32-bit depth file to 16-bit

    I opened a file, happened to be 32-bit depth (I don't have too many, not even sure how that one got to be 32-bit), and because a lot of filters don't work with that, I changed it to 16-bit, but when I did that, the HDR toning dialogue appeared, and I HAD to click OK on it to get the image to convert to 16-bit (CANCEL on the dialogue leaves it at 32-bit). So you have to choose a METHOD that you can set so there's no change to the image. Weird & wrong . . .

    Sorry, no solution to the problem, but a confirmation. I do have the same problem (Intuos Pro and Pen & Touch) showing the problem with ACR 8.5 and Photoshop (2014, CC and 6, these were updated with ACR 8.5) . No such problem before ACR 8.5 and no problem with LR 5.5 (also containing ACR 8.5).
    I hope there will be a solution from Adobe soon, since it seems to be caused by the ACR update.
    Windows 8.1 in my case and latest Wacom Intuos driver installed.
    Thomas

  • Bit depth question

    Hello,
    My audio device can sample up to 24-bit. When I create a 32-bit file in Audition 2.0 and record material using the audio device, Audition tells me the file is 32-bit, and it is indeed twice the size of an equivalent 16-bit file. But is it really a 32-bit file, and could there be any issues with the file? It seems fine in every way.
    Thank you.

    No audio hardware actually samples at greater than 24-bit, because there's absolutely no point - even 24-bit depth isn't actually usable in full; a system like this could in theory digitise a noise floor way lower than can be physically achieved by any mic and preamp system available - you'd need at least liquid nitrogen cooling of all the components before you even started to look at the rest of the problems!
    So why does Audition record in 32-bit? Well, 32-bit Floating Point digitising is a bit different. The actual 24-bit signal is recorded quite faithfully (although not quite in the form of an integer signal) and the other 8 bits are essentially left as zeros during recording. What they actually are is scaling bits. And this comes in seriously useful when processing. What it means is that your original signal can be scaled up and down without loss. In an integer engine, if you decided to throw away 30dB of a signal, saved the result and reopened the file and amplified it again, you'd find that your 24-bit signal was effectively 19-bit. In Audition, if you did exactly the same thing with a 32-bit Floating Point signal, you wouldn't lose any bit depth at all. No it's not magic - it's just the effect of storing the original data in a form that inherently doesn't get modified when an amplitude change is asked for - it's only the scaling system that does, and this doesn't contain audio data.
    So yes it's a real 32-bit signal - but not all of those 32 bits are used until you do some processing.
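    The integer-versus-float difference described above is easy to demonstrate. A minimal Python sketch (NumPy supplies a true 32-bit float type; the post's -30 dB is approximated here by a factor of 1/32, about -30.1 dB, so that the float path changes only the exponent "scaling" bits - the sample value is arbitrary):

    ```python
    import numpy as np

    sample = 8_000_001          # an arbitrary 24-bit sample (full scale is 2**23 - 1)
    gain = 1 / 32               # about -30.1 dB

    # Integer engine: every gain stage rounds back to an integer,
    # so attenuating by ~30 dB discards the bottom 5 bits for good.
    down_int = round(sample * gain)
    up_int = round(down_int / gain)
    print(up_int)               # 8000000 -- the original low bits are gone

    # 32-bit float engine: 1/32 is a power of two, so only the exponent
    # ("scaling") bits change; the 24-bit mantissa holding the audio is untouched.
    down_f32 = np.float32(sample) * np.float32(gain)
    up_f32 = down_f32 / np.float32(gain)
    print(int(up_f32))          # 8000001 -- recovered bit-exactly
    ```

    This is also where the "effectively 19-bit" figure comes from: each bit is worth about 6 dB, so a 30 dB cut in an integer engine costs roughly 5 of the 24 bits.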

  • PSD keeps saving image as 32 bit depth (not 24)

    I may be using some incorrect terminology/calculations.
    I have some third-party presentation software that requires all images be 8-bit. So I used Photoshop for all photo editing and made sure my PSDs were 8-bit before exporting stills (PNGs) for my presentation. 50+ PNGs were successfully created this way, but I'm having problems with a single file. When I open the correct files in Windows Picture and Fax Viewer, right-click, show Properties, and go to Summary (Advanced), all the correct files have a bit depth listed as 24.
    My problematic PSD says it's in 8-bit, but the PNGs that I create from it are consistently listed as having 32-bit depth in Windows Picture and Fax Viewer. As a test I exported other image formats (TIFFs, JPEGs, etc.) and they all came out as 32-bit depth. How do I fix this? I tried opening the PSD, saving it under a different name, and switching to 16-bit and then back to 8-bit; it still didn't work.

    I don't think I have an alpha channel. The workflow to create this image is identical to all other images (none of them have alphas). Any way I can check whether an alpha inadvertently was included?
    Thanks for also explaining the math behind 8 bit -> 24 bit lol.
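    For reference, one way to confirm whether an alpha channel slipped in is to read the PNG header directly: the colour type byte in the IHDR chunk says how many 8-bit channels the file has. A stdlib-only Python sketch (`slide.png` is a hypothetical filename):

    ```python
    def png_bit_depth(path):
        """Report total bits per pixel from a PNG's IHDR chunk.

        After the 8-byte signature and the chunk length/type fields, the
        IHDR chunk stores bits-per-channel at byte 24 and colour type at
        byte 25. Colour type 2 = RGB (3 channels), 6 = RGBA (4 channels).
        """
        with open(path, "rb") as f:
            header = f.read(26)
        bits_per_channel, colour_type = header[24], header[25]
        channels = {0: 1, 2: 3, 3: 1, 4: 2, 6: 4}[colour_type]
        return bits_per_channel * channels

    # print(png_bit_depth("slide.png"))   # 24 = plain RGB, 32 = alpha present
    ```

    If it reports 32, re-exporting through any tool that can discard the alpha channel (for example Pillow's `img.convert("RGB")`) brings it back to 24, which is the math in question: 8 bits x 3 channels = 24, plus an 8-bit alpha = 32.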

  • Turning on Render at Maximum Bit Depth and Maximum Render Quality crashes render every time

    I've tried a few times to render an H264 version of my Red media project with Maximum Bit Depth and Maximum Render Quality.  Premiere crashes every time.  I have GPUs enabled. Are people using these settings with Red media and successfully rendering?

    To answer your specific question: did you see the tooltip?
    I believe it allows for 32-bit processing (16-bit if unchecked). Per the project settings help file at http://helpx.adobe.com/premiere-elements/using/project-settings-presets.html
    Maximum Bit Depth
    Allows Premiere Elements to use up to 32‑bit processing, even if the project uses a lower bit depth. Selecting this option increases precision but decreases performance.
    The help file for export is somewhat less informative about what it actually does but does point out that it is the color bit depth - http://helpx.adobe.com/media-encoder/using/encode-export-video-audio.html
    (Optional) Select Use Maximum Render Quality or Render At Maximum Bit Depth. Note:  Rendering at a higher color bit depth requires more RAM and slows rendering substantially.
    In practice the simplest suggestion is to export twice - once with / once without the setting and compare the time taken and perceived quality.
    Cheers,
    Neale

  • 16 bit depth photo restoration, older version of Photoshop.

    I use an older version of Photoshop. It is able to import and read a 16-bit-depth file. Though it is limited in what it can do at this bit depth, it can apply Levels and Curves adjustments to an image. I want to start with the best-quality scan for photo restoration in my older Photoshop. I won't be able to import the file directly from the scanner with my older Photoshop. If I scan a photo as a 16-bit, 600 ppi image, I'm afraid color information will be lost when I open it in the older Photoshop. Is there any way I can open and save such a file without losing all that good color information? I know I would need to save it in a format that supports 16-bit depth, like PNG rather than JPEG.

    Not exactly sure how Image Capture works on Lion, but I believe I read that since OS X 10.5 it should scan in 16 bits/channel.
    Are you able to use the software that came with the scanner instead of Image Capture? Alternatively, you may need to update your scanner software, or check the preference in the Image Capture scanner preferences to use TWAIN software when possible.
    Or use scanner software like VueScan, which is much better than most software that ships with scanners.
    http://www.hamrick.com/
    Anyway, if the scan is saved by the scanner as a 16 bits/channel TIFF to a folder on your hard drive, then PS7 should open it as such.
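    If you want to verify that the scanner really wrote a 16 bits/channel file before trusting it to the older Photoshop, the TIFF header can be inspected directly. A stdlib-only Python sketch, covering the simple case where the BitsPerSample tag (258) holds a single inline value, as in a grayscale scan (the function name and approach are just an illustration, not any particular tool's API):

    ```python
    import struct

    def tiff_bits_per_sample(path):
        """Read tag 258 (BitsPerSample) from a TIFF's first IFD."""
        with open(path, "rb") as f:
            data = f.read()
        endian = "<" if data[:2] == b"II" else ">"   # little- vs big-endian TIFF
        (ifd_offset,) = struct.unpack(endian + "I", data[4:8])
        (n_entries,) = struct.unpack(endian + "H", data[ifd_offset:ifd_offset + 2])
        for i in range(n_entries):
            start = ifd_offset + 2 + 12 * i          # each IFD entry is 12 bytes
            entry = data[start:start + 12]
            tag, typ, count = struct.unpack(endian + "HHI", entry[:8])
            if tag == 258 and count == 1:
                return struct.unpack(endian + "H", entry[8:10])[0]
        return None   # RGB files store one value per channel at an offset instead
    ```

    A value of 16 confirms the scanner delivered 16 bits/channel; 8 means the extra precision was already thrown away before the file reached Photoshop.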
