DNG 1.4 Lossy Compression Bit Depth

Hi,
The SDK has not been released yet, but I'm wondering: is the DNG lossy compression bit depth converted to 8-bit, converted to something else, or does it keep the original bit depth?
I ask because I mainly use DNG as the format for my 35mm scanned pictures. I did a short test using the lossy compression and it seems impressive: my 185 MB file is reduced to 10.8 MB, and there is only a slight loss. However, since I often do radical color correction in Lightroom, with a lot of tweaking in the shadows, I just want to make sure I won't suddenly find my corrections being applied to 8-bit data and get a posterized result.
Thanks.

The spec and SDK are available here:
http://www.adobe.com/dng
Yes, the lossy compressed images are 8-bit. Whether or not you'll get posterization will depend on the circumstances of the original capture, as well as how strong a correction you apply. Since, as you say, you tend to make strong color corrections, you'll simply need to do some more testing to determine whether there will be any problems. Unfortunately, I can't think of any rules of thumb or shortcuts other than direct evaluation with your own eyes.
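If you want a quick feel for why heavy shadow work is the risky case, here is a minimal NumPy sketch (a toy model, not Lightroom's actual processing) of what a strong shadow boost does to data quantized at 8-bit versus 16-bit:

```python
import numpy as np

# A smooth deep-shadow gradient, stored at 8-bit vs 16-bit precision,
# then pushed hard (4x gain) the way an aggressive shadow slider would.
gradient = np.linspace(0.0, 0.05, 10_000)      # deep-shadow tones in 0..1

as8  = np.round(gradient * 255) / 255          # quantized to 8-bit steps
as16 = np.round(gradient * 65535) / 65535      # quantized to 16-bit steps

boost8  = np.clip(as8 * 4, 0, 1)
boost16 = np.clip(as16 * 4, 0, 1)

# Few distinct output levels means visible banding (posterization).
print(len(np.unique(boost8)))                  # a handful of levels
print(len(np.unique(boost16)))                 # thousands of levels
```

The 8-bit version comes out of the boost with only a dozen or so distinct tones, which is exactly the posterization risk described above; whether it is visible in a real scan depends on the image content.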

Similar Messages

  • Why only 10-bit depth dng files from 16-bit Nikon D90 nef files?

    When I convert 16-bit .nef files from my Nikon D90 to DNG I get only 10-bits depth.
    Since the camera should be producing 12-bit depth it seems I am losing information in the conversion, and I don't want that.
    I have installed the 7.1 DNG Converter, and I suppose that is what is used when I download from the camera memory card through Bridge 5.1 and click DNG conversion.
    The same thing happens if I open the .nef in Photoshop 5.1, which brings up the Camera Raw converter 6.7.0.339.
    Why is this?
    Can't .dng have more than 10-bit depth?
    Sverk

    Well, according to the user manual and to the review in
    http://www.imaging-resource.com/PRODS/D90/D90A.HTM
    the D90 delivers 12-bit color depth in the .NEF files.
    Of course, I haven't looked at the actual pixel data to find out how finely graded they are.
    What I'm looking at is what Bridge 5.1 (WindowsXP) says about the files in the
    Metadata/ Bit depth entry. 
    In that, the .NEF files are listed as "16-bit" depth (although they actually hold only 12-bit resolution), but when converted to .DNG it says only "10-bit",
    and that holds both when the conversion is done automatically during the importing from the camera, and when converting from .nef files afterwards.
    Archiving pictures in the .dng format seems to be a good idea -- but only if no information is lost in the conversion.
    Thus, the "10-bit" info showing in Bridge worries me.
    Might it be that the meaning of bit depth is different in the two file formats?
    Might there be something about the de-mosaicing that necessarily consumes two bits of depth?   Whether in the .dng conversion -- or when saved .nef files are later to be used?
    In other words, for practical purposes, are the formats equivalent in color resolution,
    Or is there indeed a certain loss?
    Maybe a very difficult question, but I'd sure want a technically definite answer before I dare switch to using the .DNG format all the way.
    Sverk

  • Import RAW files as lossy compressed DNG

    In LR4.2, I would like to be able to import my NEF files as lossy compressed DNGs. That option is not available in preferences, so I have to do the import to DNG, then invoke the "Convert to DNG" in the library menu where I can select lossy compression. Why not make that option available on import so I don't have to perform the second step?

    JimHess wrote:
    Do you really want to import compressed DNG files? They are not really raw files anymore, and they are reduced to 8 bits. That doesn't seem to be a good choice for master images. But the choice is yours, of course.
    To be clear, the default settings used for 'Copy as DNG' *do* compress raw data, but without loss (i.e. lossless, not lossy).
    I know you knew this Jim, but maybe another reader is not so clear...
    PS - I really like the new lossy compressed DNG technology - files behave like raw in terms of editing (white-balance, camera-profile, h/s recovery...), but are much smaller. As long as one realizes that the data will suffer loss, and be pared down to 8-bits (which isn't as bad as it sounds, since it does NOT use the same linear encoding scheme as raw data), then it can be a great option during import, if you don't plan on making big prints...
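    The point about the non-linear encoding can be sketched quickly. A plain 1/2.2 power curve is used below as a stand-in for the tone curve (the curve lossy DNG actually uses differs in detail, but is similarly shadow-weighted):

```python
import numpy as np

# How many of the 256 available 8-bit codes land in the deepest shadows
# (bottom 1% of linear scene values) under linear vs. gamma encoding?
linear = np.linspace(0, 1, 100_000)
shadows = linear[linear <= 0.01]

linear_codes = np.unique(np.round(shadows * 255))
gamma_codes  = np.unique(np.round(shadows ** (1 / 2.2) * 255))

print(len(linear_codes))   # only a few codes cover the shadows
print(len(gamma_codes))    # several times more codes for the same range
```

    In other words, a non-linear 8-bit encoding spends far more of its codes on the shadows than a linear one would, which is why 8-bit lossy DNG holds up better than "8-bit" sounds.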

  • DNG lossy compressed - the workflow side

    The technical side of DNG has already been discussed in other threads. I want to focus on the workflow side here. Viktoria and I have talked about this previously but I wanted to bring it into this forum.
    The task is to use lossy compressed DNG files for (way faster) uploading to a service. The service works on the files and sends back a new Lightroom catalog. What we were hoping for was the possibility to import the new settings from that returned catalog and have them applied to the RAW files in the local source catalog.
    Victoria thought she had tried it once and it worked, but I was not able to confirm this. When I use "Import from Another Catalog", this is what I end up with after importing the altered DNGs: they now sit side by side in the catalog, the original, unaltered NEFs and the altered (black & white) DNGs.
    Also, the field "Changed Existing Photos" in the import dialog was greyed out and said "(none found)". Which is actually kind of expected behavior, isn't it? If you import the same image in a different image format (NEF vs. DNG), then Lightroom keeps both as individual files.
    Is the necessary workflow for this to use DNG locally as well and ditch the NEFs? Can someone help and elaborate on this functionality?

    Yes, and it might be a good reason for switching to a DNG workflow. If the DNGs and NEFs have matching file names, you could use my Syncomatic plugin to copy most adjustments.

  • Bit-depth of RAW/DNG files in Lightoom 5

    According to the Canon website (Canon U.S.A. : Support & Drivers : EOS 5D Mark II), my EOS 5D Mark II produces a 14-bit RAW file, yet when I bring it into LR5 (as a .dng), the exposure slider goes only from -5 to +5 (10 stops); why doesn't it go from -7 (or -8) to +7 (+8)?

    PediEyeDoc wrote:
    According to the Canon website (Canon U.S.A. : Support & Drivers : EOS 5D Mark II), my EOS 5D Mark II produces a 14-bit RAW file, yet when I bring it into LR5 (as a .dng), the exposure slider goes only from -5 to +5 (10 stops); why doesn't it go from -7 (or -8) to +7 (+8)?
    Bit depth has nothing to do with dynamic range, assuming that's what you're referring to with the +/- 'stops'.
    There are very few true 16-bit capture devices that we can afford. Adobe simply refers to anything higher than 8 bits per color as "16-bit", even when it is not. It's pointless to distinguish 12-bit vs. 14-bit and so on; the real factor is: do we have more than 8 bits per color to edit with?**
    In fact, even Photoshop doesn't use 16-bit but rather 15+1 bits, and has since day one; LR is probably the same, not that it matters.
    **The Bit-Depth Decision | DigitalPhotoPro.com

  • Converting DNG to Lossy Compression DNG files in LR4. SWP file?

    I am trying to convert some DNG files that were created in LR3 to lossy compression in LR4. LR4 creates an SWP file and the DNG file is not compressed. What is the SWP file, and what do I need to do to correctly compress the DNG files?

    The SWP file is a temporary file. The conversion can take some time; after the file is successfully converted, the SWP file is removed.

  • Can't import a JPEG: "video bit depth of this file is unsupported"

    Re the above error message when attempting to import into a Premiere CS6 project. I used MediaInfo to inspect the file's properties, as shown below. All of the video in my project is 8-bit. Any help is much appreciated.
    Image
    Format                                     : JPEG
    Width                                        : 1 306 pixels
    Height                                       : 979 pixels
    Color space                             : YUV
    Bit depth                                   : 8 bits
    Compression mode                : Lossy
    Stream size                              : 2.83 MiB (100%)

    Hi Jeff, thanks!!
    I guess one cannot import a CMYK image into Premiere at all.
    When I inspected the image in MediaInfo it identified it as YUV, so I assumed the image had been converted for use in a video they'd created sometime in the past.
    Is there even such a format as YUV for still images? Probably not, even though MediaInfo identified it as such.
    Anyway, Photoshop ID'd it as CMYK and when converted to RGB that did the trick! 
    Thanks!

  • Sony - when will you step up to fix A7r and lossy compression?

    Sony - Nikon was sued over its D600 issues. Why not do the right thing and address the A7r and lossy compression? A number of experts are discussing the A7r shutter shock and Sony's lossy compression scheme, and there has not yet been a response from Sony. Sony advertises that its cameras provide 14-bit color, but judging by file sizes the Sony compression algorithm never delivers more than 8 bits of color depth. This is deceptive and may amount to fraud. Users want a camera that at the least provides an option for TRUE 14-bit color depth (even at the expense of frames per second) and a software update that eliminates shooting in the shutter range where shutter shock and blur are an issue. http://www.josephholmes.com/news-sonya7rshuttershake.html http://blog.kasson.com/?p=4674 http://diglloyd.com/blog/2014/20140212_2-SonyA7-RawDigger-posterization.html http://diglloyd.com/blog/2014/20140116_1-Sony-A7R-shutter-vibration.html

    I'm interested in the upcoming A7S. Before releasing it, I hope this time you make sure: 1) There are no light leaks of any kind whatsoever. 2) You implement the same kind of anti-shutter-shock 2s delay that the Olympus OM-D E-M5 has in firmware, if you can't fix the shutter vibrations mechanically. 3) You provide a way to store truly lossless raw (should be easier with the smaller 12 MP sensor, maybe..?). Just ask some of the professionals who have whined about these issues to have a go with the A7S before release, to make sure it doesn't have these or other similar issues that hurt your reputation after release. Tnx

  • Not working in collection: "is not" & "Digital Negative/Lossy compressed"

    Hello guys,
    I have a smart collection in LR4. The collection contains 200 pictures. I want to search inside the collection for all files which are NOT lossy compressed (like raw, JPEG, DNG lossless compressed, etc.). If I set the rule "is not" "Digital Negative/Lossy compressed", I get all the files in the catalog. If I set the rule "is" "Digital Negative/Lossless", the result is the DNG/Lossless files. The first rule does not seem to be working. The same goes for Fast Load Data, which seems to work for some files and not for others. Can anyone explain this, or is it a bug?
    Boris


  • Final cut pro millions of colours + bit depth question

    Hello
    I am working in Final Cut Pro 7 and I wanted to know the maximum bit depth I can export using the ProRes codec. All I see in the compression settings when rendering my timeline with ProRes 4444 is the option for 'Millions of Colors' and 'Millions of Colors +'. I was under the impression that 'Millions of Colors' referred to 8-bit. Does the alpha channel mean I can get 10-bit? Can the alpha channel hold 2 more bits per channel or something? Or is there no way I can export a 10-bit file using the ProRes codec within FCP7 - is it all just 8-bit? And when I select 422 HQ there are no advanced options for millions of colors; what does this mean? Is the only way to get 10-bit out of FCP7 to render with the 10-bit uncompressed codec? And if so, can I render the timeline in ProRes while I'm working with it, then delete all the renders and change the render codec to 10-bit uncompressed? Will this properly give me 10-bit from the original 4444 12-bit files I imported in the beginning?
    Any help is much appreciated

    ProRes is 10-bit. Every ProRes codec is 10-bit... LT, 422, HQ. Not one of them is 8-bit. Except for ProRes 4444... that's 12-bit.

  • How to view resolution (ppi/dpi) and bit depth of an image

    Hello,
    how can I check the native resolution (ppi/dpi) and bit depth of my image files (jpeg, dng and pef)?
    If it is not possible in Lightroom, is there a free Mac app that makes this possible?
    Thank you in advance!

    I have used several different cameras, which probably have different native bit depths. I assume that Lr converts all RAW files to 16 bits, but the original/native bit depth still affects the quality, right? Therefore, it would be nice to be able to check the native bit depth of an image and e.g. compare it to an image with a different native bit depth.
    I know a little detective work would solve the issue, but it would be more convenient to be able to view native bit depth in Lightroom, especially when dealing with multiple cameras, some of which might have the option to use different bit depths, which would make the matter significantly harder.
    This issue is certainly not critical and doesn't fit into my actual workflow. As I stated in a previous post, I am simply curious and want to learn, and I believe that being able to compare images with different bit depths conveniently would be beneficial to my learning process.
    Anyway, I was simply checking if somebody happened to know a way to view bit depth in Lr4, but I take it that it is not possible, and I can certainly live with that.
    Check the specifications of your camera to know at what bit depth it writes Raw files. If you have a camera in which the Raw bit depth can be changed the setting will probably be recorded in a section of the metadata called the Maker Notes (I don't believe the EXIF standard includes a field for this information). At any rate, LR displays only a small percentage of the EXIF data (only the most relevant fields) and none of the Maker Notes. To see a fuller elucidation of the metadata you will need a comprehensive EXIF reader like ExifTool.
    However, the choices nowadays are usually 12 bit or 14 bit. I can assure you that you cannot visually see any difference between them, because both depths provide a multiplicity of possible tonal levels that is far beyond the limits of human vision - 4,096 levels for 12 bit and 16,384 for 14 bit. Even an 8 bit image with its (seemingly) paltry 256 possible levels is beyond the roughly 200 levels the eye can perceive. And as has been said, LR's internal calculations are done to 16 bit precision no matter what the input depth (although your monitor is probably not displaying the previews at more than 8 bit depth) and at export the RGB image can be written to a tiff or psd in 16 bit notation. The greater depth of 14 bit Raws can possibly (although not necessarily) act as a vehicle for greater DR which might be discerned as less noise in the darkest shadows, but this is not guaranteed and applies to only a few cameras.
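    The level counts quoted above all come from the same formula: 2 raised to the bit depth.

```python
# Possible tonal levels per channel at a given bit depth: 2**bits.
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} levels")
```

    This reproduces the 256, 4,096, 16,384, and 65,536 figures discussed above.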

  • Bit Depth and Render Quality

    When you finally export media to some media format via the encoder, do the project's preview Bit Depth and Render Quality settings affect the output file?
    I know there is a "Use Preview Files" setting in the media export dialog, but I just want to be sure of what I am doing.

    Jeff's response is my perspective, as well, which is both backed up by my own tests and the official Adobe word.
    Exhibit A: My Tests
    That is DV footage with a title superimposed over it in a DV sequence, with a Gaussian blur effect (the Premiere accelerated one) applied to the title; all samples are from that sequence exported back to DV. This was to show the relative differences of processing between software and hardware MPE, Premiere export and AME queueing, and the effect of the Maximum Bit Depth and Maximum Render Quality options on export (not the sequence settings; those have no bearing on export).
    The "blooming" evident in the GPU exports is due to hardware MPE's linear color processing. I think it's ugly, but that's not the point here. Further down the line, you can see the effect of Maximum Bit Depth (and MRQ) on both software MPE and hardware MPE. I assume you can see the difference between the Maximum Bit Depth-enabled export and the one without. Bear in mind that this is 8-bit DV footage composited and "effected" and exported back to 8-bit DV. I don't understand what your "padding with zeroes" and larger file size argument is motivated by--my source files and destination files are the same size due to the DV codec--but it's plainly clear that Maximum Bit Depth has a significant impact on output quality. Similar results would likely be evident if I used any of the other 32-bit enabled effects; many of the color correction filters are 32-bit, and should exhibit less banding, even on something 8-bit like DV.
    Exhibit B: The Adobe Word
    This is extracted from Karl Soule's blog post, Understanding Color Processing: 8-bit, 10-bit, 32-bit, and more. This section comes from Adobe engineer Steve Hoeg:
    1. A DV file with a blur and a color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit.
    2. A DV file with a blur and a color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than the previous example.
    3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be still higher quality because the final output format supports greater precision.
    4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data.
    5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full precision through the whole pipeline.
    6. A title with a gradient and a blur on an 8-bit monitor. This will display in 8-bit and may show banding.
    7. A title with a gradient and a blur on a 10-bit monitor (with hardware acceleration enabled). This will render the blur in 32-bit, then display at 10-bit. The gradient should be smooth.
    Bullet #2 is pretty much what my tests reveal.
    I think the Premiere Pro Help Docs get this wrong, however:
    High-bit-depth effects
    Premiere Pro includes some video effects and transitions that support high-bit-depth processing. When applied to high-bit-depth assets, such as v210-format video and 16-bit-per-channel (bpc) Photoshop files, these effects can be rendered with 32-bpc pixels. The result is better color resolution and smoother color gradients with these assets than would be possible with the earlier standard 8-bit-per-channel pixels. A 32-bpc badge appears to the right of the effect name in the Effects panel for each high-bit-depth effect.
    I added the emphasis; it should be obvious after my tests and the quote from Steve Hoeg that this is clearly not the case. These 32-bit effects can be added to 8-bit assets, and if the Maximum Bit Depth flag is checked on export, those 32-bit effects are processed as 32-bit, regardless of the destination format of the export. Rendering and export/compression are two different processes altogether, and that's why using the Maximum Bit Depth option has far more impact than "padding with zeroes." You've made this claim repeatedly, and I believe it to be false.
    Your witness...
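    To see the principle outside Premiere, here is a toy NumPy sketch of the same idea: chaining two effects with 8-bit intermediates destroys levels that a float pipeline (quantized only at the final write, roughly what the Maximum Bit Depth flag buys) preserves. The "effects" here are simple gain stages, not Premiere's blur or color corrector:

```python
import numpy as np

frame = np.linspace(0.0, 1.0, 5_000)     # a smooth source gradient

def to8(x):
    """Quantize to 8-bit levels, as writing an 8-bit frame would."""
    return np.round(x * 255) / 255

# Effect 1 darkens heavily, effect 2 brightens back up.
darkened_8  = to8(frame * 0.05)                      # 8-bit intermediate
final_8bit  = to8(np.clip(darkened_8 * 10, 0, 1))    # 8-bit pipeline
final_float = to8(np.clip(frame * 0.05 * 10, 0, 1))  # float pipeline

print(len(np.unique(final_8bit)))    # a handful of bands
print(len(np.unique(final_float)))   # a much smoother ramp
```

    The 8-bit intermediate collapses the gradient to a dozen-odd bands, while the float pipeline keeps over a hundred levels through the identical final 8-bit write; that is the banding difference visible in the exports discussed above.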

  • Apple Pro Res 422 HQ export - "render at max depth" and "24 bit or 48 bit depth"?

    I'm exporting my 90 minute feature for DCP using the Apple Pro Res 422 HQ codec. The film is 1920x1080 and that is what the export will be. We used a variety of cameras (Canon 7D, Sony XR160, GoPro, Blackmagic in HD) for the film.
    For the export options:
    Do I check "Render at Maximum Depth"?
    Which do I choose - 24 bit or 48 bit depth? - one has to be chosen even when "Render at Maximum Depth" is unchecked
    When I asked the DCP house, they said that "Render at Maximum Depth doesn't actually do anything when using this codec" and haven't answered the 24 vs. 48 bit question.
    This discussion:
    https://forums.adobe.com/message/4529886#4529886
    says that you "never need to enable the Max Render Quality (MRQ) unless you are exporting in a format/pixel ratio different from your original video."
    This discussion:
    https://forums.adobe.com/message/5619144#5619144
    adds insight into what 24 vs 48 bit depth means, but doesn't answer my specific question
    Thanks for your help.

    For your reading enjoyment -
    http://forums.adobe.com/message/4529886
    http://images.apple.com/finalcutpro/docs/Apple_ProRes_White_Paper_October_2012.pdf
    A question for you - what is your workflow where you think you might need this? Keep in mind that the majority of cameras only record 8-bit color, and also in a very highly compressed format, so you won't necessarily gain anything by going to 4444, as the source video is of limited quality already.
    Basically, if you don't have any high-bit-depth sources in your timeline to preserve the quality of, there may be little or no benefit to enabling "Max Depth".
    Thanks
    Jeff Pulera
    Safe Harbor Computers

  • Bit Depth and Bit Rate

    I have a pre-recorded mp3 VO. I placed it into a track bed in GB. The client wants a compressed audio file with a bit depth of 16-bit and a bit rate of 128 kbps max, but recommends 96 kbps. If I need to adjust the bit depth and bit rate, can I do it in GB? And if so, where? Thanks for any help.

    Please be aware that Bit Depth and Bit Rate are two completely different things!
    They belong to a group of buzzwords from Digital Audio, which is the field we are dealing with when using GarageBand or any other DAW. Some of these terms pop up even in iTunes.
    Digital Audio
    To better understand what they are and what they mean, here is a little background information.
    Whenever dealing with Digital Audio, you have to be aware of two steps, that convert an analog audio signal into a digital audio signal. These magic black boxes are called ADC (Analog Digital Converter) and “on the way back”, DAC (Digital Analog Converter).
    Step One: Sampling
    The analog audio (in the form of an electric signal, like that from an electric guitar) is represented by a waveform. The electric signal (voltage) changes up and down in a specific shape that represents the "sound" of the audio signal. While the audio signal is "playing", the converter measures the voltage every now and then. These are like "snapshots", or samples, taken at specific times. These time intervals are determined by a "Rate" that tells you how often per second something happens. The unit is Hertz [Hz], defined as "how often per second", or "1/s". A Sample Rate of 48kHz means that the converter takes 48,000 Samples per second.
    Step Two: Quantize (or digitize)
    All these Samples are still analog values, for example 1.6 Volt, -0.3 Volt, etc. But each analog value now has to be converted into a digital form of 1s and 0s. This is done similarly to quantizing a note in GarageBand. The value (i.e. the note) cannot have just any position; it has to be placed on a grid of specific values (i.e. 1/16 notes). The converter does a similar thing. It provides a grid of available numbers that the original measured Sample has to be rounded to (like when a note gets shifted in GarageBand by the quantize command). This grid, the amount of available numbers, is called the Bit Depth. Other terms like Resolution or Sample Size are also used. A Bit Depth of 16 bit allows for 65,536 possible values.
    So the two parameters that describe the quality of a Digital Audio Signal are the Sample Rate ("how often") and the Bit Depth ("how fine a resolution"). The very simplified rule of thumb is: the higher the Sample Rate, the higher the possible frequency, and the higher the Bit Depth, the greater the possible dynamic range.
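    The two steps can be sketched numerically. This is an illustrative NumPy model of the sampling and quantizing described above, not any particular converter:

```python
import numpy as np

def quantize(samples, bits):
    """Round each sample to the nearest of roughly 2**bits grid levels."""
    steps = 2 ** (bits - 1) - 1              # grid for values in [-1.0, 1.0]
    return np.round(samples * steps) / steps

# Step One: sample a 440 Hz tone at a 48kHz Sample Rate for one second.
t = np.linspace(0, 1, 48_000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)     # the "analog" waveform

# Step Two: quantize at different Bit Depths and compare rounding error.
for bits in (8, 16):
    error = np.abs(quantize(tone, bits) - tone).max()
    print(f"{bits}-bit worst-case rounding error: {error:.8f}")
```

    The 16-bit grid leaves a rounding error hundreds of times smaller than the 8-bit grid, which is the "finer resolution" (and greater possible dynamic) described above.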
    Uncompressed Digital Audio vs. Compressed Digital Audio
    So far I haven't mentioned the "Bit Rate" yet. There is a simple formula that describes the Bit Rate as the product of Sample Rate and Bit Depth: Sample Rate * Bit Depth = Bit Rate. However, the Bit Rate, and how it is used (and often misused and misunderstood), has to do with Compressed Digital Audio.
    Compressed Digital Audio
    First of all, this has nothing to do with a compressor plugin that you use in GarageBand. When talking about compressed digital audio, we talk about data compression. This is a special way of encoding data to make the size of the data set smaller. This is the fascinating field of "perceptual coding", which uses psychoacoustic models to achieve that data compression. Some smart scientists found out that you can throw away some data in a digital audio signal and you wouldn't even notice it; the audio would still sound the same (or almost the same). This is similar to a movie set: if you shoot a scene on a street, you only need the facades of the buildings and not necessarily the whole buildings.
    Although the Sample Rate is also a parameter of compressed digital audio, the Bit Depth is not. Instead, the Bit Rate is used here. The Bit Rate tells the encoder the maximum amount of bits it can produce per second. This determines how much data it has to throw away in order to stay inside that limit. An mp3 file (which is a compressed audio format) with a Bit Rate of 128kbit/s delivers decent audio quality. Raising the Bit Rate to 256kbit/s would increase the sound quality. AAC (which is technically an mp4 format) uses a better encoding algorithm. If this encoder is set to 128kbit/s, it produces better audio quality because it is smarter about which bits to throw away and which ones to keep.
    Conclusion
    Whenever you are dealing with uncompressed audio (aiff, wav), the two quality parameters are Sample Rate [kHz] and Bit Depth [bit] (aka Resolution, aka Bit Size)
    Whenever you are dealing with compressed audio (mp3, AAC), the two quality parameters are Sample Rate [kHz] and Bit Rate [kbit/s]
    If you look at the Export Dialog Window in GarageBand, you can see that the Quality popup menu is different for mp3/AAC and AIFF. Hopefully you will now understand why.
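    As a worked example of the formula above (with the channel count made explicit, which the single-channel formula leaves implicit):

```python
# Uncompressed bit rate = Sample Rate * Bit Depth * channel count.
def bit_rate(sample_rate_hz, bit_depth, channels=1):
    return sample_rate_hz * bit_depth * channels

cd = bit_rate(44_100, 16, channels=2)   # CD-quality stereo
print(cd)                               # 1411200 bit/s, i.e. ~1411 kbit/s
print(cd / 128_000)                     # ~11x the data of a 128 kbit/s mp3
```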
    Hope that helps
    Edgar Rothermich
    http://DingDingMusic.com/Manuals/
    'I may receive some form of compensation, financial or otherwise, from my recommendation or link.'

  • Creative Audigy 2 NX Bit Depth / Sample Rate Prob

    This is my first post to this forum.
    Down to business: I recently purchased a Creative Audigy 2 NX sound card. I am using it on my laptop (an HP Pavilion zd7000, which has plenty of power to support the card). I installed it according to the instructions in the manual, but I have been having some problems with it. I can't seem to set the bit depth and sample rate settings to their proper values.
    The maximum bit depth available from the drop-down menu in the "Device Control" -> "PCI/USB" tab is 16 bits and the maximum sample rate is 48kHz. I have tried repairing and reinstalling the drivers several times, but it still won't work. The card is connected to my laptop via USB 2.0.
    I looked around in the forums and found that at least one other person has had the same problem, but no solution was posted. If anyone knows of a way to resolve this issue I would appreciate the input!
    Here are my system specs:
    HP Pavilion zd 7000
    Intel Pentium 4 3.06 GHz
    GB Ram
    Windows XP Prof. SP 2
    Thnx.
    -cmsleiman

    Well, I am new to high-end sound cards, and I may be misinterpreting the terminology, but the sound card is supposed to be a 24-bit/96kHz card.
    I am under the impression that one should be able to set the output quality of the card to 24 bits of depth and a 96kHz sample rate, regardless of the speaker setting in use, to decode good-quality audio streams (say, an audio CD or the Dolby Digital audio of a DVD movie). I can currently achieve this only on 2.1 speaker systems (or when I set the speaker setting of the card to 2.1). Otherwise the maximum bit depth/sample rate I can set the card's output to is a sample rate of 48kHz and a bit depth of 16 bits.
    Am I mistaken in thinking that, if I am playing a good-quality audio stream, I should be able to raise the output quality of the card to what it is advertised and claimed to have?
    Thnx
