Save Bit Depth in MAX

I set the bit depth to 12 for my line-scan camera. The image frame looks fine when I view it in MAX, but it shows up as 16-bit when I view it in MATLAB. Is there any way to save it as 12-bit image data instead of 16-bit? The version of MAX is 4.5.
Thanks

You cannot address anything smaller than a byte in computer memory, so your 12-bit data will be stored, and saved, as 16-bit values.
Besides which, my opinion is that Express VIs, like Carthage, must be destroyed (or at least deleted).
(Sorry, no LabVIEW "brag list" so far.)
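As a rough illustration of that point (the values and names below are made up; this is not MAX's or the IMAQ driver's actual code), 12-bit pixels read back from a 16-bit buffer simply leave the top four bits clear:

// Hypothetical 12-bit camera samples stored in 16-bit words.
public class TwelveBitInSixteen {
    public static void main(String[] args) {
        int[] rawU16 = {0x0FFF, 0x0123, 0x0800};      // illustrative pixel values read back as uint16
        for (int px : rawU16) {
            int low12 = px & 0x0FFF;                  // the 12 payload bits
            boolean upperBitsClear = (px >> 12) == 0; // the 4 padding bits should be zero
            System.out.printf("raw=0x%04X  12-bit value=%d  padding clear=%b%n",
                              px, low12, upperBitsClear);
        }
    }
}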

Similar Messages

  • Apple ProRes 422 HQ export - "Render at Maximum Depth" and "24-bit or 48-bit depth"?

    I'm exporting my 90 minute feature for DCP using the Apple Pro Res 422 HQ codec. The film is 1920x1080 and that is what the export will be. We used a variety of cameras (Canon 7D, Sony XR160, GoPro, Blackmagic in HD) for the film.
    For the export options:
    Do I check "Render at Maximum Depth"?
    Which do I choose - 24 bit or 48 bit depth? - one has to be chosen even when "Render at Maximum Depth" is unchecked
    When I asked the DCP house, they said that "Render at Maximum Depth doesn't actually do anything when using this codec" and haven't answered the 24 vs. 48 bit question.
    This discussion:
    https://forums.adobe.com/message/4529886#4529886
    says that you "never need to enable the Max Render Quality (MRQ) unless you are exporting in a format/pixel ratio different from your original video."
    This discussion:
    https://forums.adobe.com/message/5619144#5619144
    adds insight into what 24 vs 48 bit depth means, but doesn't answer my specific question
    Thanks for your help.

    For your reading enjoyment -
    http://forums.adobe.com/message/4529886
    http://images.apple.com/finalcutpro/docs/Apple_ProRes_White_Paper_October_2012.pdf
    A question for you - what is your workflow where you think you might need this? Keep in mind that the majority of cameras only record 8-bit color, and also in a very highly compressed format, so you won't necessarily gain anything by going to 4444, as the source video is of limited quality already.
    Basically, if you don't have any high-bit-depth sources in your timeline to preserve the quality of, there may be little or no benefit to enabling "Max Depth".
    Thanks
    Jeff Pulera
    Safe Harbor Computers

  • How is actual bit depth measured

    I am analyzing some recordings I made in 24-bit format in Audacity.  Audacity can record true 24-bit integer files which Audition 3.0.1 recognizes as such.
    After checking a couple of the files in Audition 3.0.1, I found that the meaning of "Actual bit depth" in the amplitude statistics is not entirely clear.  It does seem to be based on the maximum peak in the file, but how the bit estimate is derived from that peak is not obvious.
    For example, in one of the files if I select any portion that includes the highest peak and get amplitude statistics, the actual bit depth reported is 24.  Example of a short selection that includes the peak:
    Mono
    Min Sample Value:    -22003
    Max Sample Value:    26329
    Peak Amplitude:    -1.9 dB
    Possibly Clipped:    0
    DC Offset:    -.003
    Minimum RMS Power:    -44.45 dB
    Maximum RMS Power:    -17.58 dB
    Average RMS Power:    -30.18 dB
    Total RMS Power:    -25.98 dB
    Actual Bit Depth:    24 Bits
    Using RMS Window of 50 ms
    However, as far as I can tell, any selection in the same file that does not include the highest peak (but may include nearby close peaks) results in an actual bit depth of 16:
    Mono
    Min Sample Value:    -20082
    Max Sample Value:    22172
    Peak Amplitude:    -3.39 dB
    Possibly Clipped:    0
    DC Offset:    -.001
    Minimum RMS Power:    -54.14 dB
    Maximum RMS Power:    -19.96 dB
    Average RMS Power:    -36.26 dB
    Total RMS Power:    -32.95 dB
    Actual Bit Depth:    16 Bits
    Using RMS Window of 50 ms
    So it is unclear what level of peak amplitude distinguishes between 24- and 16-bit actual depth.  If the bit-depth analysis is based on most-significant bits being zero, I would think that the trigger for identifying 16-bit actual depth in a 24-bit file would be to find that the 8 most-significant bits of the 24-bit samples are zero for all samples in the selection.  So for a 24-bit integer file to have an actual bit depth of 16 bits for a selection, the greatest peak would be less than -48 dBFS.  But in the example above, the distinction seems to be having a peak amplitude around -3.4 dB versus -1.9 dB.
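    For what it's worth, a common alternative heuristic (an assumption on my part, not Audition's documented algorithm) looks at the least-significant bits rather than the peak: if every sample in the 24-bit file has its N lowest bits equal to zero, the data only carries 24 - N bits of resolution. A rough sketch:

    // Illustrative sketch only; the sample values are borrowed from the stats above.
    public class EffectiveBitDepth {
        static int effectiveDepth(int[] samples24) {        // samples as signed 24-bit integers
            int orOfAll = 0;
            for (int s : samples24) orOfAll |= Math.abs(s);
            if (orOfAll == 0) return 0;                     // pure silence: no bits in use
            return 24 - Math.min(Integer.numberOfTrailingZeros(orOfAll), 24);
        }
        public static void main(String[] args) {
            int[] promoted16 = {-22003 << 8, 26329 << 8};   // 16-bit values padded into 24-bit words
            int[] true24     = {(-22003 << 8) | 0x5A, (26329 << 8) | 0x13};
            System.out.println(effectiveDepth(promoted16)); // prints 16
            System.out.println(effectiveDepth(true24));     // prints 24
        }
    }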

    >what actual difference does it make to anything?
    Hard to say what difference it makes to anything without knowing what "actual bit depth" actually measures.  It could be important, or could be useless.  In the past I have not paid much attention to it because it is poorly described.  It recently came to my attention because the files from a recent recording in 24-bit integer format were all reported as 16-bit "actual" bit depth.  This is in contrast to some previous recordings made in the same way which were identified as 24-bit "actual".  This implies there might be something different in the data formatting, the communication between the software and driver, between the driver and card, or something else.
    It is a bit surprising that no one got Synt. to explain it properly.
    >Oh, and the other thing about 24-bit int files is that they can lead you into a very false sense of security. If you decided, for instance, to reduce the amplitude of one by 48dB, then save it, and then decide to increase it again by that 48dB, you'd end up with a 24-bit file with just 16 bits of resolution - simply because it's an integer file. If you did the same thing with Audition's 32-bit floating point system, you'd lose no resolution at all.
    In my workflow that produces original recordings in a 24-bit integer file format, the format is an efficient way of storing 24-bit integer data from a 24-bit card.  Processing is another matter.  I use the Audition preference to convert files automatically to 32-bit when opening.
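    A quick numeric illustration of the quoted 48 dB point (the sample value is arbitrary and this is only a sketch of the arithmetic, not Audition's code): attenuating by a factor of 256 (about 48 dB, i.e. 8 bits) and then boosting again discards the low 8 bits of an integer sample, while the same round trip in floating point is lossless.

    public class IntVsFloatHeadroom {
        public static void main(String[] args) {
            int sample24 = 0x123456;                    // arbitrary 24-bit sample
            int restoredInt = (sample24 / 256) * 256;   // -48 dB stored as integer, then +48 dB
            float restoredFloat = (sample24 / 256.0f) * 256.0f; // same round trip in float
            System.out.printf("int:   %06X -> %06X%n", sample24, restoredInt);         // 123456 -> 123400
            System.out.printf("float: %06X -> %06X%n", sample24, (int) restoredFloat); // 123456 -> 123456
        }
    }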

  • Maximum Bit Depth /Maximum Render Quality  Questions

    Maximum Bit Depth
    If my project contains high-bit-depth assets generated by high-definition camcorders, I was told to select Maximum Bit Depth because Adobe Premiere Pro uses all the color information in these assets when processing effects or generating preview files. I'm capturing HDV using the Matrox RTX-2 Hardware in Matrox AVI format.
    When I finally export my project using Adobe Media Encoder CS4, will selecting Maximum Bit Depth provide better color resolution once I post to Blu-ray format?
    Maximum Render Quality
    I was told that using Maximum Render Quality maintains sharp detail when scaling from large formats to smaller formats, or from high-definition to standard-definition formats, and that it also maximizes the quality of motion in rendered clips and sequences, rendering moving assets more sharply. It's my understanding that at maximum quality, rendering takes more time and uses more RAM than at the default normal quality. I'm running Vista 64-bit with 8 GB of RAM, so I'm hoping to take advantage of this feature.
    Will this also help to improve better resolution when I finally export my project using Adobe Media Encoder CS4 and post to Blu-ray format?
    Does it look like I have the specs to handle Maximum Bit Depth and Maximum Render Quality when creating a new HDV project with the support of the Matrox RTX 2 Hardware capturing in Matrox AVI format? See Below Specs.
    System Specs
    Case: Coolmaster-830
    Op System: Vista Ultimate 64-bit
    Edit Suite: Adobe Creative Suite 4 Production Premium Line Upgrade
    Adobe Premiere Pro CS 4.0.1 update before installing RT.X2 Card and 4.0 tools
    Performed updates on all Adobe Production Premium Products as of 03/01/2009
    Matrox RTX2 4.0 Tools
    Main Display: Dell 3007 30"
    DVI Monitor: Dell 2408WFP 24"
    MB: ASUS P5E3 Deluxe/WiFi-AP LGA 775 Intel X38
    Display Card: SAPPHIRE Radeon HD 4870 512MB GDDR5 Toxic ver.
    PS: Corsair|CMPSU-1000HX 1000W
    CPU: INTEL Quad Core Q9650 3G
    MEM: 2Gx4|Corsair TW3X4G1333C9DHXR DDR3 (8 Gigs Total)
    1 Sys Drive: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB
    Cache SATA 3.0Gb/s
    2 RAID 0: Seagate Barracuda 7200.11 500GB 7200 RPM 32MB Cache SATA 3.0Gb/s using Intel's integrated RAID controller on the MB

    Just some details that I find useful about Maximum Bit Depth:
    You really need it even with 8-bit source files when using heavy grading, multiple curves, or vignettes. If you see banding after grading, go to Sequence > Sequence Settings from the top menu and check "Maximum Bit Depth" (ignore the performance pop-up), then check your preview again (it will change in a second) to see whether the banding is still present in 32-bit mode. If there is no banding, check the option when exporting; if the banding is still there, change your grading. Then uncheck the sequence setting to continue editing.
    Unfortunately, exporting with Maximum Bit Depth is extremely time-consuming, but it can really SAVE YOUR DAY when you are facing artifacts after heavy grading, by completely or almost completely eliminating banding and other unwanted color distortions.
    Use it only for either small previews or the really final output.
    Best Regards.

  • Can I change the bit depth on images in pdf files?

    I have a lot of PDF files that were scanned in 24-bit colour. I'd like to convert some of them to greyscale or black and white, and reduce the resolution to make them smaller.
    I can see how to reduce the resolution with Save As Other/Optimized PDF, but there are no options there to reduce bit depth. Is there any way to do this?

    Thanks, I think I've worked out how to use them. I found a fixup called "Convert color to B/W", but it seems to convert to greyscale, not black and white.
    I found this page describing how to convert to both greyscale and monochrome. It says the only way to do monochrome is to convert to tiff first:
    http://blogs.adobe.com/acrolaw/2009/10/converting-color-pdf-to-greyscale-pdf-an-update/
    If that's the case then Acrobat Pro isn't going to help me, but that was written in 2009. Does anyone know if true black and white conversion has been made available since then?

  • Multiple layers in 32 bit depth PSD

    Hi there,
    Am I missing something, or can't I have multiple layers in a PSD file when working in 32 bit mode? (In particular, I am using CS5).
    I am working on a feature to save image data to PSD and would, if possible, prefer to be able to store the individual layers in 32 bit depth.
    Thank you in advance

    Hi layeredtargets,
    You should be able to have and save layered files in documents that are 32 bits/channel in CS5.
    If the documents are over a certain size then you would have to save as a .psb file, and some things like certain adjustment layers and filters are not available in the 32 bits/channel mode.
    Are you trying to put the individual layers into separate files?

  • Bit Depth

    When I save my wav file, after editing it in Soundbooth, I have to decide what bit depth to use. 192 seems to be the default, and 320 is maximum (which I assume is 16th power). This will be for a video, and the sound was collected with an off camera Edirol--good, but not professional quality. Suggestions?
    kdoc

    Thanks Bee Jay
    Stephen

  • Bit Depth and Render Quality

    When you finally export media to some sort of media format via the encoder, do the project's preview Bit Depth and Render Quality settings affect the output file?
    I know there is a "Use Preview Files" setting in the export dialog, but I just want to be sure of what I am doing.

    Jeff's response is my perspective, as well, which is both backed up by my own tests and the official Adobe word.
    Exhibit A: My Tests
    That is DV footage with a title superimposed over it in a DV sequence, with a Gaussian blur effect (the Premiere accelerated one) applied to the title; all samples are from that sequence exported back to DV. This was to show the relative differences of processing between software and hardware MPE, Premiere export and AME queueing, and the effect of the Maximum Bit Depth and Maximum Render Quality options on export (not the sequence settings; those have no bearing on export).
    The "blooming" evident in the GPU exports is due to hardware MPE's linear color processing. I think it's ugly, but that's not the point here. Further down the line, you can see the effect of Maximum Bit Depth (and MRQ) on both software MPE and hardware MPE. I assume you can see the difference between the Maximum Bit Depth-enabled export and the one without. Bear in mind that this is 8-bit DV footage composited and "effected" and exported back to 8-bit DV. I don't understand what your "padding with zeroes" and larger file size argument is motivated by--my source files and destination files are the same size due to the DV codec--but it's plainly clear that Maximum Bit Depth has a significant impact on output quality. Similar results would likely be evident if I used any of the other 32-bit enabled effects; many of the color correction filters are 32-bit, and should exhibit less banding, even on something 8-bit like DV.
    Exhibit B: The Adobe Word
    This is extracted from Karl Soule's blog post, Understanding Color Processing: 8-bit, 10-bit, 32-bit, and more. This section comes from Adobe engineer Steve Hoeg:
    1. A DV file with a blur and a color corrector exported to DV without the max bit depth flag. We will import the 8-bit DV file, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write DV at 8-bit.
    2. A DV file with a blur and a color corrector exported to DV with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DV at 8-bit. The color corrector working on the 32-bit blurred frame will be higher quality than in the previous example.
    3. A DV file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 8-bit DV file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will be still higher quality because the final output format supports greater precision.
    4. A DPX file with a blur and a color corrector exported to DPX without the max bit depth flag. We will clamp the 10-bit DPX file to 8 bits, apply the blur to get an 8-bit frame, apply the color corrector to the 8-bit frame to get another 8-bit frame, then write 10-bit DPX from 8-bit data.
    5. A DPX file with a blur and a color corrector exported to DPX with the max bit depth flag. We will import the 10-bit DPX file, apply the blur to get a 32-bit frame, apply the color corrector to the 32-bit frame to get another 32-bit frame, then write DPX at 10-bit. This will retain full precision through the whole pipeline.
    6. A title with a gradient and a blur on an 8-bit monitor. This will display in 8-bit and may show banding.
    7. A title with a gradient and a blur on a 10-bit monitor (with hardware acceleration enabled). This will render the blur in 32-bit, then display at 10-bit. The gradient should be smooth.
    Bullet #2 is pretty much what my tests reveal.
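    As a toy illustration of why the intermediate precision matters (this is not Premiere's code; just two made-up "effects" whose gains cancel out overall), quantizing between the two steps posterizes neighbouring pixel values, while a float intermediate preserves them:

    public class IntermediatePrecision {
        public static void main(String[] args) {
            double gainA = 0.4, gainB = 2.5;               // two effects that should cancel out
            for (int v = 100; v <= 104; v++) {             // a few neighbouring 8-bit pixel values
                long via8bit = Math.round(Math.round(v * gainA) * gainB); // quantize between effects
                long viaFloat = Math.round((v * gainA) * gainB);          // keep precision between effects
                System.out.printf("in=%3d  8-bit pipeline=%3d  float pipeline=%3d%n", v, via8bit, viaFloat);
            }
        }
    }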
    I think the Premiere Pro Help Docs get this wrong, however:
    High-bit-depth effects
    Premiere Pro includes some video effects and transitions that support high-bit-depth processing. When applied to high-bit-depth assets, such as v210-format video and 16-bit-per-channel (bpc) Photoshop files, these effects can be rendered with 32-bpc pixels. The result is better color resolution and smoother color gradients with these assets than would be possible with the earlier standard 8-bit-per-channel pixels. A 32-bpc badge appears to the right of the effect name in the Effects panel for each high-bit-depth effect.
    I added the emphasis; it should be obvious after my tests and the quote from Steve Hoeg that this is clearly not the case. These 32-bit effects can be added to 8-bit assets, and if the Maximum Bit Depth flag is checked on export, those 32-bit effects are processed as 32-bit, regardless of the destination format of the export. Rendering and export/compression are two different processes altogether, and that's why using the Maximum Bit Depth option has far more impact than "padding with zeroes." You've made this claim repeatedly, and I believe it to be false.
    Your witness...

  • Maximum bit depth-maximum render quality when dynamic linking

    Hi
    A bit confused by the use of Maximum bit depth and Maximum render quality as used both in Sequence Settings and also as options when rendering in AME.
    1 Do you need to explicitly enable these switches in the sequence for best quality or, do you simply need to switch them on in AME when you render in Media Encoder?
    2 When dynamic linking to After Effects, when should you use an 8 bit vs 16 or 32 bit working space, and, how does this bit depth interact with the maximum bit depth, maximum render quality in PPro?

    Hi jbach2,
    I understand your confusion.  I'm like that most of the time I'm working. *chuckle*  The two settings you mentioned are two completely different parameters affecting (or is it effecting?) your video. You do not need to enable them within the sequence itself unless you want to preview video on your program monitor at the highest quality.  I personally don't recommend that, as it's a tremendous resource hog (the program even warns you when you try to click them) and unnecessary for improving final output.  Again, do not enable these options in your sequence settings if you only want a high-quality export. Doing so will greatly reduce your editing performance unless you have a high-end system, and even then I don't think it's worth it unless you're editing on a huge screen with a director who wants to see everything at maximum quality during the edit process.
    Keeping it simple...
    Resizing your final output video? Use Maximum Render Quality.
    Starting from or working with high-bit-depth sources? Use Maximum Bit Depth.
    When/where do I enable these? In the AME only. ^_^
    Why?:
    Enabling the Max bit and Max render only needs to be done when you are exporting.  They both serve different functions. 
    Max Render aids in the scaling/conversion process only.  My understanding is that you never need to enable the Max Render Quality (MRQ) unless you are exporting in a format/pixel ratio different from your original video.  For example, when rendering a 1080p timeline out to a 480p file format, you'll want to use MRQ to ensure the best scaling with the least amount of artifacts and aliasing.  If you're exporting at the same size you're working with, DON'T enable MRQ.  It will just cost you time and CPU. Its only function is to do a high quality resizing of your work.
    Maximum bit depth increases the color depth that your video is working with and rendering to.  If you're working with video that has low color depth, then I don't believe it will matter.  However, if you're working with 32 bit color on your timeline in PPro and/or After Effects, using lots of graphics, high contrast values, or color gradients, you may want to enable this option. It ultimately depends on the color depth of your source material.
    The same applies to After Effects.
    Create something in AE like a nice color gradient.  Now switch the same project between 8-, 16-, and 32-bit depth, and you will see a noticeable difference in how the bit depth affects your colors and the smoothness of the gradient.
    Bit depth affects how different plugins/effects change your overall image.  Higher depth means more colors to work with (and, incidentally, more CPU you need).
    Just remember that "DEPTH" determines how many colors you can "fill your bucket with" and "QUALITY" is just that, the quality of your "resize".
    http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html
    Check out this Adobe blog for more info on color depth ^_^  Hope that helps!
    ---- a lil excerpt from the blog I linked to above ----
    Now, 8-bit, 10-bit, and 12-bit color are the industry standards for recording color in a device. The vast majority of cameras use 8-bits for color. If your camera doesn’t mention the color bit depth, it’s using 8-bits per channel. Higher-end cameras use 10-bit, and they make a big deal about using “10-bit precision” in their literature. Only a select few cameras use 12-bits, like the digital cinema camera, the RED ONE.
    Software like After Effects and Premiere Pro processes color images using color precision of 8-bits, 16-bits, and a special color bit depth called 32-bit floating point. You’ve probably seen these color modes in After Effects, and you’ve seen the new “32″ icons on some of the effects in Premiere Pro CS5.
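    If you want to see the gradient experiment numerically rather than in AE, here is a small sketch (a made-up example, not Adobe's code) that counts how many distinct steps a smooth ramp ends up with at each working depth; the 8-bit version is what shows up on screen as banding:

    public class GradientBanding {
        static int distinctLevels(int width, int levels) {
            java.util.Set<Integer> seen = new java.util.HashSet<>();
            for (int x = 0; x < width; x++) {
                double ramp = (double) x / (width - 1);            // ideal smooth gradient, 0..1
                seen.add((int) Math.round(ramp * (levels - 1)));   // quantize to the working bit depth
            }
            return seen.size();
        }
        public static void main(String[] args) {
            int width = 2000;                                      // a 2000-pixel-wide gradient
            System.out.println("8-bit steps:  " + distinctLevels(width, 256));    // 256 distinct values
            System.out.println("16-bit steps: " + distinctLevels(width, 65536));  // one per pixel here
        }
    }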

  • Changing bit depth

    Is there any way of changing bit depth in A3? I scan 48 and 16 bit TIFs and would like to drop certain pics to 24/8 (for storage reasons). I'd obviously prefer not to have to export to PS if possible. Thanks.

    I don't know the technical trade-offs of _how_ the reduction in bit depth is done, but there must be a simple, probably automate-able and even batch-able way to effect the conversion.  As an example, you could set Aperture "External Editor File Format" to "TIFF (8-bit)" and simply run "Edit with {external editor}", save the file in the external editor, and then back in Aperture delete your 16-bit originals.
    The point -- which you've already understood -- is that you are creating new, different files and adding them to the list of files already listed in your Aperture Library.  For your workflow, this makes sense and seems a good use of Aperture.  You may find that the cost savings represented by the reduction in storage space achieved by lowering the bit-depth is not worth the administrative cost of creating these replacement files and deleting the others.

  • 16 bit depth photo restoration, older version of Photoshop.

    I use an older version of Photoshop.  It is able to import and read a 16 bit depth file.  Though it is limited in what it can do with this bit depth, it can do the levels and curves adjustments on an image. I want to have the best quality scan to start with for photo restoration in my older Photoshop. I won't be able to directly import the file with my older Photoshop from the scanner.  If I scan a photo as a 16 bit 600 ppi image, I'm afraid color information will be lost when I open it in the older Photoshop.  Is there any way I can open and save such a file without losing all that good color information? I know I would need to save it in a format that supports 16 bit depth like png versus jpeg.

    I'm not exactly sure how Image Capture works on Lion, but I believe I read that since OS X 10.5 it should scan in 16 bits/channel.
    Are you able to use the software that came with the scanner instead of Image Capture? Alternatively, you may need to update your scanner software, or check the preference in the Image Capture scanner preferences to use TWAIN software when possible.
    Or use scanner software like VueScan, which is much better than most software that ships with scanners:
    http://www.hamrick.com/
    Anyway, if the scan is saved by the scanner as a 16 bits/channel TIFF to a place (folder) on your hard drive, then PS7 should open it as such.

  • How do I reduce the bit depth of images to 1-bit within Acrobat 9?

    I am hoping a simple solution exists within Acrobat 9 for reducing the bit-depth of images to 1-bit.
    I know of two methods that both seem more like workarounds. One, edit the image using Photoshop. Two, without Photoshop, export the page as a 1-bit PNG and recreate the page in Acrobat. It seems like one of the preflight fixups should be able to get it done with the right settings. But, it's a labyrinth of unfamiliarity.

    There's no predefined 1-bit conversion in Preflight because it doesn't make sense. Preflight will not dither bitmaps, so most images will become black squares. Extreme color conversion is only intended for text/vector objects.
    If you want to try it anyway, you can create a custom Fixup if you have a  1-bit ICC profile.
    Preflight > Single Fixups
    Options menu > Create new Preflight Fixup
    Name it something like "Convert all to 1-bit BW"
    Search for "Convert colors" in the type of fixup box and add it
    Destination tab > Destination > your ICC profile for 1-bit black
    Uncheck "use destination from Output Intent"
    Keep everything else as default, though I'd suggest using "Embed as output intent for PDF/X" if you're making PDF/X documents
    Conversion Settings tab > All Objects + Any Color (except spot) + Convert to destination + Use rendering intent
    Press the + key to duplicate this step, and change the second copy to "Spot Color(s)"
    Press + again and change the third copy to "Registration color"
    Save the fixup and run it.
    In case you don't have a 1-bit  ICC profile installed, I've attached one.

  • Lion reduced the screen bit depth - any solution?

    When I was on Snow Leopard I was painting in Photoshop in 16-bit mode on my 21" Wacom Cintiq.
    It was a fantastic experience to finally have true control over subtle changes in hues and shades across the spectrum.
    Now that I've updated to Lion, the screen doesn't display true 16-bit graphics anymore; instead there is pretty noticeable banding, a sort of real-time downsampling in bit depth going on, and even flickering on some grey hues.
    I'm very sad about this, and it's affecting my work.
    I have a hunch Lion is harder on old graphics cards than Snow Leopard was.
    Is my machine too old for displaying true 16-bit/ 32-bit graphics with Lion?
    In the system info under the graphics card it says: 32-bits color (ARGB8888), but it clearly isn't the case in terms of what is sent to the display.
    MacBook Pro, late 2009, 8GB ram, 2,66 GHz Intel Core 2 Duo, Unibody
    NVIDIA GeForce 9600M GT 256 MB
    What I've tried so far (without solving the problem)
    Closing the MacBook screen to run only one display (the Cintiq), to save graphics memory, I thought - no change.
    Tried displaying the same 16-bit image in different apps: Photoshop, Preview, Quick Look, Pages - no change.
    Forced monitor into "billions of colors" using SwitchResX - no real change
    Anyone who knows more about this?
    I will now try:
    - finding Cintiq driver updates

    I've tried updating the Cintiq driver, but as far as bit depth is concerned, nothing changed.
    My suspicion is that the graphics driver has to be updated/ optimized. And, judging from the overall performance dip my computer had after upgrading to Lion, it might be a big task, involving more than just the graphics driver. Perhaps the whole rendering engine?

  • Saving in 96KHz 24 Bit-Depth

    I am a newbie and I have a problem. I just imported a Premiere Pro project into Adobe Audition. The Premiere Pro project was linked to 96 kHz sample rate, 24-bit-depth audio, and I have the sample rate preference in Audition set to 96 kHz, yet when I selected "Edit in Adobe Audition" it imported the audio files into Audition as 48 kHz files, and I can only save the session at a 48 kHz sample rate and 16-bit depth. Why is this? Also, Premiere Pro linked to the media on my hard drive, while Audition seems to have imported the media into the project. Is this normal?
    Thanks

    codywinton92 wrote:
    It depends on what you mean by saved. When I imported the .wav audio files into Premiere Pro, it didn't copy them into the project, but linked to the files on my hard drive. I doubt that it converted them down, because when I checked the metadata of the files in Premiere Pro's media bin, they registered as 96 kHz, 24-bit audio files.
    Well, assuming that it works in a similar way to Audition (and we have to until somebody who actually knows what happens in PP tells us), it could well be that the files are stored as 96k/24 on your drive (hence the metadata report), but when imported into a PP project they get opened at the project rate. And if that's 48k/16, then that's what will get linked to Audition - the local copy files as they actually are in the session, rather than what you have stored.
    Audition does the same thing. If you have a file stored at one rate, but a multitrack session running at a different one, then it will only let you put the contents of that file into your different-rate session after it's created a local copy at the correct rate.
    What you (or we...) need is somebody who's familiar with doing the round trip between Audition and PP, and actually knows what happens. It's all very well for me to guess, but I must admit that I'm only doing this because nobody else has attempted to answer you.
    One thing is pretty certain, though. And that is that what the help desk told you is absolutely irrelevant. This is about file handling, not drivers. Sometimes I really wonder about some of their crib-sheets!

  • Converting JPEG to BMP 1 bit depth

    Hi
    I need to convert a JPEG to a 1-bit-depth (monochrome) BMP.
    I've used this code, but the result is always a 24-bit-depth BMP...
    What can I do?
    JPEGDecodeParam param = JPEGCodec.getDefaultJPEGEncodeParam(1, JPEGDecodeParam.COLOR_ID_GRAY);
    JPEGImageDecoder jd = JPEGCodec.createJPEGDecoder(myByteArrayInputStream, param );
    BufferedImage bufferedImage = jd.decodeAsBufferedImage();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(bufferedImage, "bmp", baos);
    Thanks

    An application that you already have that can do the job is Preview. Open the file in Preview and do a Save As, selecting PICT as the end format.
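    For completeness, since the snippet in the question never changes the image's color model, here is a minimal sketch of one way to get a true 1-bit BMP using only the standard javax.imageio and java.awt APIs (the input path is hypothetical; the standard BMP writer generally preserves the 1-bit color model of the image it is given):

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class JpegToMonoBmp {
        public static void main(String[] args) throws Exception {
            BufferedImage src = ImageIO.read(new File("input.jpg"));   // decode the JPEG
            BufferedImage mono = new BufferedImage(
                    src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_BINARY);
            Graphics2D g = mono.createGraphics();
            g.drawImage(src, 0, 0, null);      // each pixel is reduced to black or white
            g.dispose();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ImageIO.write(mono, "bmp", baos);  // writes a 1-bit BMP into the byte stream
        }
    }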
