16-bit TIFF to 12-bit conversion

I need an algorithm to convert 16-bit TIFF files with pixel values ranging from -32768 to 32767 to 12-bit values ranging from 0 to 4095. The algorithm provided by NI (http://zone.ni.com/devzone/explprog.nsf/6c163603265406328625682a006ed37d/10560a596bddd342862566c70062a743?OpenDocument) for a 16->8 bit conversion does not seem to work when the brightness values are negative [-32768..0). Is any help out there?

Hi,
1) Convert the number to I32.
2) Add an I32 constant with value 32768 to it.
The result is a positive 16-bit number, ranging from 0 to 65535.
3) Use 'Rotate Right With Carry' (carry = FALSE) on the result.
We now have a 15-bit integer.
4) Repeat 3): 14-bit result.
5) Repeat 3): 13-bit result.
6) Repeat 3): 12-bit result, ranging from 0 to 4095!
Of course you can make a For Loop repeating step 3) four times, but don't
forget to put the result in a shift register!
This is, by the way, a very crude way of color reduction...
Regards,
Wiebe.
"mino" wrote in message
news:[email protected]..
> I need a algorithm to convert 16-bit tiff files with pixel values
> ranging from -32768 to 32768 to 12-bit valu
es ranging from 0 to 4095.
> The algorithm provided by NI
>
(http://zone.ni.com/devzone/explprog.nsf/6c163603265406328625682a006ed37d/10
560a596bddd342862566c70062a743?OpenDocument)
> doing a 16->8 bit conversion seems not to work if the brightness
> values are negative [-32768..0[. Is any help out there ???

Similar Messages

  • Weird Differences Between 32 and 64 bit Conversions

    Windows Users:
    Ever convert exactly the same raw file using exactly the same parameters into 16 bits/channel images with Photoshop CS6 32 bit and 64 bit, overlay them, set the top layer to "Differences", then add a couple of extreme curves adjustment layers over the top?
    Amazingly, there's a difference between the two conversions.
    Even more amazingly, while there's a slight correlation to the image content, there are patterns in the differences that don't really match much of anything in the image, yet they differ from raw file to raw file.
    Try it some time.  If you can't reproduce it, I'll be happy to share my raw file.
    Here are two small images, the converted raw file, and the enhanced differences between a 32 and 64 bit conversion of that same raw file.
    Enhanced/blurred differences between the 32 and 64 bit conversions.
    Artificial intelligence trying to express itself? Cosmic rays? Kirlian messages from Beyond?
    I happened to be using the 32 bit Photoshop today to access some plug-ins I only can get in 32 bits, converted the same image in both, and just wondered whether there might be differences.  You never know what you'll find until you look!
    Oh, and sorry Mac folks - I know you only have a 64 bit converter to play with.
    I'm reminded of the old adage:  A man with one watch knows what time it is.  A man with two watches is never sure.
    -Noel

    Thanks for your response, Eric.
    I understand what you're saying, but a several bit difference in a 15 bit result is actually quite a lot to be attributed to math inaccuracy due to floating point operations.  Even single precision 32 bit floating point operations should give you 24 bits of precision.  A few bits at the end shouldn't add up to a visible difference, unless decisions are being made in the code based on results (e.g., if noise > threshold, do something).
    Perhaps you don't use the Strict Floating Point Model, because if you did that should give you consistent results.
    I've been through the decision of which floating point model to use in my own software, initially thinking that the Fast model would be better for speed (obviously), but as it turns out the repeatability of results with the Strict model is well worth the tiny bit of extra time it takes for the results to be predictable.  You might want to reconsider using Strict, as I have, especially if these are not just round-off errors but decisions being made in the code based on floating point results.  Looking around on the web, I see more folks than just I have made that specific choice.
    Food for thought.
    -Noel
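
    Noel's point about the Strict versus Fast floating-point models is easy to demonstrate: floating-point addition is not associative, so an optimizer that is free to reorder operations (as fast models are) can change low-order bits from build to build. A minimal C illustration of the underlying effect (not Adobe's code):

    #include <stdio.h>

    int main(void)
    {
        float a = 1e8f, b = -1e8f, c = 1.0f;

        /* Mathematically identical sums, numerically different results:
           a fast floating-point model may evaluate either grouping. */
        float left  = (a + b) + c;  /* a and b cancel first: 1.0      */
        float right = a + (b + c);  /* c is absorbed into b + c: 0.0  */

        printf("%g vs %g\n", left, right);
        return 0;
    }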

  • 16 to 8 bit conversion in Elements 6 Mac

    What is happening when a file is changed from 16 to 8 bits in Elements? Are the bottom 8 bits simply truncated out? Or is a dithering algorithm applied?
    Thanks.

    Copied from http://www.earthboundlight.com/phototips/more-8bit-versus-16bit.html:
    Assume you start with a 16-bit image filled with lots of good information, with up to 65,536 values per channel. Once you convert to 8-bit, you're down to only 256 possible values per channel, and you will lose all the extra shades that 16-bit had allowed you. Every shade will get collapsed down to the nearest 8-bit value and fractional data will be truncated. Even converting back to 16-bit later won't give you those shades back again. Your file will still only have 256 discrete values in it.
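
    The collapse described above is essentially an integer division by 256; a rough C sketch of what happens to one channel value (the exact rounding Elements applies isn't documented here, so plain truncation is an assumption):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t v16  = 51234;              /* one of 65536 possible shades      */
        uint8_t  v8   = v16 >> 8;           /* one of only 256 after 16 -> 8     */
        uint16_t back = (uint16_t)v8 << 8;  /* converting back: 51200, not 51234 */

        printf("%u -> %u -> %u\n", v16, v8, back);
        return 0;
    }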

  • 32bit--- 64 bit conversion of 10g database

    Hi,
    I was asked to migrate the database from 32-bit to 64-bit Oracle.
    I am using a 10.2.0.4 database running on a 32-bit Red Hat Linux server.
    Now I have been asked to change Oracle from 32-bit to 64-bit.
    I have been provided a new 64-bit server with 64-bit RHEL installed.
    Please help me with how I can perform the migration.
    If any of you can provide steps/Metalink IDs it would be very helpful for me.
    Thanks,
    Aram

    Hi;
    Please see:
    Migrate-Move db
    How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database [ID 286775.1]
    Different Upgrade Methods For Upgrading Your Database [ID 419550.1]
    Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2 [ID 454616.1]
    Debugging Platform Migration Issues in Oracle Applications 11i [ID 567703.1]
    9i Export/Import Process for Oracle Applications Release 11i [ID 230627.1]
    10g Release 2 Export/Import Process for Oracle Applications Release 11i [ID 362205.1]
    Also see below master note and link:
    Database Documentation Resources for EBS Release 11i and R12 [ID 1072409.1]
    10G - 11G with New Hardware
    Regards,
    Helios

  • Hex to Bit Conversion and Back

    How can we convert a hex value to bits and back, and what is the return datatype?
    Are there functions to do bitwise operations apart from BITAND? (I would like to perform OR, XOR, and NAND operations.)

    Hi Naveen C Ram,
    For how to convert hex to rgb in Windows Phone, please try to refer to the reply posted by @Loukt in this thread:
    https://social.msdn.microsoft.com/Forums/en-US/f8f695b4-9f65-4f81-aada-20daf7a16599/convert-hex-string-to-color-without-systemdrawingdll?forum=wpdevelop
    For how to convert rgb to hex in Windows Phone, please try to refer to the following code:
    var color = new Color() {R = 255, G = 100, B = 100};
    var solidColorBrush = new SolidColorBrush(color);
    var hex = solidColorBrush.Color.ToString();
    Best Regards,
    Amy Peng
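
    Returning to the original question: when only an AND primitive is available (as with Oracle's BITAND), OR, XOR, and NAND follow from it with ordinary arithmetic, since a + b = (a OR b) + (a AND b). A hedged C sketch of the identities, which translate directly into SQL expressions:

    #include <stdint.h>
    #include <stdio.h>

    /* Derive OR, XOR, and NAND from AND alone. NAND needs an explicit word
       width, supplied as a mask of all ones covering both operands. */
    static uint32_t or_from_and (uint32_t a, uint32_t b) { return a + b - (a & b); }
    static uint32_t xor_from_and(uint32_t a, uint32_t b) { return a + b - 2 * (a & b); }
    static uint32_t nand_from_and(uint32_t a, uint32_t b, uint32_t mask)
    {
        return mask - (a & b);  /* complement of AND within the given width */
    }

    int main(void)
    {
        uint32_t a = 0xF0F0, b = 0xFF00;
        printf("OR=%X XOR=%X NAND=%X\n",
               or_from_and(a, b),            /* FFF0 */
               xor_from_and(a, b),           /* 0FF0 */
               nand_from_and(a, b, 0xFFFF)); /* 0FFF */
        return 0;
    }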

  • Printing AutoCAD 2000 with 32 bit conversion

    I have AutoCAD 2000 that has been converted to 32-bit.
    I just bought an HP Officejet 6600 printer, and so far I haven't been able to print my AutoCAD drawings.
    Can I provide any more information that will fix this problem?
    Gene Phillips

    Hi @Engene,
    I see that you are having issues printing from your AutoCAD 2000 software. I can help you with this.
    I believe the AutoCAD software uses the PCL 6 language, while this printer only uses the PCL 3 enhanced language. Most likely they aren't compatible, since the languages are different.
    You would need a printer that supports the PCL 6 language.
    Have a nice day!
    Thank You.
    Gemini02
    I work on behalf of HP

  • Can I upgrade from Vista 32 bit to Win 7 64 bit on hp pavillion d4790y?

    I have 2 GB of RAM, but really need to speed it up to 8 GB. I'd also like to get Windows 7 for the desktop. Before I get any of this, I need to know: since my current OS is 32-bit but 64-bit capable, if I install Windows 7 64-bit, will it upgrade from 32 to 64? And will the 8 GB of RAM actually work if I purchase that as well? Will I run into any problems with the hardware inside (processor or motherboard (intel core 2 cpu [email protected] ghz))? I don't understand 32-bit to 64-bit conversions... Thanks

    Hi:
    Yes, your hardware is 64 bit capable.
    You can install 4 x 2GB of PC2-6400 memory for best memory performance and capacity (8 GB).
    You cannot do an in-place upgrade of any 32 bit OS to a 64 bit OS.
    You will need to back up all your files, gather any program installation disks/files you want to reinstall on the 64 bit OS, and do what is known as a custom or clean install of Windows 7.
    If you live in the USA or Canada, this is the memory I recommend for your PC once you bring it to a 64 bit OS:
    4 of these: (the CAS latency is faster than the second link below)
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820220279
    or
    2 of these kits:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820148160
    Paul

  • Does duplicating layers from an 8-bit PSD reduce the color depth of a new 16-bit edit?

    I have been using several versions of Photoshop over the years starting with CS3, then CS6, and finally CC. Through 2012, all my RAW files were converted using 16-bit color depth in ACR. However, in examining my files recently, I noticed all were in 8-bit color-depth, which likely happened once I upgraded to CS6 and ACR left 8-bit as the default. As such, I now have hundreds of images that I wanted to edit in 16-bits that have been converted to 8-bit starting with my ACR export. The files are all RAWs from cameras like the Canon 5D, 5D Mark III, Olympus OMD EM5, and Sony RX100.
    I have 8-bit PSDs of my edits of these files, which contain multiple layers. I use features from the Nik and Topaz collections for noise reduction, contrast enhancement, black and white conversion, and sharpening. I also use layered Photoshop features like Selective Color. I want to know if I can simply duplicate the layers from the 8-bit PSD files onto a new 16-bit RAW-converted background layer rather than re-edit each and every file. There would be no change in the crop or other reasons why the layers would not match up, but I am concerned that duplicating those layers might throw away the benefit of moving to 16-bit, since they were created at an 8-bit color depth.
    As a test, I looked at a few files where I had pushed the shadows and colors to an extent and duplicated the layers from an 8-bit conversion to a 16-bit conversion and could see no difference. I am not sure if this is because Photoshop has become very good at reducing color-depth without a perceptible difference or if I am throwing away information in moving the layers.
    I work off of a fairly modern computer and a top-end Eizo CG276 self-calibrating monitor and good video card. I am extremely sensitive to color and shade changes as well.
    Thanks for any help, I am a bit stressed at the thought of re-editing hundreds of files from step 1, but it is important to me that I not throw away color information and it is better I start re-editing now rather than years from now.
    Ketan

    Raw files contain an 8-bit JPEG preview image in some color space, but the raw sensor data itself is not a color image and has no color space; the bit depth of its pixels depends on the manufacturer. Most sensors produce one value per pixel, and 10 to 14 bits seems to be common. Sensors only measure light intensity, not color: in front of each pixel sits a color filter that lets red, green, or blue light through to be measured.
    A raw converter can convert your sensor's raw data into a color RGB image in any color working space, at 8- or 16-bit color depth. No Adobe update went around scanning your computer for 16-bit color-depth images and converting them all to 8-bit. Your ACR workflow settings may have been changed from 16-bit to 8-bit by you, or set to 8-bit by some update or script, so your current ACR workflow setting may be converting your raw data into 8-bit RGB images. As long as you have your raw files, you can reprocess them to produce 16-bit RGB images by making the proper ACR workflow settings. You can also change an 8-bit image to 16-bit image mode; however, this will not recover any color quality lost or posterization that has set in. It will give you a bit more latitude using Photoshop filters, which may help you avoid banding and posterization.
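
    Ketan's test matches what the arithmetic predicts: promoting 8-bit data to 16-bit mode rescales the values but cannot re-create shades that have already been collapsed. A small C sketch (Photoshop's internal scaling is not documented here; mapping 0..255 onto 0..65535 by multiplying by 257 is an assumption):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* An 8-bit channel has 256 levels. Promotion to 16 bits spreads
           them across 0..65535 but still yields only 256 distinct values. */
        static uint8_t seen[65536];
        int distinct = 0;

        for (int v8 = 0; v8 < 256; v8++) {
            uint16_t v16 = (uint16_t)(v8 * 257); /* 0 -> 0, 255 -> 65535 */
            if (!seen[v16]) { seen[v16] = 1; distinct++; }
        }
        printf("distinct 16-bit levels after promotion: %d of 65536\n", distinct);
        return 0;
    }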

  • 32 bit integer size on 64 bit processor and OS

    Although not strictly a dbx question, I think the audience here is the correct one to bounce this off of:
    I'm curious: why do 64-bit processes compiled with the -xarch=v9 switch have a 32-bit integer size rather than a 64-bit integer size?
    Although not cast in stone, and implementation dependent, an int was originally intended to be the "natural" word size of the processor, to improve efficiency (avoid masking, etc.).
    I know you can 'force' more 64-bit use (see some of Sun's documentation on this below).
    ===============
    The 64-bit Solaris operating environment is a complete 32-bit and 64-bit application and development environment supported by a 64-bit operating system. The 64-bit Solaris operating environment overcomes the limitations of the 32-bit system by supporting a 64-bit virtual address space as well as removing other existing 32-bit system limitations.
    For C, C++, and Fortran software developers, this means the following when compiling with -xarch=v9,v9a, or v9b in a Solaris 7 or Solaris 8 environment:
    Full 64-bit integer arithmetic for 64-bit applications. Though 64-bit arithmetic has been available in all Solaris 2 releases, the 64-bit implementation now uses full 64-bit machine registers for integer operations and parameter passing.
    A 64-bit virtual address space allows programs to access very large blocks of memory.
    For C and C++, the data model is "LP64" for 64-bit applications: long and pointer data types are 64-bits and the programmer needs to be aware that this change may be the cause of many 32-bit to 64-bit conversion issues. The details are in the Solaris 64-bit Developer's Guide, available on AnswerBook2. Also, the lint -errchk=longptr64 option can be used to check a C program's portability to an LP64 environment. Lint will check for assignments of pointer expressions and long integer expressions to plain (32-bit) integers, even for explicit casts.
    The Fortran programmer needs to be aware that POINTER variables in a 64-bit environment are INTEGER*8. Also, certain library routines and intrinsics will require INTEGER*8 arguments and/or return INTEGER*8 values when programs are compiled with -xarch=v9,v9a, or v9b that would otherwise require INTEGER*4.
    Be aware however that even though a program is compiled to run in a 64-bit environment, default data sizes for INTEGER, REAL, COMPLEX, and DOUBLE PRECISION do not change. That is, even though a program is compiled with -xarch=v9, default INTEGER and REAL are still INTEGER*4 and REAL*4, and so on. To use the full features of the 64-bit environment, some explicit typing of variables as INTEGER*8 and REAL*8 may be required. (See also the -xtypemap option.) Also, some 64-bit specific library routines (such as qsort(3F) and malloc64(3F)) may have to be used. For details, see the FORTRAN 77 or Fortran 95 READMEs (also viewable with the f77 or f95 compiler option: -xhelp=readme).
    A: No program is available that specifically invokes 64-bit capabilities. In order to take advantage of the 64-bit capabilities of your system running the 64-bit version of the operating environment, you need to rebuild your applications using the -xarch=v9 option of the compiler or assembler.

    I think that this was basically to keep down the headaches in porting code (and having code that will compile to both 32bit and 64bit object files). int is probably the most common type, so by keeping it at 32bits, the LP64 model has less effect on code that was originally written for 32bit platforms.
    If you want to have portable code (in terms of the sizes of integral types), then you should consider using int32_t, int64_t etc from inttypes.h. Note that this header is post-ANSI C 90, so might not be portable to old C/C++ compilers.
    A+
    Paul
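
    Paul's point is easy to check directly: under LP64, int stays 32 bits while long and pointers widen to 64, and inttypes.h supplies the exact-width types. A quick C probe (sizes shown are what a typical LP64 build reports):

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        /* On an LP64 system (e.g. -xarch=v9 on Solaris):
           int stays 4 bytes; long and pointers grow to 8. */
        printf("int: %zu  long: %zu  void*: %zu\n",
               sizeof(int), sizeof(long), sizeof(void *));
        printf("int32_t: %zu  int64_t: %zu\n",
               sizeof(int32_t), sizeof(int64_t));
        return 0;
    }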

  • I have 64-bit Vista on an HP laptop, and am having trouble with P4

    I have 64-bit Vista on an HP laptop, and am having trouble with P4. When I go to the "Compatibility" tab in for the P4 .exe file, 32-bit Vista does not show up as a compatibility option. The latest version shown is XP with SP2. Am I missing something? Is there a patch or download that updates P4?(The problem I'm having is stuttering and stopping in the imported avi2 files I have downloaded from my Panasonic digital camcorder and converted from avi to avi2 using DVDate. Also, I get stuttering with digital photos - jpg files - that I want to insert in the movie with added narration.)

    A.T. Romano
    It gets a little complicated. This is a simplification, but I hope this helps to visualize what's happening.
    The 32-bit and 64-bit environments are separated from each other. There are separate program files directories, separate shared DLL library directories, and even separate registry entries. You can't mix 32- and 64-bit modules, and some applications, like Internet Explorer, install both a 32-bit and a 64-bit version on the same computer. (IE does that so it can process web pages that use 32-bit executable controls.)
    Vista 64 has 32-bit emulation built in, using a subsystem called Windows on Windows 64, or WOW64 for short. WOW64 intercepts 32-bit application calls to the operating system and handles the 32-to-64-bit conversions and the redirection of the file and registry locations. The program files themselves contain flags that indicate whether they are 32-bit and whether the file uses the 64-bit version of the file structure. If the program is marked 32-bit, it will run in 32 bits using WOW64.
    Programs are generally installed by an installer program, which places the application files in the proper locations, writes the registry entries, and performs any other setup tasks that are needed. To install a 64-bit program, a 64-bit installer is needed. For 32-bit installers, the files and registry entries are redirected to the 32-bit locations.
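
    As an aside, a process can ask at run time whether it is a 32-bit process running under WOW64. A minimal Win32 C sketch using the documented IsWow64Process call (available on Vista; error handling kept minimal):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        BOOL wow64 = FALSE;

        /* TRUE only for a 32-bit process on 64-bit Windows; a native
           64-bit process, or any process on 32-bit Windows, gets FALSE. */
        if (IsWow64Process(GetCurrentProcess(), &wow64))
            printf(wow64 ? "running under WOW64\n" : "not under WOW64\n");
        else
            printf("IsWow64Process failed\n");

        return 0;
    }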

  • How do I reduce the bit depth of images to 1-bit within Acrobat 9?

    I am hoping a simple solution exists within Acrobat 9 for reducing the bit-depth of images to 1-bit.
    I know of two methods that both seem more like workarounds. One, edit the image using Photoshop. Two, without Photoshop, export the page as a 1-bit PNG and recreate the page in Acrobat. It seems like one of the preflight fixups should be able to get it done with the right settings. But, it's a labyrinth of unfamiliarity.

    There's no predefined 1-bit conversion in Preflight because it doesn't make sense. Preflight will not dither bitmaps, so most images will become black squares. Extreme color conversion is only intended for text/vector objects.
    If you want to try it anyway, you can create a custom Fixup if you have a  1-bit ICC profile.
    Preflight > Single Fixups
    Options menu > Create new Preflight Fixup
    Name it something like "Convert all to 1-bit BW"
    Search for "Convert colors" in the type of fixup box and add it
    Destination tab > Destination > your ICC profile for 1-bit black
    Uncheck "use destination from Output Intent"
    Keep everything else as default, though I'd suggest using "Embed as output intent for PDF/X" if you're making PDF/X documents
    Conversion Settings tab > All Objects + Any Color (except spot) + Convert to destination + Use rendering intent
    Press the + key to duplicate this step, and change the second copy to "Spot Color(s)"
    Press + again and change the third copy to "Registration color"
    Save the fixup and run it.
    In case you don't have a 1-bit  ICC profile installed, I've attached one.

  • ArrayToImage VB 16 bit data.

    I have IMAQ 3.3 (no Vision license). I can collect (via frame grabber) and display 8-bit data and save it via the ImageToArray function. Then I can read this data back and display it at a later time using the ArrayToImage function.
    However, after saving 16-bit data, I cannot read it back as 16-bit data. The ArrayToImage function always truncates it to 8-bit data! How can I get it to read 16-bit data? The image it is reading from is an array of 16-bit data.

    CWIMAQ 3.3
    The following will read data from a file, assuming it is 8-bit data. I assumed that if I changed ImageArray and B1 to 16-bit integers it would read as 16-bit data. Instead, it reads the 16-bit data, then truncates to 8 bits as it puts the data into CWIMAQ1.
    I only use this for debug purposes. We usually collect and run the data live, which works fine. However, I can save the collected data and "play it back" to check my software processing. I need to put it into CWIMAQ1 since this is the structure it gets when running "live". When I try to modify the CWIMAQ1 parameters to indicate a 16-bit image I get something like "vision license required". I can read the image in as 8-bit data and do all the 8-to-16-bit conversion myself (my fallback, which works fine). I just wanted to know if this was indeed a "licensing issue" or if there was something I was missing.
    Private Function GetNewImages(ByVal ImageFile As String)
        Dim k As Long
        Dim l As Byte
        Dim ImageArray(4096, 2050) As Byte
        Dim B1 As Byte

        CWIMAQ1.Images.RemoveAll
        CWIMAQ1.Images.Add (mlNumActiveFrames)
        Open ImageFile For Binary As #97
        For k = 1 To mlNumActiveFrames
            ' Read and discard a 20-byte header before each frame
            For l = 1 To 20
                Get #97, , B1
            Next l
            ' Read one full frame of 8-bit pixel data
            Get #97, , ImageArray
            If EOF(97) Then
                MsgBox "Error reading " & ImageFile, , " "
                Close #97   ' close the file on the error path too
                GetNewImages = False
                Exit Function
            End If
            ' Hand the frame to IMAQ for display
            CWIMAQ1.Images.Item(k).ArrayToImage ImageArray
        Next k
        Close #97
        GetNewImages = True
    End Function
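
    The fallback mentioned above (reading bytes and doing the 8-to-16-bit assembly by hand) amounts to pairing bytes; a hedged C sketch, assuming the file stores each pixel little-endian (the byte order is an assumption, and the function name is illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Combine raw byte pairs into 16-bit pixels, low byte first. */
    static void bytes_to_u16(const uint8_t *raw, uint16_t *pixels, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            pixels[i] = (uint16_t)(raw[2 * i] | (raw[2 * i + 1] << 8));
    }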

  • Yet another question about 24-bits

    I have just combined an Echo Audio Indigo IO with Adobe Audition 1.5 in hopes of having a reasonably good laptop recorder. That need is met quite well.
    There are a number of discussions about bit depth in these forums. The answers become obfuscated by internal conversions done in audio cards, software, and Windows drivers. My current interest is simply how to get 24 bits of unadulterated data from an A/D converter to a data file using Audition. This data will be used to analyze pitch, frequency, and intonation, and any changes to the data will affect the results.
    Both the Indigo IO and Audition claim 24-bit support and I am convinced that both meet these claims fully. In spite of this, there is no evidence that the card, Windows, and the software together will actually meet this claim without a 16-bit conversion along the line. What really matters is whether or not 24 bits of audio can travel from the card to the data file without any changes.
    Question 1: In Audition 1.5 for Windows under "Options/Device Properties/Wave In/Supported Formats" for the Indigo IO the highest density recording listed is 16-bit stereo at a 96K sample rate. What properties do other users find for their 24-bit cards?
    Question 2: The wave programming calls (waveInGetDevCaps) don't include capability definitions for anything beyond 16-bit depth at a 96K sample rate. Where in Windows is there evidence of true support for anything greater than 16-bit Audio?
    Question 3: What software is there that will go directly to the sound card or driver and write 24 unadulterated bits to a file?

    I had trouble setting up my 24bit/96k ADAC to record 24-bit in audition. The best way to be sure audition is getting the 24bit samples as 24 bits is to check the status bar when recording. If it says 'Recording as 24 bit' then it's all setup right. If it says 'Recording as 16 bit' then the 32-bit float driver or the 24-bit driver reported an error and audition tried to fall back on the 16 bit version and succeeded. The problem I had was that the driver didn't handle 32-bit floats so it errored out and audition recorded as 16 bits. The only solution was to select the 24-bit option under "Options/Device Properties/Wave In(or out)".
    Just a note. A true 24-bit ADC will send 24 bits to the driver and the driver itself will convert the bits to whatever format it supports. If the driver is instructed to convert to normalized floats the software will simply do something like this:
    SampleOut = SampleIn / 8388608.0;
    This doesn't add any coloration to the audio because a float is really just a 24-bit integer with 8 bits added to tell the FPU where to put the decimal point. Converting down from 24 bits of course degrades quality, and converting from 24 bits up to a true 32-bit integer would NOT add any quality to the audio. When stored in a file, 24 bits is 24 bits whether it's a float or an integer.
    >Question 2: The wave programming calls (waveInGetDevCaps) don't include capability definitions for anything beyond 16-bit depth at a 96K sample rate. Where in Windows is there evidence of true support for anything greater than 16-bit audio?
    The website showing waveInGetDevCaps function information doesn't show all the possible format tags or even the sample rates. It is only showing the most compatible forms for drivers. The WAVEFORMAT and WAVEFORMATEX structures used for the drivers are also used in the 'fmt ' chunk of RIFF WAVE files. I've written a lot of software for parallel use with audition using wave files and windows and I can say from experience that when recording 24 bits with audition you're getting the full 24 bits. If your driver truly supports 24 bit samples then audition will allow you to select the 24 bit samples option under "Options/Device Properties/Wave In(or out)" and audition will display 'Recording as 24-bits' when recording. Once recorded Audition no doubt converts ALL bit depths to its internal format which I believe is 32 (or 64) bit float, and if you don't process the file and wish to save as 24 bit integers then the samples saved will be exactly what the ADC sent to the driver.
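
    The normalization described above is lossless in both directions: a float's 24-bit significand represents every integer of magnitude up to 2^23 exactly, and dividing by a power of two only adjusts the exponent. A small C check (8388608 is 2^23, as in the reply's formula):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int lossless = 1;

        /* Round-trip a spread of 24-bit sample values through the
           normalized-float representation and back. */
        for (int32_t s = -8388608; s < 8388608; s += 4097) {
            float f = (float)s / 8388608.0f;          /* normalize to [-1, 1) */
            int32_t back = (int32_t)(f * 8388608.0f); /* exact inverse        */
            if (back != s) lossless = 0;
        }
        printf(lossless ? "round trip is exact\n" : "round trip is lossy\n");
        return 0;
    }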

  • FM for OTF to TIFF format conversion?

    Hi,
    Is there any function module for OTF to TIFF (image file) format conversion, or anything else to do it?
    Thanks
    Vaibhav

    Hi,
    Please check this link; it contains some function modules to convert OTF to other formats. Not sure about TIFF.
    [http://help.sap.com/saphelp_45b/helpdata/EN/08/e043c4c18711d2a27b00a0c943858e/content.htm]
    safel

  • Role of graphics card, processors and ram in conversions?

    Does anyone know what role, if any, a computer's graphics card, processing power, and RAM play in the quality of conversions? I don't mean cases where a computer is unable to run a program (or handle a conversion in a reasonable time period). I mean: will the same conversions (same settings, files, formats) look better if done on more powerful computers? I'm using Final Cut Studio, downconverting from HD to SD using QuickTime and converting from PAL to NTSC using Compressor.
    My specs are two 2.6 GHz processors and 4 GB of RAM,
    graphics: NVIDIA GeForce 8600M GT with 512MB of GDDR3 SDRAM and dual-link DVI

    What compressor settings do you recommend?
    The best workflow I've come up with is a QuickTime conversion from HDV PAL to SD 10-bit PAL, then a Compressor conversion from 10-bit PAL to DVCPRO 50 NTSC.
    By coincidence I decided to try the initial conversion (HDV to 10-bit) this morning in Compressor (but I didn't change the frame rate; I didn't know frame rate settings were relevant for an HD downconvert; I had only been changing them for the PAL-NTSC conversion).
    At first glance, the Compressor 10-bit conversion doesn't look better than the QuickTime conversion (due to the frame rate settings?). I don't know for sure because I haven't sent it back to Compressor for the PAL-NTSC conversion and on to DVD SP.
    One issue I'm worried about is the dimensions. QuickTime's 768 x 576 (preserve aspect ratio checked; letterbox selected) was the only dimension setting that produced a film that looked right (on the DVD; it's squished in Final Cut). I don't see that option in Compressor. I suppose I could type it in the frame size boxes.
    Anyway, I would be most grateful for answers to these questions. I've spent a few weeks on this and am past the point of frustration. I just plod along, zombie-like, trying a few different things every day, dimly hoping to hit the jackpot eventually. Actually, I've been planning on trying to downconvert using my HDV/DV deck (print to tape in HDV; then change settings to downconvert and capture as DV). Would that help?
    How do pro studios handle downconverting? What kind of hardware do they use? I'm an independent filmmaker, working on a small budget for the time being, but eventually, once the film is over and (hopefully) I have more money, I'll be willing to pay for a good pro conversion. Do you know what that costs? The film will be 80 minutes long.
    thanks again for your help.
