8-bit or high precision

Question for those who edit a lot of miniDV-
Which render setting do you use or recommend: 8-bit or high-precision YUV? Is there a big difference between the two? High precision seems kind of like polishing a turd, so to speak.

I rarely use 10-bit rendering; there are some bugs associated with high precision, and the return is seldom there.
Patrick

Similar Messages

  • FCP6 any problem using "Render in 8-bit YUV" instead of "Render 10-bit material in high-precision YUV" video processing

    I have a long and complex 1080p FCP6 project using ProRes 422. It is made up mostly of high-resolution stills and some 1280i video clips. Rendering has always been a nightmare: it takes extremely long and makes frequent mistakes which then have to be re-rendered. Just today, I discovered the option of selecting "Render in 8-bit YUV" instead of "Render 10-bit material in high-precision YUV" video processing. The rendering time is cut to a fraction, and even on a large HD monitor I can tell no difference in quality. I am getting ready to re-render the entire project in 8-bit and just wanted to check whether changing to 8-bit would pose problems or limitations that I'm not aware of. This is not a broadcast or Hollywood film thing, but it does represent my artwork and I burn it to Blu-ray, so I do want it to look good. Like I said, I can tell no difference between the 8-bit and 10-bit color depth with the naked eye. Thank you all for the help you have always given me with my many questions in the past.

    Unless you have a 10-bit monitor (rare and very expensive) you cannot see the difference, as your monitor is 8-bit.
    10-bit is useful for compositing and color grading. Otherwise, 8-bit is fine for everything else.
    x

  • Render in 8-bit YUV or Render all YUV material in high-precision YUV

    Friends,
    I'm a wedding videographer and I work with MiniDV cameras (PD150). In one particular job I will have a lot (I mean a LOT) of color correction to do. Given that, should I set my sequence to "Render in 8-bit YUV" or "Render all YUV material in high-precision YUV"?
    Thanks again!!!

    I read your commentary:
    "Selecting this option does not add quality to clips captured at 8-bit resolution when output back to video; it simply improves the quality of rendered effects that support 10-bit precision."
    So, is the 3-Way Color Corrector an effect that supports 10-bit precision? If I use "Render all YUV material in high-precision YUV", will the video look nicer?
    Thanks again

  • High precision YUV

    I shoot DV SD PAL, which is 8-bit. I put two or three filters on a clip. Will those effects look better if I render in high-precision YUV instead of 8-bit YUV? The final output is DVD. Or could high-precision YUV make things worse? Render time does not matter.

    But the Apple manual says:
    "This is the highest-quality option for processing video in Final Cut Pro. This option
    processes all 8- and 10-bit video at 32-bit floating point. In certain situations, such as
    when applying multiple filters to a single clip or compositing several clips together, a
    higher bit depth will improve the quality of the final render file even though the
    original clip has only 8 bits of color information "
    To me that means I will get a better result, or... ?

  • Is there a way to view timestamps in DIAdem with a higher precision than 100 microseconds?

    I understand DIAdem has limitations on displaying timestamps because DateTime values are defined as doubles, which results in 100 µs resolution. Is there any way to get around this? I am logging time-critical data with timestamps from an IEEE 1588 clock source, and it is necessary to have higher resolution. Perhaps I could convert each timestamp to a double before logging, but then I would have to convert it back in DIAdem somehow...
    Thanks,
    Ben

    As you said, DIAdem can only display up to 4 decimal positions on a timestamp. Timestamps in DIAdem are recorded as the number of seconds since 01/01/0000 00:00:00.0000. To achieve a higher precision, it would be necessary to use a relative timestamp. Many timestamps are defined from different references anyway, so it might be possible to import the timestamps as numeric values to maintain their precision. Converting the timestamp prior to importing into DIAdem seems like a viable method of working around the precision limit.
    Steven
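    Steven's suggestion of importing the timestamps as relative numeric values can be sketched as follows (in Java rather than in DIAdem, purely as an illustration; the class and method names are hypothetical). The point is that subtracting a common reference keeps the values small, so a double retains sub-microsecond detail that an absolute "seconds since year 0" value would lose:

```java
// Sketch: convert absolute IEEE 1588 timestamps (nanoseconds since the
// PTP epoch) to seconds relative to the first sample, stored as doubles.
// Subtracting in integer nanoseconds first avoids any precision loss;
// only the final, small relative value is converted to floating point.
public class RelativeTimestamps {
    public static double[] toRelativeSeconds(long[] ptpNanos) {
        double[] rel = new double[ptpNanos.length];
        long t0 = ptpNanos[0];  // reference: first sample
        for (int i = 0; i < ptpNanos.length; i++) {
            rel[i] = (ptpNanos[i] - t0) / 1e9;  // exact subtraction, then scale
        }
        return rel;
    }
}
```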

  • Request for info on fatal error handling and High-Precision Timing in J2SE

    Hi
    Could anyone please provide some information or useful links on fatal error handling and high-precision timing support in J2SE 5.0?
    Thanks

    Look at System.nanoTime
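    To expand on that: System.nanoTime() returns a monotonic counter meant only for measuring elapsed intervals; it is not related to wall-clock time (use System.currentTimeMillis() for that). A minimal sketch (the helper class is hypothetical):

```java
// Sketch: high-precision interval timing with System.nanoTime().
// The returned value is meaningful only as a difference between two calls.
public class NanoTimer {
    public static long timeNanos(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;  // elapsed nanoseconds
    }
}
```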

  • FYI - High precision data issue with sdo_anyinteract on 11.2.0.3 with 8307

    For anyone that may happen to be experiencing issues with SDO_ANYINTERACT on 11.2.0.3 with high-precision geodetic data, I currently have a service request in to fix it.
    The issue we have is with locating small polygons ("circles" from 0.5"-4") in the 8307 space. The metadata we have specifies a 1 mm tolerance for this data, which has worked fine since (as I remember) 10.1. Support verified it works fine up to 11.2.0.2, then is broken in 11.2.0.3.
    So if you are pulling your hair out - stop. ;-) The SR# is 3-5737847631, and the bug# (will be) 14107534.
    Bryan

    Here is the resolution to this issue...
    Oracle came back and said that what we have at that tolerance is unsupported and we were just lucky for it to have worked all these years. They are not going to fix anything because it technically isn't broken. We pointed out that the documentation is a little unclear on what exactly supports higher precision, and they noted that for future updates.
    When asked if they would entertain a feature request for a set of high-precision operators (basically the old code) in a future release, they basically said no. So for the few items where we must have higher precision, we are on our own.
    What still puzzles us is that apparently no one else is using high-precision data in lat/lon. Amazing, but I guess true.
    Anyhow, here is what we used to use (up to 11.2.0.3), which worked fine at a 1 mm tolerance:
    Where mask_geom is:
    mask_geom      :=
             sdo_geometry (2001,
                           8307,
                           sdo_point_type (x_in, y_in, 0),
                           NULL,
                           NULL);
    SELECT copathn_id
      INTO cpn
      FROM c_path_node a
    WHERE     sdo_anyinteract (a.geometry_a2, mask_geom) = 'TRUE'
           AND node_typ_d = 'IN_DUCT'
           AND ROWNUM < 2;
    Basically this finds indexed geometry and compares it to a single mask geometry (a simple point for the given x/y). Only one row is returned (in case duct openings overlapped, which is not normal).
    Since this no longer returns any rows reliably for items less than 5cm in size, here is our work-around code:
    SELECT copathn_id
      INTO cpn
      FROM (  SELECT copathn_id,
                     node_typ_d,
                       ABS (ABS (x_in) - ABS (sdo_util_plus.get_mbr_center (a.geometry_a2).sdo_point.x))
                     + ABS (ABS (y_in) - ABS (sdo_util_plus.get_mbr_center (a.geometry_a2).sdo_point.y))
                        distdiff
                FROM c_path_node a
               WHERE sdo_nn (a.geometry_a2,
                             mask_geom,
                             'distance=0.05 unit=m') = 'TRUE'
            ORDER BY distdiff)
    WHERE node_typ_d = 'IN_DUCT'
       AND ROWNUM < 2;
    Essentially we use sdo_nn to return all results (the distance usually is 0) at the 5 cm level. At first we thought just this would work; then we found that in many cases it would return multiple results, all stating a distance of 0 (not true).
    For those results we then use our own get_mbr_center function that returns the center point for each geometry, and basically compute a delta from the given x_in,y_in and that geometry.
    Then we order the results by that delta.
    The outer select then makes sure the row is of the correct type, and that we only get one result.
    This works, and is fast (actually it is quicker than the original code).
    Bryan

  • Higher precision of timestamp when writing to txt

    Hello,
    I would like to have higher precision (milliseconds) in the timestamp when saving waveforms to .txt. LabVIEW only prints the date and the time as HH:MM:SS. As I am acquiring data at a rate of 1 kHz, 1000 data values have the same time description in the .txt file.
    Note: This problem only occurs when writing to .txt; it is no problem to get higher precision in the graph or the chart.
    Any help or suggestions would be appreciated.

    Thanks so far.....
    Maybe I was not precise enough. What I am looking for is a way to easily manipulate the format of the timestamp that comes with my data and then write it to .txt. I already used the "Format Date/Time String" VI to get the time including the milliseconds, extracted the data from the waveforms, and then joined the time information with that data afterwards. But I thought there would be a more elegant way: if I can extract the ms part from the timestamp, it must have been in it all along, right? ;-) So why can't I tell LabVIEW to also display the ms part when using the "write waveforms to .txt" VI? I attached a .txt file with a short excerpt of data, which should visualise the problem.
    Regards
    Message Edited by Marauder on 03-10-2006 03:20 PM
    Attachments:
    data_with_same_timestamp.txt ‏10 KB
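    The underlying point generalises beyond LabVIEW: the milliseconds are already stored in the timestamp, and the format string alone decides whether they are displayed. The same idea sketched in Java (the class name is hypothetical):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Sketch: the timestamp value carries millisecond precision; adding "SSS"
// to the format pattern is all that is needed to make it visible.
public class MillisecondStamp {
    public static String format(long epochMillis) {
        SimpleDateFormat f = new SimpleDateFormat("HH:mm:ss.SSS");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));  // fixed zone for reproducibility
        return f.format(new Date(epochMillis));
    }
}
```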

  • Render in high precision YUV

    Hello,
    My FCP (version 6.03) project contains several scenes. Each scene, which is made up of SD PAL clips, resides in its own sequence. I’m rendering each sequence in high precision YUV and nesting all of the sequences together in a new sequence. This parent sequence will be exported to Compressor. Should this parent sequence be rendered in high precision YUV too?
    Thanks in advance

    I'd make them the same...
    Ya know, you don't HAVE to nest these sequences to a master. You can copy and paste the clips from the various scene sequences too... That way you know they are using the renders from the earlier sequences whose settings are exactly the same all the way down the line. But match these sequences exactly as far as all settings go.
    Jerry

  • High precision YUV rendering

    This may seem excessive...
    I have a long project, and I have decided to divide it into smaller video clips for the final DVD, so I created a new sequence for every new video clip (a portion of the previous bigger one). The original was rendered in high-precision YUV, and I forgot to modify the settings in the new sequences before pasting. I pasted, then modified the settings, and then re-rendered... is it OK anyway? I noticed that if you paste after changing the settings, no rendering is required. I just want to be sure that the final result is the same.
    Thanks

    Though if any changes are made to the sequence, it would kill the render, correct?
    I am just seeing how exporting this with my color correction is taking 7 hours for my system to do. I'd love to know if there is some secret way to have rendered files saved so that it doesn't take 7 hours every time, in case I have to make a change.
    -p

  • High Precision Voltage Source to Calibrate 24 bit ADC

    I am trying to find a cost-effective way to calibrate a 24-bit ADC with a voltage source.
    The ADC has 3 differential inputs. Its ranges are 40, 20, 10, 5, 2, 1, and 0.5 Vpp.
    It natively samples at 32 kHz, but I will be taking a 1-second sample as a reading for the calibration.
    The source must be able to produce half of full scale for each range, i.e. 20 V down to 0.25 V differential.
    The source must be able to produce the signal to ±0.01% (i.e. 20 V ±2 mV down to 0.25 V ±25 µV).
    I was looking at the PXI-4132; I would have to buy the PXI chassis and controller.
    Is there a cheaper or better solution for this task? I also looked at Keithley ($6000) and Agilent ($5750) solutions.
    An SMU appears to be more feature-rich than what is called for in this task.
    This is my first project please be critical of my post so that I can improve.

    Two ways: get a reference source (have a look at the secondary market like this  ; I think you will need a recalibration anyway),
    or get a stable source and a reference voltmeter.
    Also have a look at how others have solved this problem, like Jim Williams from Linear; see
    http://cds.linear.com/docs/Design%20Note/dsol11.pdf
    http://cds.linear.com/docs/Application%20Note/an86f.pdf
    However, as Lynn already pointed out, 16 bits seems to be all you need.
    And: usually an ADC used ratiometrically might go up to 22 bits; not ratiometric, it will need a reference better than 22 bits (OK, 16 bits here, that can be done). So if your ADC works ratiometrically in your application, all you need is a stable voltage divider, since stable dividers are easier to come by than reference sources...
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'
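    As a quick sanity check of the ±0.01% requirement in the question above (a sketch; the absolute tolerance is simply voltage × 1e-4, which gives ±2 mV at the 20 V point and ±25 µV at the 0.25 V point):

```java
// Sketch: absolute tolerance implied by a ±0.01% accuracy requirement
// at each calibration point (half of full scale for each range).
public class ToleranceCheck {
    static final double REL_TOL = 1e-4;  // ±0.01% expressed as a fraction

    public static double absTolerance(double volts) {
        return volts * REL_TOL;
    }

    public static void main(String[] args) {
        // Half of full scale for each range: 40 Vpp -> 20 V ... 0.5 Vpp -> 0.25 V
        double[] points = {20, 10, 5, 2.5, 1, 0.5, 0.25};
        for (double v : points) {
            System.out.printf("%.2f V -> ±%.1f uV%n", v, absTolerance(v) * 1e6);
        }
    }
}
```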

  • Playing 20 bit or higher audio from CD directly

    Will my revision A iMac G5 play 20-bit audio CDs at 20 bits, or does it downgrade them to 16-bit? I use the fiber-optic audio out to my receiver and have good speakers. Can iTunes import 20-bit audio files as 20-bit? What do you think of Apple Lossless at 700 kbps compared to 1.6 Mbps of uncompressed audio, or even 20-bit? It is as if computers have drifted away from audio quality to save disk space, but hard drives are getting bigger and bigger, so this shouldn't be an issue anymore! Internet speeds are up as well; mine is 15 Mbps and can download large files very fast, but I can't find high-quality audio on the internet that is near 20-bit.

    Hi Paul Riley-
    I am a conspiracy theorist nut-case for sure, but I have been convinced that MP3 technology was a corporate scam from day one intended to dumb-down the ears, but don't get me started (;>)
    These old ears still prefer the dynamic range of reel to reel tape.
    These type of things you ask are driven by consumer demand. I think it is old dudes like myself that will eventually drive the market towards something a little more "pure".
    Luck-
    -DaddyPaycheck

  • Windows 7 Ultimate 64-bit Random High CPU Usage 80%-100% @ idle

    I am running Windows 7 Ultimate, and my specs are an AMD Phenom 9550 Quad-Core Processor at 2.20 GHz with 6.00 GB RAM. The computer randomly spikes up to 80%+ CPU usage, usually higher, and runs really slowly. I've tried checking for spyware and the like, and nothing is found. This happens at idle and during normal activity. Restarting usually solves the issue, but sometimes it doesn't. The CPU isn't running hot at all; the drivers installed are those Windows 7 installed at the clean install. I used Process Explorer and found out it is:
    CPU
    15.6-20.10 Hardware Interrupts
    52.0-65.00 Deferred Procedure Calls
    This is what is making my usage so high; it even makes my idle 25%, and this is when nothing is running. Normal idle is 0%-10% max, but typical is about 6%, and the computer was perfect with Vista. I upgraded about 4 days ago.
    Here is a screen shot.
    http://i207.photobucket.com/albums/bb171/mike96z/ProcessExplorer.jpg
    Any help is appreciated.
    Thanks
    Mike

    Hi Mike,
    Yes, you can try Mark's suggestion, as I noticed a similar issue regarding SPC was resolved by the hotfix.
    Otherwise, you can also try the following steps to narrow down the root cause:
    Test in Safe Mode:
    Restart the computer and start pressing the F8 key on the keyboard. On a computer that is configured to boot multiple operating systems, press the F8 key when you see the boot menu. When the Windows Advanced Options menu appears, select Safe Mode, and then press Enter. Log onto Windows by using the Administrator account or any user account with Administrator privileges.
    If the issue disappears in Safe Mode, narrow down the cause with a clean boot:
    Clean Boot
    =============
    This method will help us determine if this issue is caused by a loading program or service.
    1. Click "Start",and type "msconfig" (without the quotation marks) in the open box to start the System Configuration Utility.
    2. Click the "Services" tab, check the "Hide All Microsoft Services" box and click "Disable All" (if it is not gray).
    3. Click the "Startup" tab, click "Disable All" and click "OK".
    4. Click "OK" to restart your computer to Selective Startup environment.
    5. When the "System Configuration Utility" window appears, please check the "Don't show this message or launch the System Configuration Utility when Windows starts" box and click OK.
    6. Check whether or not the issue still appears in this environment. 
     

  • Using FF 12 32-bit in Windows 7 Pro 64-bit causes high memory usage

    AVG gave an alert message saying "detected high memory usage.
    256 MB".
    It suggested closing and reopening the browser.
    I also noticed a very high page-fault count.
    I didn't do anything, just continued to surf as usual.
    I have 4 GB of RAM.
    Do I need to be concerned about this?
    Is there a solution to fix this?
    Thanks.

    256 MB of RAM isn't really "high" (depending on what you are doing). To put it in perspective, you aren't even using a 16th of your available RAM. AVG is a very poor system optimizer and anti-virus, and often reports false positives.
    You can try doing things like uninstalling add-ons you don't use: [[Disable or remove Add-ons]]
    Update your graphics Driver: [[Upgrade your graphics drivers to use hardware acceleration and WebGL]]
    Install all Windows Updates, and then test again.
    How many tabs and what were you doing when you received this alert? Did Firefox continue to run normally?
    You can view at any time how much memory Firefox is using by typing about:memory into the address bar.

  • Is it possible to have a dial which allows high precision fixed values?

    OK, I need to have two dials.  Both have only specific values allowed.
    One allows values in a certain range, with increments of 0.25. That is, I want to allow 7.25 and 7.5, but not, for example, 7.3.
    I did this successfully by choosing an increment of 0.25 in the Data Entry tab of the properties, and setting it to coerce.
    My second dial needs values in a certain range (0 to 359.912109,  if you must know ), with an increment of
    0.087890625  (exactly).  When I enter this, it gets rounded off to 0.0879, which will not give me the values that I want.
    What can I do?
    Thank you.
    B.

    BPerlman wrote:
    1.  I can't choose fixed point representation when I have a dial or knob control.  the "FXP" option is disabled.
    2.  The help tells me to set the range and desired delta, but they seem to be read only!
    3.  Why do you suggest a range of 0..1 and multiply by 360, rather than a range of 0..360 in the first place?
    Interesting, I wasn't aware of this limitation. I guess it only works for simple controls. (Here's an idea!)
    You cannot set the range and delta; they are the results of the "word length" and "integer word length". These are tightly limited by the underlying binary representation.
    In this particular case, you could choose a word length of 12 bits and an integer word length of zero bits, giving you an increment of exactly 1/4096. (The display is unfortunately rounded to four decimal digits and does not show the full resolution.) (Here's another idea!)
    LabVIEW Champion. Do more with less code and in less time.
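    The arithmetic behind that suggestion, sketched in Java (the class is hypothetical): with zero integer bits and a 12-bit word, the fixed-point increment is 2^-12 = 1/4096, and scaling a 0..1 dial value by 360 yields steps of exactly 360/4096 = 0.087890625 degrees, which is precisely the increment the original poster asked for.

```java
// Sketch: smallest step of an unsigned fixed-point number with the given
// number of fractional bits (word length minus integer word length).
public class FxpIncrement {
    public static double increment(int fractionalBits) {
        // 1 / 2^fractionalBits, computed exactly via an integer shift
        return 1.0 / (1L << fractionalBits);
    }

    public static void main(String[] args) {
        double step = increment(12);   // 1/4096
        double degrees = 360 * step;   // exact in binary floating point
        System.out.println(step + " -> " + degrees);
    }
}
```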
