32 bit Floating Point

Hello,
Running FCP 5.1
Having audio sync issues and was double-checking my settings.
Although the sequence presets are at 16-bit, they are showing up in the browser as 32-bit Floating Point.
Any thoughts?
I generally capture in 30-minute increments now and have actually always had this issue, in both FCP 4.5 and 5.1.
All settings are where they should be,
although I do notice that, obviously, when the device is off the audio output defaults to 'Default' rather than FireWire DV.
thanks
iMac intel Mac OS X (10.4.8)

Some more details, please. What hardware device are you sourcing the audio clips from? The likely culprit is your capture settings. What preset are you using? Check Audio/Video Settings > Capture Presets and see whether the preset you've selected records audio as 32-bit; it will say so in the right column after you've selected your preset.
If it says 32-bit there, click Edit to get the Capture Preset Editor. Under QuickTime Audio Settings, the Format field should give you a selection of sample rates and possibly alternate bit depths. If your only choice is 32-bit (as it is for me when I capture audio via my RME, 32-bit Integer in my case), then you'd be well served by bringing those files into Peak or QuickTime and saving them as 16-bit Integer files to match your sequence settings.
If you've imported these files into FCP from an audio editor that can create 32-bit floating point audio files, e.g. Kyma, Sequoia, Nuendo, etc., then the same advice applies. The 32-bit files are much larger than they need to be and may upset the apple cart (heh, pun intended) when pulled into a sequence with different settings. More CPU overhead, for sure.
Let us know what you find.
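For readers who would rather do the 32-bit float to 16-bit integer conversion programmatically than in Peak or QuickTime, here is a minimal Java sketch of the core sample math only. The normalized sample values and the clamping behavior are illustrative assumptions, not FCP's actual internals, and a real tool would also have to handle the file container.

```java
import java.util.Arrays;

public class FloatTo16Bit {
    // Convert one normalized float sample (nominally -1.0..1.0) to 16-bit PCM.
    static short toPcm16(float sample) {
        // Clamp first: 32-bit float audio can legally exceed full scale.
        float clamped = Math.max(-1.0f, Math.min(1.0f, sample));
        return (short) Math.round(clamped * 32767.0f);
    }

    public static void main(String[] args) {
        // Hypothetical sample values; 1.2f stands in for an over-range peak.
        float[] floatSamples = { 0.0f, 0.5f, -1.0f, 1.2f };
        short[] pcm = new short[floatSamples.length];
        for (int i = 0; i < floatSamples.length; i++) {
            pcm[i] = toPcm16(floatSamples[i]);
        }
        System.out.println(Arrays.toString(pcm));   // [0, 16384, -32767, 32767]
    }
}
```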

Similar Messages

  • Sequence audio format stuck on 32-bit Floating Point

    Hi,
    The audio format for all my sequences is set to 32-bit Floating Point when I check them in the browser's Aud Format field. When I manually change the format in the sequence settings, nothing changes in the browser, and choosing a new sequence setup doesn't seem to help. What I don't understand is that there isn't even an option for 32-bit Floating Point in Audio Settings. Anyone else experience this?
    I am using v6.0.6 on osx 10.5.8
    Thanks,
    Tom

    16 bits per channel across a stereo pair does total 32 bits, but that isn't what the browser is reporting here. FCP mixes audio internally at 32-bit floating point, and the Aud Format column shows that processing depth regardless of your preset. Floating point means a sample can hold a non-integer value, which keeps rounding error low during mixing.

  • Photoshop CS3 vs CS2 32 bit floating point tiff loading

    Hi,
    The application I'm developing exports 3-channel 32-bit floating point TIFFs.
    The exported files could be loaded into PS CS2, but PS CS3 can't load them anymore; all I get is a message box with this message:
    "Could not complete your request because of a problem parsing the TIFF file."
    I've uploaded the file here, so you guys can have a look at: www.thepixelmachine.com/dispmap.tiff

    Where should I have posted this thread instead? I suppose people here write plugins by programming them (right?), so they're supposed to be programmers, not regular PS users. I thought a programmer would be much more familiar with programming-related issues, such as C++ routines for saving TIFF files. Correct me if I'm wrong.
    The people who tested the file have Photoshop CS3 and didn't manage to open it, they even sent me screenshots with the error message box. In CS2 however, the file loads perfectly. Also, the single strip version opened just fine in CS3.
    You may close this thread, the problem was solved and more than that it looks like I haven't posted it in the right forum.
    Thanks.

  • 128-bit floating point calculations

    I'm looking to buy a used SPARC. Which model is the oldest that provides
    C or Fortran 128-bit floating point calculations? Does every 64-bit CPU
    have 128-bit quad real numbers?
    Ron

    The AMD 64-bit processor does not support 128-bit floating point numbers. I need a SPARC processor that does.

    And yet your question has just what to do with Java? Why did you create a userid and post this question in the Java forums?

    A response of "Does every person have ten fingers?" shows me that you don't know much about writing computer programs that have more than 16 significant decimal digits.

    I wouldn't infer that. But now that you've proven to be a whiny little snot, I'm sure everyone is just going to want to help you here.

  • Help me change 32-bit floating point into 16-bit

    When I open a new project, the browser gives the following information:
    Audio format: 32-bit floating point.
    Even when I change the sequence settings (under Sequence > Settings, or under Final Cut Pro > Audio/Video Settings > Sequence Presets > Edit) to 16-bit, the 32-bit floating point in the browser stays the same.
    I would love to change this into 16 bit. Can anybody help me?
    Thnx

    You can't change it. FCP is 32-bit float internally. What are you trying to do by changing it?

  • Using iMovie footage in FCP: 32-bit floating point audio?

    I'd like to use iMovie 7 to capture my DV footage because I like its cataloging and skimming features, and the way it splits up clips based on DV stops. I want to do my editing in Final Cut Pro 5. Unfortunately, I've found that the audio depth in the clips I've captured with iMovie is 32-bit floating point, rather than 16-bit integer, and FCP has to render the audio before it can play back. Strange, since the FCP browser says that the sequence I dropped the iMovie footage into IS 32-bit. Any ideas on how to get FCP to playback this bit depth without rendering, and without having to convert my footage?

    Do a search here and you will find all kinds of posts on this very subject.
    The main problems with using iMovie to capture for FCP are:
    1. iMovie and FCP capture in very different ways. iMovie captures footage as a DV stream, which is problematic for editing in FCP.
    2. iMovie captures will not give you the timecode from your tape. That may or may not matter to you, unless you ever need to recapture and reconnect the footage.
    In short, if you are editing in FCP... capture with FCP. If you want to split the footage based on tape start/stop, use the Start Stop detect function after you have captured.
    rh

  • 32 bit floating point ... SLOW...

    Hi,
    I ran a little test because i found Motion took too much time exporting with 32 bit floating point.
    I made a single-layer animated text project in Motion on my Quad (2.5 GB RAM).
    When I exported 32-bit floating point QT Animation from Motion, it was very, very slow and the CPUs were running at 10 to 15%.
    When I export 8-bit, it is much faster, but the CPUs run at about 20%.
    BUT
    In FCP, when I render the 8-bit Motion project and its .mov (from 8-bit), or the 32-bit Motion project and its .mov (from 32-bit), they all render pretty fast...
    8 bit Motion prj 45% CPU
    .mov from Motion 8 bit 30% CPU
    32 bit Motion prj 45% CPU
    .mov from Motion 32 bit 60% CPU
    Why that much difference?
    I don't understand why the CPUs run higher with a .mov (QT Animation) that was created in 32-bit floating point.
    I thought that once it had been created (self-contained), the bit depth wouldn't matter...
    thanks

    32-bit floating point refers to how the project will be rendered; it has nothing to do with the exported format itself. You're OK ... just edit.
    32-bit floating point allows audio calculations, such as fader levels and effects processing, to be performed at very high resolution with a minimum of error, which preserves the quality of your digital audio.
    Jerry
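    As a toy illustration of the point about high-resolution processing, here is a hypothetical fader round trip (attenuate a sample, then restore it) showing that 16-bit integer math throws fractional detail away while 32-bit float keeps it. The sample value and gain factor are invented for the example.

```java
public class MixHeadroom {
    public static void main(String[] args) {
        short sample = 100;   // a hypothetical quiet 16-bit sample value

        // Attenuate by a factor of 16 and restore it, staying in
        // 16-bit integers: the fractional part is truncated away.
        short attenuated  = (short) (sample / 16);      // 6 (6.25 truncated)
        short restoredInt = (short) (attenuated * 16);  // 96, detail lost

        // The same round trip in 32-bit float is exact.
        float f             = sample / 16.0f;  // 6.25, fraction preserved
        float restoredFloat = f * 16.0f;       // exactly 100.0

        System.out.println(restoredInt);    // 96
        System.out.println(restoredFloat);  // 100.0
    }
}
```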

  • 16 bit integer vs 32 bit floating point

    What is the difference between these two settings?
    My question stems from a problem I have importing files from different networked servers. I put FCP files (NTSC DV, self-contained movies) onto the server with 16-bit settings, but when I pull the same file off the server and import it into my FCP, the setting shows up as 32-bit floating point, forcing me to render the audio.
    This format difference causes stuttering during playback in the viewer, and is an inconvenience when dealing with tight deadlines (something that needs to be done in 5 minutes).
    Any thoughts would be helpful.

    It's not quite that simple.
    32 bit floating point numbers have essentially an 8 bit exponent and 24 bit mantissa.  You could imagine that the exponent isn't particularly significant in values that generally range from 0.0 to 1.0, so you have 24 bits of precision (color information) essentially.
    At 16-bit float, I'm throwing out half the color information, but I'd still have vastly more color information than 16-bit integer?
    Not really.  But it's not a trivial comparison.
    I don't know the layout of the 24 bit format you mentioned, but a 16 bit half-float value has 11 bits of precision.  Photoshop's 16 bits/color mode has 15 bits of precision.
    The way integers are manipulated vs. floating point differs during image editing, with consistent retention of precision being a plus of the floating point format when manipulating colors of any brightness.  Essentially this means very little chance of introducing posterization from extreme operations in the workflow.  If your images are substantially dark, you might actually have more precision in a half-float, and if your images are light you might have more precision in 16 bits/channel integers.
    I'd be concerned over what is meant by "lossy" compression.  Can you see the compression artifacts?
    -Noel
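    To make the precision figures above concrete, here is a small Java check of the significand widths: a 32-bit float carries 24 significant bits and a 64-bit double carries 53. This is a general IEEE 754 property, not specific to Photoshop.

```java
public class FloatPrecision {
    public static void main(String[] args) {
        // A 32-bit float has a 24-bit significand (23 stored bits plus
        // one implicit bit), so consecutive integers are representable
        // only up to 2^24 = 16,777,216. Beyond that, adding 1 is lost:
        float f = 16_777_216f;
        System.out.println(f + 1f == f);   // true

        // A 64-bit double has a 53-bit significand, so the same sum
        // is still exact:
        double d = 16_777_216d;
        System.out.println(d + 1d == d);   // false
    }
}
```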

  • 32-bit floating point HDR support for LR4.1 RC2

    Folks,
    In the event you haven't discovered it, LR4.1 RC2 has added support for importing and adjusting a 32-bit floating point TIFF file, which means LR4 can tonemap an HDR image. To use this, you'll need to use Photoshop's HDR Pro to put together the multiple exposed images, then set the options to make a 32-bit HDR image in HDR Pro and save that as a TIFF file. Then import the TIFF into LR4.1 RC2 for toning. Note: nothing you do to the HDR in HDR Pro will impact the 32-bit FP TIFF. If you do some initial adjustments of the original raw files in LR 4.1, I'm pretty sure most of the toning in LR4.1 on the raw files is ignored, but white balance works (I haven't tested spot healing and lens corrections yet).
    Note, to do the HDR process, the raw images will end up being demosaiced and the saved HDR TIFF will be a linear ProPhoto RGB image.
    Try it...the ability to use LR (and eventually ACR; it's not hooked up in the ACR 7 beta yet) to tone HDR images is actually pretty impressive. Also note that you don't really need to feed HDR Pro a ton of multiple exposures: 2 (normal and under, to preserve highlight detail) or 3-5, depending on the scene contrast range, is all you need. More isn't really better (unless you really need to shoot a very high dynamic range scene...).

    I will try that in the morning, but when checking the properties today it did say 16-bit for each clip. I have gotten around this by taking each clip offline, opening it in STP, applying the effects, doing a Save As, and then reconnecting each clip with the new file. It seems to work with no sync issues; it just doesn't work as easily as it should. Ideally I should be able to send each section to an STP script, let it do its work, and get a working result back.
    I sure hope to find out exactly what is going on.
    K

  • 128-bit floating point numbers on new AMD quad-core Barcelona?

    There's quite a lot of buzz over at Slashdot about the new AMD quad core chips, announced yesterday:
    http://hardware.slashdot.org/article.pl?sid=07/02/10/0554208
    Much of the excitement is over the "new vector math unit referred to as SSE128", which is integrated into each [?!?] core; Tom Yager, of Infoworld, talks about it here:
    Quad-core Opteron? Nope. Barcelona is the completely redesigned x86, and it’s brilliant
    Now here's my question - does anyone know what the inputs and the outputs of this coprocessor look like? Can it perform arithmetic [or, God forbid, trigonometric] operations [in hardware] on 128-bit quad precision floats? And, if so, will LabVIEW be adding support for it? [Compare here versus here.]
    I found a little bit of marketing-speak blather at AMD about "SSE 128" in this old PDF Powerpoint-ish presentation, from June of 2006:
    http://www.amd.com/us-en/assets/content_type/DownloadableAssets/PhilHesterAMDAnalystDayV2.pdf
    WARNING: PDF DOCUMENT
    Page 13: "Dual 128-bit SSE dataflow, Dual 128-bit loads per cycle"
    Page 14: "128-bit SSE and 128-bit Loads, 128b FADD, 128 bit FMUL, 128b SSE, 128b SSE"
    etc etc etc
    While it's largely just gibberish to me, "FADD" looks like what might be a "floating point adder", and "FMUL" could be a "floating point multiplier", and God forbid that the two "SSE" units might be capable of computing some 128-bit cosines. But I don't know whether that old paper is even applicable to the chip that was released yesterday, and I'm just guessing as to what these things might mean anyway.
    Other than that, though, AMD's main website is strangely quiet about the Barcelona announcement. [Memo to AMD marketing - if you've just released the greatest thing since sliced bread, then you need to publicize the fact that you've just released the greatest thing since sliced bread...]

    I posted a query over at the AMD forums, and here's what I was told.
    I had hoped that e.g. "128b FADD" would be able to do something like the following:
    /* "quad" is a hypothetical 128-bit quad precision  */
    /* floating point number, similar to "long double"  */
    /* in recent versions of C++:                       */
    quad x, y, z;
    x = 1.000000000000000000000000000001;
    y = 1.000000000000000000000000000001;
    /* the hope was that "128b FADD" could perform the  */
    /* following 128-bit addition in hardware:          */
    z = x + y;
    However, the answer I'm getting is that "128b FADD" is just a set of two 64-bit adders running in parallel, which are capable of adding two vectors of 64-bit doubles more or less simultaneously:
    double x[2], y[2], z[2];
    x[0] = 1.000000000000000000000000000001;
    y[0] = 1.000000000000000000000000000001;
    x[1] = 2.000000000000000000000000000222;
    y[1] = 2.000000000000000000000000000222;
    /* Apparently the coordinates of the two "vectors" x & y       */
    /* can be sent to "128b FADD" in parallel, and the following   */
    /* two summations can be computed more or less simultaneously: */
    z[0] = x[0] + y[0];
    z[1] = x[1] + y[1];
    Thus e.g. "128b FADD", working in concert with "128b FMUL", will be able to [more or less] halve the amount of time it takes to compute a dot product of vectors whose coordinates are 64-bit doubles.
    So this "128-bit" circuitry is great if you're doing lots of linear algebra with 64-bit doubles, but it doesn't appear to offer anything in the way of greater precision for people who are interested in precision-sensitive calculations.
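    A quick Java sketch of the precision gap being discussed: the 30-odd-digit literals from the snippet above silently round to 1.0 in a 64-bit double, and without hardware quad precision the usual workaround is software arbitrary precision such as BigDecimal, traded against speed.

```java
import java.math.BigDecimal;

public class QuadPrecisionDemo {
    public static void main(String[] args) {
        // A 64-bit double carries only about 15-17 significant decimal
        // digits, so this long literal silently rounds to exactly 1.0:
        double x = 1.000000000000000000000000000001;
        System.out.println(x == 1.0);   // true

        // Without hardware quad precision, the usual workaround is
        // software arbitrary precision, which keeps every digit at a
        // substantial speed cost:
        BigDecimal bx = new BigDecimal("1.000000000000000000000000000001");
        System.out.println(bx.add(bx));   // 2.000000000000000000000000000002
    }
}
```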
    By the way, if you're at all interested in questions of precision sensitivity & round-off error, I'd highly recommend Prof Kahan's page at Cal-Berzerkeley:
    http://www.cs.berkeley.edu/~wkahan/
    PDF DOCUMENT: How JAVA's Floating-Point Hurts Everyone Everywhere
    http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf
    PDF DOCUMENT: Matlab's Loss is Nobody's Gain
    http://www.cs.berkeley.edu/~wkahan/MxMulEps.pdf

  • Add mode and 32 bit Floating point

    Hi.
    I have an AE comp containing 10 layers rendered off to EXR from 3DS Max using VRay.  I have a "cold" scene and have composited several hot VRay ObjectSelect render elements and VRayAtmosphere render elements.  Every layer uses "Add" blend mode.  All are OK except one layer (an Atmosphere element containing smoke).  The smoke is very dark (virtually black) when rendered in Max.  It is layered in AE over a glowing hot area.  When I use "Add" blend mode the smoke looks very thin/transparent and very pale.
    Q.  What effects should I use to get the black smoke to appear black (as it would look in the Max render)?  I would like a degree of control over the transparency so that I can let part of the glow behind it still show through.
    Many thanks,
    T  

    Thanks Mylenium.
    As "Add" seems to be the only blend mode that "works" in this instance, i.e. by placing the information on top of the other layers, can I get the required control over the smoke by using other effects? I have played around with levels/exposure/gamma etc. but not found anything that worked. Is there something that will do this?
    Many thanks,
    T
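    For what it's worth, the washed-out smoke follows directly from the arithmetic of Add in 32-bit float: the result is simply the sum of the two pixel values, so a near-black layer contributes almost nothing over a bright background. The pixel values below are made up purely for illustration.

```java
public class AddBlend {
    public static void main(String[] args) {
        // In 32-bit float compositing, Add literally sums the two pixel
        // values, and the result is allowed to exceed 1.0. The values
        // below are hypothetical, chosen only to illustrate the effect.
        float glow  = 1.5f;   // an over-bright pixel from the hot render pass
        float smoke = 0.02f;  // a near-black pixel from the smoke element

        // The sum is dominated by the bright background (~1.52), so the
        // dark smoke barely changes the image, which is why it reads
        // as thin and pale instead of black.
        System.out.println(glow + smoke);
    }
}
```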

  • Floating Point Representations on SPARC (64-bit architecture)

    Hi Reader,
    I got hold of Sun's "Numerical Computation Guide" (2005) while looking for floating point representations on 64-bit architectures. It gives nice illustrations of the single and double formats and the solution for endianness with two 32-bit words, but it doesn't say how things work on 64-bit SPARC or 64-bit x86.
    I might be wrong here, but with all integers and pointers being 64 bits long, do we still need to break floating point numbers up and store them at lower/higher order addresses?
    Or is it as simple as having a double format whose bit pattern is consistent across all the architectures (Intel, SPARC, IBM PowerPC, AMD), with a 1 + 11 + 52 bit layout?
    I have tried hard to get hold of documentation that explains the floating point representation on a 64-bit architecture. Any suggestion would be very helpful.
    Thanks for reading. Hope you have something useful to write back.
    Regards,
    Regmee

    The representation of floating-point numbers is specified by IEEE standard 754. This standard contains the specifications for single-precision (32-bit) and double-precision (64-bit) floating-point numbers (there is also a quad-precision (128-bit) format). OpenSPARC T1 supports both single- and double-precision numbers, and can support quad-precision numbers through emulation (not in hardware). The fact that this is a 64-bit machine does not affect how the numbers are stored in memory.
    The only thing that affects how the numbers are stored in memory is endianness. The SPARC architecture is big-endian, while x86 is little-endian. But a double-precision floating-point number in a SPARC register looks the same as a double-precision floating-point number in an x86 register.
    formalGuy
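    The 1 + 11 + 52 layout can be checked directly from Java, which exposes the IEEE 754 binary64 encoding via Double.doubleToLongBits:

```java
public class DoubleBits {
    public static void main(String[] args) {
        // IEEE 754 binary64: 1 sign bit + 11 exponent bits + 52 fraction bits.
        long bits = Double.doubleToLongBits(1.0);

        long sign     = bits >>> 63;               // 1 bit
        long exponent = (bits >>> 52) & 0x7FFL;    // 11 bits, biased by 1023
        long fraction = bits & 0xFFFFFFFFFFFFFL;   // 52 bits

        // 1.0 encodes the same way on every IEEE 754 machine; endianness
        // only changes the byte order in memory, not the bit pattern.
        System.out.printf("sign=%d exponent=%d fraction=%d%n",
                          sign, exponent, fraction);   // sign=0 exponent=1023 fraction=0
    }
}
```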

  • How can floating point division be faster than integer division?

    Hello,
    I don't know if this is a Java quirk, or if I am doing something wrong. Check out this code:
    public class TestApp {
         public static void main(String args[]) {
              long lngOldTime;
              long lngNewTime;
              long lngTimeDiff;
              int Tmp;
              lngOldTime = System.currentTimeMillis();
              for (int A = 1; A <= 20000; A++)
                   for (int B = 1; B <= 20000; B++)
                        Tmp = A / B;
              lngNewTime = System.currentTimeMillis();
              lngTimeDiff = lngNewTime - lngOldTime;
              System.out.println(lngTimeDiff);
         }
    }
    It reports that the division operations took 18,116 milliseconds.
    Now check out this code (integers replaced with doubles):
    public class TestApp {
         public static void main(String args[]) {
              long lngOldTime;
              long lngNewTime;
              long lngTimeDiff;
              double Tmp;
              lngOldTime = System.currentTimeMillis();
              for (double A = 1; A <= 20000; A++)
                   for (double B = 1; B <= 20000; B++)
                        Tmp = A / B;
              lngNewTime = System.currentTimeMillis();
              lngTimeDiff = lngNewTime - lngOldTime;
              System.out.println(lngTimeDiff);
         }
    }
    It runs in 11,276 milliseconds.
    How is it that the second code snippet could be so much faster than the first? I am using jdk1.4.2_04
    Thanks in advance!

    I'm afraid you missed several key points. I only used longs for measuring the time (System.currentTimeMillis returns a long).

    Sorry, you are correct; I did miss that. However, even if I had, double is also a 64-bit data type, so technically that would have been a fairer test. The fact that 64-bit floating point divisions are faster than 32-bit integer divisions is what confuses me. Oh, just in case you're interested, using floats in that same snippet takes only 7,471 milliseconds to execute!

    Then the other explanation is that the HotSpot compiler is optimizing the floating point code to use the CPU's floating point instructions, but it is not optimizing the integer divide in the same way.
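    As a hedged follow-up sketch: the original benchmark never reads Tmp, so the JIT is free to treat the division as dead code, which muddies the comparison. A version that accumulates the quotients and uses System.nanoTime gives a somewhat fairer measurement (timings will still vary by JVM and CPU):

```java
public class DivisionBench {
    // Sum all quotients so the JIT cannot discard the divisions.
    static long benchInt(int n) {
        long t0 = System.nanoTime();
        long sink = 0;
        for (int a = 1; a <= n; a++)
            for (int b = 1; b <= n; b++)
                sink += a / b;              // 32-bit integer division
        long elapsed = System.nanoTime() - t0;
        System.out.println("int sink=" + sink);
        return elapsed;
    }

    static long benchDouble(int n) {
        long t0 = System.nanoTime();
        double sink = 0;
        for (double a = 1; a <= n; a++)
            for (double b = 1; b <= n; b++)
                sink += a / b;              // 64-bit floating point division
        long elapsed = System.nanoTime() - t0;
        System.out.println("double sink=" + sink);
        return elapsed;
    }

    public static void main(String[] args) {
        int n = 2_000;   // smaller than the original 20,000 for a quick run
        System.out.println("int ns:    " + benchInt(n));
        System.out.println("double ns: " + benchDouble(n));
    }
}
```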

  • 32 Bit Float

    Is Logic 7 Pro 32 Bit Float? Or can it import 32 Bit Float Files?
    If Logic is 32 Bit Float Is there a setting in Logic Pro 7 to record in 32 Bit Float?
    If So How? Thanks.
    iNtel Dual Core Mac 17"   Mac OS X (10.4.8)   2 Gigs Of Ram

    Logic's audio engine is 32-bit floating point, and freeze files are 32-bit floating point. However, Logic does not record or support 32-bit floating point files.

  • Invalid Floating Point Error

    I have one Captivate 3 project published as a Stand Alone
    project with Flash 8 selected. There are 36 slides, no audio, no
    eLearning, SWF size and quality are high.
    One person who runs this gets an "Invalid Floating Point"
    error when he tries to run it the first time. He is running Windows
    XP SP2, Firefox 3.0.4. and Flash Player 10.0.12.36. Other Captivate
    projects I've created run fine for him. This one sometimes runs
    after the first Error message.
    Any thoughts on the cause and fix?
    Thanks,
    Janet

    iMediaTouch probably doesn't support Floating Point formats - it certainly doesn't mention them in the advertising. Try saving your files as 24-bit PCMs, and they should import fine.
