IEEE floating point format conversion to Forte DoubleData

Question:
Given that I have 4 bytes of binary data which represents a number in
IEEE floating point format,
and I wish to convert it to a Forte DoubleData, will the following code
give me the correct answer
in Value?
(Assume that file is correctly set up, etc...)
Value : DoubleData = new;
FPoint : pointer to float;
F : float;
LineText : BinaryData = new;
File.ReadBinary(LineText, 4);
FPoint = (pointer to float)(LineText.Value);
F = *FPoint;
Value.SetValue(F);
Thanks
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>

Mark,
you might first test whether Forte floats are already IEEE this way:
pFlt : pointer to float = (pointer to float)(res.value);
flt : float = *pFlt;
However, I believe you will have to wrap a C function to do this. The C
function takes a void * first argument and a float * output argument:
void ConvIEEE(void *buffer, float *result)
{
    *result = *(float *) buffer;
}
or
void ConvIEEE(void *buffer, float *result)
{
    ieeefloat ie = *(ieeefloat *) buffer;
    *result = IEEELibraryConvertToFloat(ie);
}
depending upon whether C floats are IEEE or not on your
platform/compiler. I think you'll have to investigate this yourself,
or try the first approach (which assumes, of course, that your C
compiler's float is also IEEE format) and see if it works.
Good luck!
Your Forte wrapper would look like:
class FloatWrapper inherits from Framework.Object
has public method ConvIEEE(input buffer : pointer,
output result : float)
end class;
With your BinaryData you would then write:
res : BinaryData = (get from somewhere)
flt : float;
fw : FloatWrapper = new;
fw.ConvIEEE(res.value, flt);
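For a sanity check of what those 4 bytes should decode to, a small Java sketch is handy (Java floats are guaranteed IEEE 754 single precision, so only byte order is in question; the class and method names here are my own):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class IeeeDecode {
    // Interpret 4 raw bytes as an IEEE 754 single.
    // Flip the order for little-endian producers such as x86.
    static float decode(byte[] buf, ByteOrder order) {
        return ByteBuffer.wrap(buf).order(order).getFloat();
    }

    public static void main(String[] args) {
        byte[] big = {0x3F, (byte) 0x80, 0x00, 0x00}; // 1.0f, big-endian
        System.out.println(decode(big, ByteOrder.BIG_ENDIAN)); // prints 1.0
    }
}
```

If the Forte pointer-cast approach produces the same values as a decoder like this, the floats really are IEEE on your platform and no C wrapper is needed.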
Mark Sundsten wrote:
> [...]
--
John Jamison [email protected]
Vice President and Chief Technology Officer
Sage IT Partners, Inc.
Voice: 415 392-7243 x 306
Fax: 415 391-3899
Internet Enabled Business Change
http://www.sageit.com
-----------------------------------------------------

Similar Messages

  • Floating point formats: Java/C/C++, PPC and Intel platforms

    Hi everyone
    Where can I find out about the various bit formats used for 32-bit floating-point numbers in Java and C/C++ for both Mac hardware platforms?
    I'm developing a Java audio application which needs to convert vast quantities of variable width integer audio samples to canonical float audio format. I've discovered that a floating point divide by the maximum integer value gives the correct answer but takes too much processor time, so I'm trying out bit-twiddling in C via JNI to carve out my own floating point bit patterns. This is very fast, however, I need to take into account the various float formats used on the different platforms so my app can be universal. Can anyone point me to the information?
    Thanks in advance.
    Bob

    I am not sure that Rosetta floating point works the same as PPC floating point. I was using RealBasic (a PPC BASIC compiler) and moved one of my compiled applications to a MacBook Pro, and floating point comparisons that had been exact on the PPC stopped working under Rosetta. I changed the code to do an approximate comparison (i.e. abs(a - b) < tolerance) and this fixed things.
    I reported the problem to the RealBasic people and thought nothing more of it until I fired up Adobe's InDesign and not being used to working with picas, changed the units of measurement to inches. The default letter paper size was suddenly 8.5000500050005 inches instead of the more usual 8.5! This was not a big problem, but it appears that all of InDesign's page math is running into some kind of rounding errors.
    The floating point format is almost certainly IEEE, and I cannot imagine Rosetta doing anything other than using native hardware Intel floating point. On the other hand, there is a subtle difference in behavior.
    I am posting this here as a follow up, but I am also going to post this as a proper question in the forum. If you have to delete one or the other of these duplicate posts, please zap the reply, not the question.
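    On the Java side of the original question, the format itself is not in doubt: the JLS fixes float to IEEE 754 binary32 on every platform, so only byte order differs between PPC (big-endian) and Intel (little-endian). A minimal demonstration:

```java
public class FloatBits {
    public static void main(String[] args) {
        // Identical on every JVM, PPC or Intel: the JLS mandates IEEE 754.
        int bits = Float.floatToIntBits(1.0f);
        System.out.printf("%08X%n", bits); // prints 3F800000

        // The reverse call is the building block for bit-twiddled
        // int-sample-to-float conversions done in pure Java.
        float one = Float.intBitsToFloat(0x3F800000); // == 1.0f
    }
}
```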

  • Floating point format conversions

    Hi All,
    I have a binary file that has 8 byte floats in it written in VAX D floating point format. It also has 4 byte integers in it. I have to read this file on a Sun Sparc. To get the correct value of the integers I just swap the bytes around and write out the int value, so bytes 0123 get rearranged to be 3210 and then I use that as my int. I tried that with the doubles, reorder bytes 01234567 to 76543210 and then write out the double value but I don't get the value that was stored.
    I read 8 bytes from the file, I'm supposed to get 512.0 but I get garbage.
    The hex dump of the file shows: 0045 0000 0000 0000
    I can't come up with a way to turn that into 512.0
    Another one is 360448.0 with hex dump: b049 0000 0000 0000
    Can anyone show me how to manipulate these bits/bytes to get the correct values?
    Thanks

    Hi Legosa,
    Thanks for looking for a solution for me. The link http://nicmos2.as.arizona.edu/thompson/kfocas/vax2sun.c
    has a C implementation of exactly what I need and has saved me a lot of time and work. Translation to Java will have to use bit shifting I think, the C unions make for a nice implementation.
    I also have to read PC and Cray generated files on my Sun.
    I have found that, like the Vax, Intel x86 including pentium are all little endian and use IEEE so I'm guessing that I just have to do the byte swapping to translate from PC to Sun.
    Crays are Big endian so I don't need byte swapping but I do need to do some manipulation of the exponent and the mantissa.
    Do you know where I might find code that others have done for PC to Sun and Cray to Sun conversions of integers, floats and doubles?
    BTW, for those who may read this later, the solution in vax2sun.c isn't quite right, the author forgot to use the least significant 32 bits and lost 3 bits in the middle but it is very close. To make it closer you have to change the vax2sun.c code a little.
    Replace mantissa and lomant with:
    In union ieeebuf:
        int mantissa1:20;
        int mantissa2:3;
        int mantissa3:29;
    In union vaxbuf:
        int mantissa1:20;
        int mantissa2:3;
        int mantissa3:29;
        int lost_bits:3;
    In the code, replace d.mantissa = v.mantissa/8 with:
        d.mantissa1 = v.mantissa1;
        d.mantissa2 = v.mantissa2;
        d.mantissa3 = v.mantissa3;
    My few tests showed this gave very good results. The lost_bits are lost because ieee needs 3 more bits for its exponent so it doesn't have room for all of the vax mantissa bits. A little bigger range means a little less precision.
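    Since a Java translation is planned anyway, here is a minimal sketch of a D-float decoder using bit shifting instead of C unions; it reproduces both sample dumps above. (The method name and the shortcut via Math.pow are my own choices, not taken from vax2sun.c.)

```java
public class VaxD {
    // Decode an 8-byte VAX D-float as dumped above.
    // VAX stores four 16-bit words, each little-endian.
    // Value = 0.1f * 2^(exp - 128), with the hidden bit worth 0.5.
    static double decode(byte[] b) {
        int w0 = (b[0] & 0xFF) | ((b[1] & 0xFF) << 8);
        int sign = (w0 >> 15) & 1;
        int exp = (w0 >> 7) & 0xFF;
        if (exp == 0) return 0.0;      // D-float true zero
        long frac = w0 & 0x7F;         // 7 fraction bits live in word 0
        for (int i = 1; i < 4; i++) {  // 16 more in each following word
            int w = (b[2 * i] & 0xFF) | ((b[2 * i + 1] & 0xFF) << 8);
            frac = (frac << 16) | w;
        }
        double mant = (frac + (1L << 55)) / Math.pow(2, 56); // restore hidden bit
        double val = mant * Math.pow(2, exp - 128);
        return sign == 1 ? -val : val;
    }

    public static void main(String[] args) {
        System.out.println(decode(new byte[]{0x00, 0x45, 0, 0, 0, 0, 0, 0}));        // 512.0
        System.out.println(decode(new byte[]{(byte) 0xB0, 0x49, 0, 0, 0, 0, 0, 0})); // 360448.0
    }
}
```

    Note that the 56-bit D-float significand exceeds the 53 bits a double can hold, which is exactly the "lost bits" trade-off described above.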

  • Export to .exr / .hdr / floating point format from ACR?

    Hi!
    Context is that I'm doing HDRI images for rendering so dynamic range is the most important aspect.
    I searched hard but didn't find a way to export from ACR into a floating point format, which is very very very annoying because if you export as 16-bit TIFF it will obviously clip whatever values you have in your raw file. I can load the raw / dng files directly into Photomatix / Oloneo but then I'm losing any kind of Camera Profile / Distortion profile embedded with them.
    Oh and I don't want to use Photoshop to Merge the HDRI files because I want to have control over the deghosting and various other alignement related stuff. I'm really searching for a .dng / .raw to .exr conversion.
    Any ideas?
    KS

    I would benefit from first-class raw demosaicing, noise reduction, undistortion, optional color profiling, etc., before sending that to be merged in whatever software, which doesn't specialize in doing what ACR does perfectly.
    Getting linear data out would be the step after all the ones I described and I wouldn't change any exposure, recovery, black level, tone curve, etc ... related options so that the HDRI merge is predictable. That's where the problem resides.
    Let's say you have a photo with a bright sky; you load it in ACR with everything at zero. Your sky is white and burnt (but you actually have data in the raw file, because you have checked by changing the exposure compensation to -5). If you export the image as is, without the exposure compensation, the sky data is clipped once cast to integer.
    If you export it with the exposure compensation, the HDRI merge will be very very bad because the software will have an image with the data where the exposure has been shifted but still exhibit the exif data from the original file.
    Of course you can merge with the tiff without the compensation and rely on a faster exposure to fill the missing data. It works, but when you hit your last fast exposure and see that you could pump even more dynamic range from it, except that ACR clipped the data, it's very frustrating.
    I hope it's now crystal clear why I would like to get a linear EXR output. I can also upload a raw file somewhere for people having trouble understanding the issue.
    In the best-case scenario Adobe will provide a solution, but if that happens it will not be for a while; in the worst case there won't be any solution at all. With that in mind I wrote a Python script that generates the tail exposures for my bracketing sequences and modifies the exif data accordingly.
    My workflow is the following for now:
    - Dump raw images from camera.
    - Apply neutral preset, noise reduction on all the raw files.
    - Praise Adobe for ACR.
    - Launch the Python script that:
         - Generates a subfolder where all the raw and xmp files are copied.
         - Images are renamed and reordered by their exposure time.
         - The tail exposure is duplicated 3 times.
         - The xmp files are updated with exposure compensation ( -1, -3, -5 ). Now I get the dynamic range.
         - The raw files exif data is altered to reflect the exposure compensation.
    - Load the new raw files from the subfolder and convert them to tif.
    - Rant against Adobe
    - Merge to HDRI with my proper dynamic range.
    KS

  • On-line IEEE floating-point addition

    Hi,
    Could somebody please recommend the login for the on-line IEEE floating-point number addition.
    On-line: start adding two floating-point numbers from the most significant position.
    Thank you!

    Thank you for your reply!
    I had a misspelled word in my question. it should have been "logic" instead of "login"
    I am looking for some Java code that I can use as a starting point for developing an on-line floating point addition unit. On-line means that the addition is done from the most significant bit (left-most) and proceeds towards the least significant bits in binary floating point.
    for example:
    there are 64 bit in a FP number
    |1 bit | 11 bits | 52 bits | + |1 bit | 11 bits | 52 bits |
    I hope I make sense
    Thank you!

  • IEEE floating point;

    This isn't really a Java question, but...
    Continued fractions -- has anyone ever experimented with implementing arithmetic operations on computers with continued fractions? It would seem to provide a way to represent most rationals quite compactly, and to represent irrationals fairly accurately. It wouldn't be as compact as IEEE floats, of course, but IEEE floats are extremely bad at representing many extremely simple rationals, like 1/3.
    I'm not much of a mathematician, nor much of an engineer, but would it be conceivable to implement an "FPU" using continued fractions rather than mantissa-exponent floats?

    I am very rusty on this, and too lazy to walk across the room and look it up in my math library, but the cute algorithm that I remember for finding a rational approximation to a float goes like this: (by the way, cute means that it is easy to remember NOT that it is the fastest thing)
    First we define "bonehead" rational addition. This is not real addition; it is a made-up function that resembles how a bonehead would add two fractions. It is easy:
    a/b + c/d = (a + c)/(b + d)
    so, for example, the bonehead sum of 1/3 and 1/2 is 2/5.
    now observe the interesting fact that bonehead sum creates a fraction that lands between the two that you started with. This is always the case and not even too hard to prove. (Exercise for the reader)
    We can use that little fact to create a series of rationals that converge to a float. Think of it like this. Choose two rationals R1 and R2 that bound the float, F i.e.
    R1 < F < R2
    Do the bonehead sum to get R3. Well R3 must be in the interval, either greater than F or less than F (or by some miracle equal to F, in which case F was really a rational). Just so I can write it out, let's pretend that R3 ended up greater than F. We now have
    R1 < F < R3 < R2
    We can now throw out R2 because we have tighter bounds on F. One guess what we do next.
    That's right, we iterate. Do the bonehead sum of the two rational bounds, check where the new rational is in relation to the float, and replace one of the previous bounds with the new rational. This gets us a series of rationals that are converging to the Float. Are they optimal? Not quite but almost.
    You see, the top rational could have been very close to the float, and the bottom one way far away, and you do a whole lot of replacing the lower bound over and over until the lower bound is finally better than the upper bound, and then you will start moving the upper bound down. That point when the lower bound stops moving and the upper one starts to move is a good lower bound and symmetrically, when the upper bound stops moving and the lower one starts up again is a good upper bound, each one better than the previous.
    Those are good approximations.
    Now let me point out one other thing. You need to start out with a couple rationals that bound your float, how do you find those? How about starting with 0/1 as the lower bound, (zero is pretty much lower than any positive fraction) and for the upper number start with 1/0.
    eeekk! what's that? division by zero? nah, we're talking rational numbers here, that's the pair of integers 1 and 0, and yes, it represents infinity, which is pretty much bigger than any positive fraction you could care to name. I mean, after all, if you're gonna do bonehead addition, you should have no problem with bonehead fractions too. I told ya this was cute!!
    Now, if you start with those numbers (0,1) and (1,0) and you follow that process SHO 'NUFF you are banging out the same optimal convergents that you get if you were using continued fractions.
    So go nuts, whip out yer compiler and have at it!
    Yes, the beauty of continued fractions lets you feel that you get something for nothing, i.e. using only 32 bits of accuracy for the denominator gives you in general about 64 bits of accuracy in the fraction itself. This is a false win. You use a 32-bit numerator and a 32-bit denominator together to get yourself the equivalent of 64 bits of accuracy. It's a wash. Furthermore, you do not get exponential notation, but hey, you can't have everything when yer having fun.
    Is this efficient? Not particularly; consider how long it takes to converge to 1/10000. Let's see: first it adds 0/1 to 1/0 to get 1/1, then it adds 0/1 to 1/1 to get 1/2, then 1/3 ... Yep, 10000 steps counting by one all the way. It is not particularly efficient by Computer Algorithm standards, but as I pointed out at the beginning, the beauty of this algorithm is that it is so cute that I can remember it nearly 20 years after I first learned it. It is so simple that I can code it up with virtually no chance of making a mistake and, best of all, that means I do not have to get out of my chair, walk across the room, find the right book, look up the right way to do continued fractions, etc.
    In terms of my personal efficiency, This baby kicks butt! Furthermore, do I really care if the computer has to grind through a few thousand or even a few million extra operations any more. Not really.
    So go ahead and slam in pi and watch this baby grind out 22/7 and the good old 355/113
    Enjoy!!
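    The whole procedure is only a few lines; a sketch in Java (names mine) that, fed pi, does indeed grind out 22/7 and 355/113 among its bounds:

```java
import java.util.ArrayList;
import java.util.List;

public class Mediant {
    // Iterate the "bonehead sum" (the mediant) of the two rational bounds,
    // replacing whichever bound the new fraction improves on.
    static List<long[]> bounds(double target, int steps) {
        long ln = 0, ld = 1;  // lower bound 0/1
        long un = 1, ud = 0;  // upper bound 1/0, i.e. infinity
        List<long[]> seen = new ArrayList<>();
        for (int i = 0; i < steps; i++) {
            long mn = ln + un, md = ld + ud;          // bonehead sum
            if ((double) mn / md < target) { ln = mn; ld = md; }
            else { un = mn; ud = md; }
            seen.add(new long[]{mn, md});
        }
        return seen;
    }

    public static void main(String[] args) {
        for (long[] f : bounds(Math.PI, 50))
            if ((f[0] == 22 && f[1] == 7) || (f[0] == 355 && f[1] == 113))
                System.out.println(f[0] + "/" + f[1]);
    }
}
```

    Each mediant costs only two additions, which is why the algorithm is so easy to remember; the price, as noted above, is the slow linear crawl toward fractions like 1/10000.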

  • Reading IEEE Floating-Point 32bit number

    I have a program written in C++ that works with single precision float (IEEE) in Windows. When I try to read data from that program in LabView they seem different. Labview also use IEEE single float representation. The numbers in the files are sine wave with amplitude 1 and 30 samples/cycle.
    The first numbers are
    0
    0.207912
    0.406737 etc.
    Reading in LV the second number is 3E 54 E6 E2 where the left 3 bytes are mantissa and the right exponent and sign. Reading in LV the second number from the program1 I have CE E6 54 3E which is the mantissa on the right as little endian, but the exponent and sign are CE?! Any hint how to read it correctly in LV?
    Attachments:
    Prgm1.bin ‏1 KB
    SGLLvtest.bin ‏1 KB

    Hi,
    This works for the second file.
    Regards,
    Wiebe.
    "markstab" wrote in message
    news:[email protected]..
    > [...]
    [Attachment Decode.vi, see below]
    Attachments:
    Decode.vi ‏21 KB
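    For readers without the attached VI: the fix is simply to read the 4 bytes little-endian before interpreting them as an IEEE single. A Java sketch of the same decode, using the second sample value from the post:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ReadSingle {
    public static void main(String[] args) {
        // The C++ program on Windows wrote x86 little-endian singles:
        byte[] raw = {(byte) 0xCE, (byte) 0xE6, 0x54, 0x3E};
        float f = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getFloat();
        System.out.println(f); // ~0.207912, the second sine sample
    }
}
```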

  • Float point format function --- "Beginner"

    Hi everybody,
    1. I want to format any float number to have only two decimal digits.
    example:
    95.3213 converted to be 95.32
    Is there any function that can do this???
    2. I want to remove any spaces from any given string.
    example:
    mystring = "Hi my friend " converted to be mystring = "Himyfriend"
    thank you ...

    1. Check out the API for java.text.DecimalFormat:
    double d = 95.3213;
    NumberFormat nf = NumberFormat.getInstance();
    nf.setMaximumFractionDigits(2);
    System.out.println(nf.format(d));
    2. If you have Java 1.4 you can use the replaceAll method in String:
    String myString = "Hi my friend ";
    myString = myString.replaceAll("\\s", "");
    The regular expression "\s" matches all whitespace - spaces, tabs etc.

  • Floating point format in report

    I am using TestStand 3.1 and LabVIEW 7.1. I set the default report format for HTML report to _whatever_, %#5.13f, append trailing zeros etc, and this format is present for report entries such as limits. It is not present for the actual measured value, and this seems subject to some display format rounding. For instance, my limit test will be GELE and the lower limit will be 3.0000000000000. The measurement will be 2.956785434 and the test will fail, but in the report the measurement is formatted as 3.0. What has gone wrong?

    Hi Odd_Modem,
    Sorry for the crazy email. I accidentally replied when I was not ready.  
    Anyway, the third option was the Tree View. Once inside the tree, browse to Main >> Numeric Limit Test >> Result >> Numeric.  Right-click on the numeric field (in the right window) and select properties. Then select Numeric Format and change the significant digits.  This should fix your issue since this property is the highest precedence. The order of precedence is (Numeric in Tree View -> Limits -> Report options).
    Hope this helps!
    Best Regards,
    Jonathan N.
    National Instruments

  • F suffix for floating point.

    Okay, I'm a proficient c++ programmer and have been learning Java for only a few weeks now.
    I have a question about the f suffix for floating point varibles such as float f = 3.14f;
    The f suffix casts this as float right? which is the same as float f = (float) 3.14; Correct?
    Why do we have to add the f suffix in the first place? Doesn't the compiler know that we want a float and not a double? (single-precision 32-bit instead of double precision 64 bit) I really do not understand the concept here or why they need the f suffix.
    Can someone explain?

    ThePHPGuy wrote:
    > The f suffix denotes that the literal is of a floating-point type.
    Yes. The d suffix does the same.
    > Java has two different types of floating-point numbers.
    Right.
    > The type double is the default type.
    Right.
    > The float type can have a double and a float literal. Is this true or false?
    No. At least not in any way I understand it.
    I think you're confusing two things:
    "floating point number" is any number in the IEEE floating point format.
    "float" is a datatype holding a 32bit floating point number.
    "double" is a datatype holding a 64bit floating point number.
    floating point number literals can be either double literals (without suffix or if the "d" suffix is used) or float literals (when the "f" suffix is used).
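    A two-line experiment makes the rule concrete (note the cast form is only equal here because rounding 3.14 to double and then to float happens to land on the same float as rounding it directly):

```java
public class Suffix {
    public static void main(String[] args) {
        // float f = 3.14;        // does not compile: 3.14 is a double literal
        float f1 = 3.14f;         // float literal
        float f2 = (float) 3.14;  // double literal narrowed by a cast
        double d = 3.14;          // no suffix: double is the default
        System.out.println(f1 == f2); // prints true for this value
    }
}
```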

  • Convert Floating Point Decimal to Hex

    In my application I make some calculations using floating point format DBL,and need to write these values to a file in IEEE 754 Floating Point Hex format. Is there any way to do this using LabVIEW?

    Mike,
    Good news. LabVIEW has a function that does exactly what you want. It is well hidden though...
    In the Advanced/Data manipulation palette there is a function called Flatten to String. If you feed this function with your DBL precision value you get the IEEE-754 hexadecimal floating point representation (64 bit) at the data string terminal (as a text string).
    I attached a simple example that shows how it works.
    Hope this helps. /Mikael Garcia
    Attachments:
    ieee754converter.vi ‏10 KB
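    For comparison, the same 64-bit IEEE-754 hex pattern is a one-liner in Java, which can serve as a cross-check of what Flatten to String emits:

```java
public class DoubleHex {
    public static void main(String[] args) {
        // doubleToLongBits exposes the raw IEEE 754 binary64 pattern.
        long bits = Double.doubleToLongBits(512.0);
        System.out.println(Long.toHexString(bits)); // prints 4080000000000000
    }
}
```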

  • Invalid Floating Point Error

    I have one Captivate 3 project published as a Stand Alone
    project with Flash 8 selected. There are 36 slides, no audio, no
    eLearning, SWF size and quality are high.
    One person who runs this gets an "Invalid Floating Point"
    error when he tries to run it the first time. He is running Windows
    XP SP2, Firefox 3.0.4. and Flash Player 10.0.12.36. Other Captivate
    projects I've created run fine for him. This one sometimes runs
    after the first Error message.
    Any thoughts on the cause and fix?
    Thanks,
    Janet

    iMediaTouch probably doesn't support Floating Point formats - it certainly doesn't mention them in the advertising. Try saving your files as 24-bit PCMs, and they should import fine.

  • Converting binary to floating point?

    OK, I am programming a little Java program at the console
    the main idea of the program is to convert a decimal number to IEEE (floating point)
    i am almost done except the exponent part
    i dont understand the logic behind it
    so, for example:
    -6.625 = 110.101 (binary)
    after normalization it will be -1.10101 * 2^2
    so the IEEE will be
    1 10000001 10101000000000000000000
    i understand the sign part and the fraction part
    but i have no idea how the exponent part came like this
    the book says that 129 - 127 = +2, so the exponent is 10000001
    duh, where did that come from???
    i will appreciate it if someone explain this part for me step by step
    thank you,
    Edited by: abdoh2010 on Jan 26, 2008 2:37 AM

    got it
    thank you for viewing my question
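    For later readers, the step the book skipped: single precision stores the exponent with a bias of 127, so the true exponent +2 is stored as 2 + 127 = 129 = 10000001. A quick Java check against the example:

```java
public class Bias {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(-6.625f);
        int sign = bits >>> 31;          // 1 (negative)
        int exp  = (bits >>> 23) & 0xFF; // 129 = 2 + bias of 127
        int frac = bits & 0x7FFFFF;      // 10101 followed by 18 zeros
        System.out.println(sign + " " + Integer.toBinaryString(exp)
                + " " + Integer.toBinaryString(frac));
        // prints: 1 10000001 10101000000000000000000
    }
}
```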

  • 16 bit integer vs 32 bit floating point

    What is the difference between these two settings?
    My question stems from the problem I have importing files from different networked servers. I put FCP files (NTSC DV - self contained movies) into the server with 16 bit settings, but when I pull the same file off the server and import it into my FCP, this setting is set to 32 bit floating point, forcing me to have to render the audio.
    This format difference causes stuttering during playback in the viewer, and is an inconvenience when dealing with tight deadlines (something that needs to be done in 5 minutes).
    Any thoughts would be helpful.

    It's not quite that simple.
    32 bit floating point numbers have essentially an 8 bit exponent and 24 bit mantissa.  You could imagine that the exponent isn't particularly significant in values that generally range from 0.0 to 1.0, so you have 24 bits of precision (color information) essentially.
    At 16-bit float, I'm throwing out half the color information, but I'd still have vastly more color information than 16-bit integer?
    Not really.  But it's not a trivial comparison.
    I don't know the layout of the 24 bit format you mentioned, but a 16 bit half-float value has 11 bits of precision.  Photoshop's 16 bits/color mode has 15 bits of precision.
    The way integers are manipulated vs. floating point differs during image editing, with consistent retention of precision being a plus of the floating point format when manipulating colors of any brightness.  Essentially this means very little chance of introducing posterization from extreme operations in the workflow.  If your images are substantially dark, you might actually have more precision in a half-float, and if your images are light you might have more precision in 16 bits/channel integers.
    I'd be concerned over what is meant by "lossy" compression.  Can you see the compression artifacts?
    -Noel

  • SignalExpress 2010 - Chart time axis format will not change to floating point or anything else

    I just downloaded and installed the new and shiny SignalExpress 2010 to replace the old version, and immediately ran into major problems:
    The x-axis time format refuses to change to floating point or scientific even if I choose them from chart properties. The format is always absolute, e.g. 12:23:54.743. This makes the usage of the chart and the whole application impossible! Is this a known issue and is it going to be fixed soon? And where can I get the old version of the SignalExpress (2009) so I can install it again?

    Here is a screenshot of the problem:
