Floating point to binary conversion

Hi
I need to convert a floating point decimal number to bits.
E.g. 0.000532 is to be converted to binary (bits).
How do I do this?

Now if I convert that decimal number to bits (by the usual method of dividing by 2), will that be the exact binary representation of the floating point decimal number?

You have the same bit pattern in both cases. In one it's held in a double and will be interpreted as a floating point number according to the IEEE 754 representation. In the other it's held in a long and will be interpreted according to the two's complement representation. But it's the same bit pattern.
Note that Long has a toString(long value, int radix) method which converts the long to a String. The radix in your case is 2 for binary.
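
A minimal sketch of what that reply describes, assuming Java (the reply refers to Long's toString with a radix): reinterpret the double's IEEE 754 bit pattern as a long with Double.doubleToLongBits, then print it in base 2.

public class FloatBits {
    public static void main(String[] args) {
        double value = 0.000532;

        // Same 64 bits, now held in a long (two's complement container).
        long bits = Double.doubleToLongBits(value);

        // Radix 2 gives the IEEE 754 bit pattern as a string of 0s and 1s.
        System.out.println(Long.toString(bits, 2));

        // Long.toBinaryString prints the pattern unsigned, which matters
        // once the sign bit is set (negative doubles).
        System.out.println(Long.toBinaryString(bits));
    }
}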

Similar Messages

  • Floating point to Date conversion

    Is any function module available for converting a floating point value into a date?

    Hi,
    Do it this way:
    DATA: p_float TYPE f,
          p_date  LIKE cawn-atwrt,
          p_flt   LIKE cawn-atflv,
          p_dt    LIKE sy-datum.
    p_float = '2.006123100000000E+07'.
    p_flt = p_float.
    CALL FUNCTION 'CTCV_CONVERT_FLOAT_TO_DATE'
      EXPORTING
        float = p_flt
      IMPORTING
        date  = p_date.
    WRITE : p_date TO p_dt DD/MM/YYYY.
    WRITE : p_dt.

  • Floating point to time conversion

    Hi all,
    I need to convert a floating point value to time. Any help would be great.

    Hi,
    Define two variables, one as float and another as time, say a and b. After assigning a value to a, assign a to b, then WRITE b. The float value will be converted into time format.
    Regards,
    Padmam.

  • Precision loss - conversions between exact values and floating point values

    Hi!
    I read this in your SQL Reference manual, but I don't quite get it.
    Conversions between exact numeric values (TT_TINYINT, TT_SMALLINT, TT_INTEGER, TT_BIGINT, NUMBER) and floating-point values (BINARY_FLOAT, BINARY_DOUBLE) can be inexact because the exact numeric values use decimal precision whereas the floating-point numbers use binary precision.
    Could you please give two examples: one where a TT_TINYINT is converted to a BINARY_DOUBLE and one where a TT_BIGINT is converted to a DOUBLE, in both cases showing the lost precision? This would be very helpful.
    Thanks!
    Sune

    chokpa wrote:
    public Example(float... values) {}
    new Example(1, 1e2, 3.0, 4.754);
    It accepts it if I just use 1, 2, 3, 4 as the values being passed in, but doesn't like it if I use actual float values.

    Those are double literals; try
    new Example(1f, 1e2f, 3.0f, 4.754f);

  • Floating point format conversions

    Hi All,
    I have a binary file that has 8 byte floats in it written in VAX D floating point format. It also has 4 byte integers in it. I have to read this file on a Sun Sparc. To get the correct value of the integers I just swap the bytes around and write out the int value, so bytes 0123 get rearranged to be 3210 and then I use that as my int. I tried that with the doubles, reorder bytes 01234567 to 76543210 and then write out the double value but I don't get the value that was stored.
    I read 8 bytes from the file, I'm supposed to get 512.0 but I get garbage.
    The hex dump of the file shows: 0045 0000 0000 0000
    I can't come up with a way to turn that into 512.0
    Another one is 360448.0 with hex dump: b049 0000 0000 0000
    Can anyone show me how to manipulate these bits/bytes to get the correct values?
    Thanks

    Hi Legosa,
    Thanks for looking for a solution for me. The link http://nicmos2.as.arizona.edu/thompson/kfocas/vax2sun.c
    has a C implementation of exactly what I need and has saved me a lot of time and work. Translation to Java will have to use bit shifting, I think; the C unions make for a nice implementation.
    I also have to read PC and Cray generated files on my Sun.
    I have found that, like the VAX, Intel x86 (including Pentium) is little endian, and it uses IEEE, so I'm guessing that I just have to do the byte swapping to translate from PC to Sun (see the sketch after this post).
    Crays are Big endian so I don't need byte swapping but I do need to do some manipulation of the exponent and the mantissa.
    Do you know where I might find code that others have done for PC to Sun and Cray to Sun conversions of integers, floats and doubles?
    BTW, for those who may read this later, the solution in vax2sun.c isn't quite right: the author forgot to use the least significant 32 bits and lost 3 bits in the middle, but it is very close. To make it closer you have to change the vax2sun.c code a little.
    Replace mantissa and lomant with:
    In union ieeebuf:
        int mantissa1:20;
        int mantissa2:3;
        int mantissa3:29;
    In union vaxbuf:
        int mantissa1:20;
        int mantissa2:3;
        int mantissa3:29;
        int lost_bits:3;
    In the code, replace d.mantissa = v.mantissa/8 with:
        d.mantissa1 = v.mantissa1;
        d.mantissa2 = v.mantissa2;
        d.mantissa3 = v.mantissa3;
    My few tests showed this gave very good results. The lost_bits are lost because ieee needs 3 more bits for its exponent so it doesn't have room for all of the vax mantissa bits. A little bigger range means a little less precision.
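
    For the PC files mentioned above (little-endian IEEE doubles, so byte swapping is all that's needed), here is a minimal Java sketch; the byte values are just the 512.0 example, and the VAX D-float files still need the exponent/mantissa handling from vax2sun.c:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class PcToSun {
        public static void main(String[] args) {
            // 512.0 as a little-endian IEEE 754 double (PC byte order).
            byte[] raw = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, (byte) 0x80, 0x40};

            // Declare the source byte order; getDouble() then returns the
            // correctly swapped IEEE value regardless of the host platform.
            double value = ByteBuffer.wrap(raw)
                                     .order(ByteOrder.LITTLE_ENDIAN)
                                     .getDouble();
            System.out.println(value);   // 512.0
        }
    }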

  • Converting binary to floating point?

    OK, I am programming a little Java console program.
    The main idea of the program is to convert a decimal number to IEEE (floating point).
    I am almost done, except for the exponent part; I don't understand the logic behind it.
    So, for example:
    -6.625 = 110.101 (binary)
    After normalization it will be -1.10101 * 2^2
    so the IEEE representation will be
    1 10000001 10101000000000000000000
    I understand the sign part and the fraction part, but I have no idea how the exponent part came to be like this.
    The book says that 129 - 127 = +2, so the exponent is 10000001. Where did that come from?
    I would appreciate it if someone explained this part to me step by step.
    Thank you,
    Edited by: abdoh2010 on Jan 26, 2008 2:37 AM

    Got it.
    Thank you for viewing my question.
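
    For later readers: the stored exponent field is the actual exponent plus a bias of 127 for single precision, which is where 129 - 127 = +2 comes from; the field holds 2 + 127 = 129 = 10000001. A small Java check of the -6.625 example (a sketch; the fields are extracted from Float.floatToIntBits):

    public class ExponentBias {
        public static void main(String[] args) {
            int bits = Float.floatToIntBits(-6.625f);      // 0xC0D40000

            int sign     = bits >>> 31;                    // 1 -> negative
            int expField = (bits >>> 23) & 0xFF;           // 129 = 10000001b
            int fraction = bits & 0x7FFFFF;                // the .10101 bits

            // The field stores exponent + 127, so the real exponent is:
            int exponent = expField - 127;                 // 2

            System.out.printf("sign=%d expField=%s (%d) exponent=%d fraction=%s%n",
                    sign, Integer.toBinaryString(expField), expField,
                    exponent, Integer.toBinaryString(fraction));
        }
    }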

  • How to read binary floating point values from TCP/IP

    I am attempting to use a LabView application to read an array of binary single precision floating point numbers transferred through a TCP/IP connection from a Windows C++ program. The endianization occurs before the values are sent to the Labview application. When I read the values in LabView, some values are interpreted correctly, some are not. For instance, the C program is sending a 6 as one element, and 7 as another. Labview interprets both as 6. The difference between 6 and 7 in binary format is 1 bit (bit #21 if counting from 0 in LabView format). There are 2 other values that show the same error- 459.67 is being sent, 395.67 is read by labview (1 bit difference, exact same bit... 7.5 is being sent, 6.5 is being read by LabView (1 bit difference, exact same bit). LabView reads that bit as 0, when it should be 1.
    This seems very odd to me because most values are being read correctly, including other values with that same bit on. There are values being read correctly that are both before and after the incorrect values in the array, so it's not just an issue with an offset or something in the bit stream. Additionally, when attempting to read an array of values with all bits on, I get a strange pattern of 111110011111100111111001111110.
    We have also verified that the binary representation for the values is the same on both machines, once you account for the byte swapping. What am I missing here? How in the world can some values come across correctly and others incorrectly? Any help would be greatly appreciated! Our fallback is to transfer everything in ASCII, which is going to greatly increase packet sizes.
    Thanks,
    Jason

    Update:
    Problem fixed. I say fixed and not solved because I don't know why it's fixed. I had been storing the TCP/IP read in a string and passing it through a shift register after each read. I would then concatenate it with the TCP/IP read result in the next loop. After each read, it would search the concatenated string for ASCII flags. When it found them, it would strip off the flags and then type cast the rest as single precision floating points.
    I knew I would be getting the same number of bytes each time, so I ditched the ASCII flags and had just the binary values sent. This way, I expect to get all of the values in one TCP/IP read. No values are passed through a shift register to the next loop and there is no concatenation of the string outputs from TCP/IP read.
    I'm not sure if it was the ASCII flags being included or something with the way I was manipulating the string that was causing the binary values to be interpreted incorrectly. Hope this helps someone else.
    Jason
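
    For anyone who later needs the non-LabVIEW side of this exchange, a rough Java sketch of the convention Jason describes (a known number of 4-byte IEEE singles sent back-to-back in big-endian order, with no ASCII flags or string handling); the class and method names are made up for illustration:

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadFloats {
        // Reads exactly 'count' 4-byte IEEE 754 singles from the stream
        // (for example socket.getInputStream()). Assumes big-endian
        // (network) byte order and no delimiters between values.
        static float[] readFloats(InputStream stream, int count) throws IOException {
            DataInputStream in = new DataInputStream(stream);
            float[] values = new float[count];
            for (int i = 0; i < count; i++) {
                values[i] = in.readFloat();   // readFloat() is big-endian
            }
            return values;
        }
    }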

  • Conversion of a floating point type field

    Hi,
    I'm fetching field ATFLV from table AUSP for a particular value of ATINN. ATFLV is a floating point type field.
    Can anyone please guide me on how to convert this field (ATFLV) from a floating point number to a simple number?
    Helpful answers will be rewarded.
    Regards,
    Sipra

    Hi,
    Do it like this:
    float f = 234.33f;
    int i = (int) f; // i has the value 234
    Reward points if helpful.

  • Floating Point arithmetic conversion

    Hi Everyone,
    Can you tell me how to convert a floating point arithmetic field value to a currency field value.
    thanks,
    chan

    Hi,
    A simple MOVE statement should work:
    MOVE l_float TO l_curr.
    Make sure that the currency field has enough length.
    Thanks,
    Vinod.

  • Floating point conversion

    Dear all,
    In a particular field I entered the value 60.800 kg. But when I try to pick the value, I find that it is stored in floating point format, i.e. 6.080000000000e+00. Actually I need it in integer format. Is any function module available to convert floating point to integer?
    Regards
    Mahesh V

    Why use a function module?
    DATA: l_val_int TYPE i,
          l_val_flt TYPE f,
          l_val_pck TYPE p DECIMALS 2.
    l_val_flt = '6.080000000000e+00'.
    l_val_int = l_val_flt.
    l_val_pck = l_val_flt.
    matt

  • How does Java store floating point numbers?

    Hello
    I'm writing a paper about floating point numbers in which I compare an IEEE 754 compatible language (C) with Java. I read that Java can do a conversion decimal -> binary -> decimal and retain the same value, whereas C can't. I found several documents discussing the pros and cons of that, but I can't find any information about how it is implemented.
    I hope someone can explain it to me, or post a link to a site explaining it.
    Cheers
    Huttu

    So it is a myth.
    I still ask because I observed an oddity: when I store 1.4 in C and print it with printf("%2.20f\n", a); I get 1.39999999999999991118. If I do the same in Java with System.out.printf("%2.20f\n", a); I get 1.4. If I multiply the variable with itself I get 1.95999999999999970000:
    double a = 1.4;
    a = a * a;
    System.out.printf("%2.20f\n", a);
    Does this happen because of the rounding in Java?
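
    On the decimal -> binary -> decimal question above: a small sketch of why the Java round trip is lossless. Double.toString emits the shortest decimal string that parses back to the identical double, so printing and re-parsing preserves the value even though the stored binary number is not exactly 1.4 (which is why arithmetic still exposes the error):

    public class RoundTrip {
        public static void main(String[] args) {
            double a = 1.4;

            // Double.toString picks the shortest decimal string that still
            // parses back to exactly the same double, so the round trip
            // decimal -> binary -> decimal loses nothing:
            String s = Double.toString(a);                    // "1.4"
            System.out.println(Double.parseDouble(s) == a);   // true

            // The stored value is still not exactly 1.4, which shows up
            // as soon as you compute with it:
            System.out.println(a * a);                        // 1.9599999999999997
        }
    }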

  • BUG: Large floating point numbers convert to the wrong integer

    Hi,
    When using the conversion "bullets" to convert SGL, DBL and EXT to integers there are some values which convert wrong. One example is the integer 9223370937343148030, which can be represented exactly as a SGL (and thus exactly as DBL and EXT as well). If you convert this to I64 you get 9223370937343148032 instead, even though the correct integer is within the range of an I64. There are many similar cases, all (I've noticed) within the large end of the ranges.
    This has nothing to do with which integers can be represented exactly as a floating point value or not. This is a genuine conversion bug mind you.
    Cheers,
    Steen
    CLA, CTA, CLED & LabVIEW Champion

    Yes, I understand the implications involved, and there definitely is a limit to how many significant digits can be displayed in the numeric controls and constants today. I think that either this limit should be lifted or a cap should be put on the configuration page when setting the display format.
    I ran into this problem as I'm developing a new toolset that lets you convert all the numeric formats into any other numeric format, just like the current "conversion bullets". My conversion bullets have outputs for overflow and exact conversion as well, since I need that functionality myself for a Math toolset (GPMath) I'm also developing. Eventually I'll maybe include underflow as well, but for now just those two outputs are available. Example:
    I do of course pay close attention to the binary representation of the numbers to calculate the Exact conversion? output correctly for each conversion variation (there are hundreds of VIs in polymorphic wrappers), but I relied in some cases on the ability of the numeric indicator to show a true number when configured appropriately - that was when I discovered this bug, which I at first mistook for a conversion error in LabVIEW.
    Is there a compliance issue with EXT?
    While doing this work I've discovered that the EXT format is somewhat misleadingly labelled as "80-bit IEEE compliant" (it says so here), but that statement should be read with some suspicion IMO. The LabVIEW EXT is not simply IEEE 754-1985 compliant anyways, as that format would imply the x87 80-bit extended format. An x87 IEEE 754 extended precision float only has 63-bit fraction and a 1-bit integer part. That 1-bit integer part is implicit in single and double precision IEEE 754 numbers, but it is explicit in x87 extended precision numbers. LabVIEW EXT seems to have an implicit integer part and 64-bit fraction, thus not straight IEEE 754 compliant. Instead I'd say that the LabVIEW EXT is an IEEE 754r extended format, but still a proprietary one that should deserve a bit more detail in the available documentation. Since it's mentioned several places in the LabVIEW documentation that the EXT is platform independent, your suspicion should already be high though. It didn't take me many minutes to verify the apparent format of the EXT in any case, so no real problem here.
    Is there a genuine conversion error from EXT to U64?
    The integer 18446744073709549568 can be represented exactly as EXT using this binary representation (mind you that the numeric indicators won't display the value correctly, but instead show 18446744073709549600):
    EXT exponent (binary): 100000000111110
    EXT fraction (binary): 1111111111111111111111111111111111111111111111111111000000000000
    --> Decimal: 18446744073709549568
    The above EXT value converts exactly to U64 using the To Unsigned Quad Integer "bullet". But then let's try to flip the 53rd fraction bit (counting from the most significant bit) from 0 to 1, making this value:
    EXT exponent (binary): 100000000111110
    EXT fraction (binary): 1111111111111111111111111111111111111111111111111111100000000000
    --> Decimal: 18446744073709550592
    The above EXT value is still within U64 range, but the To Unsigned Quad Integer "bullet" converts it to U64_max which is 18446744073709551615. Unless I've missed something this must be a genuine conversion error from EXT to U64?
    /Steen
    CLA, CTA, CLED & LabVIEW Champion

  • Check Floating Point Number

    Hello All,
    I am having some trouble checking the value of a field with key figure type "Number with 8 byte floating point". I want to read that field and populate another field with an 'X' if a condition is true. For example, if that field is equal to 5,0000000000000000E+07 then I want to mark the other field with an 'X'.
    The problem is in my code: how do I read that number in the FLTP field, such as the number above? My code for the 'X' field reads as follows:
        if SOURCE_FIELDS-abc123 eq 5000000.
          RESULT = 'X'.
        endif.
    Thanks everyone in advance

    You don't need to worry about converting between the standard format and floating point; just write your code as you want and the conversion is taken care of automatically. Note that 5,0000000000000000E+07 = 50,000,000, so the literal in your IF should be 50000000 rather than 5000000.
    thanks.
    Wond

  • Floating Point Arithmetic Error

    Hi,
    I know ActionScript represents numbers as double precision
    floating point values. I'm having a problem where double arithmetic
    in ActionScript doesn't match the results of the same double
    arithmetic in C++ / C#.
    EXAMPLE:
    In C++ / C#:
    double x, y, x1, y1;
    x = 209.4;
    y = 148.8;
    x1 = 203.0;
    y1 = 145.0;
    double ddx = x - x1;
    double ddy = y - y1;
    RESULT
    ddx: 6.4000000000000057
    ddy: 3.8000000000000114
    In Flash ActionScript 2:
    var x, y, x1, y1;
    x = 209.4;
    y = 148.8;
    x1 = 203.0;
    y1 = 145.0;
    var ddx = x - x1;
    var ddy = y - y1;
    RESULT
    ddx: 6.39999999999992
    ddy: 3.80000000000024
    After researching, Flash / ActionScript "var" stores numerical
    values as doubles (8 bytes), just like doubles are stored in C++ /
    C# (8 bytes). Why would there be a difference between the results
    of ddx and ddy? Are there different implementations of double
    floating point math? If so, is there a way I can mimic the Flash /
    ActionScript version in C++ / C#?
    Any help would be great!
    Thanks!

    Hmmm, so you're saying the actual binary representation is
    the same but they're just displayed differently?
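
    One way to answer that directly is to compare the raw bit patterns instead of the printed strings. A sketch in Java (the same check can be done in C++ by copying the double into a 64-bit integer); whichever value each runtime actually computed, identical hex output means identical doubles:

    public class SameBits {
        public static void main(String[] args) {
            double ddx = 209.4 - 203.0;

            // The printed string depends on how many digits the runtime
            // chooses to show; the bit pattern is the ground truth.
            long bits = Double.doubleToLongBits(ddx);
            System.out.println(ddx);
            System.out.println(Long.toHexString(bits));
        }
    }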

  • Floating point

    I understand that floating point numbers are represented in Java as binary numbers,
    and that there are decimal values that cannot be represented exactly in binary.
    One of those, I think, is 0.1.
    But why is it that 0.1 seems to be represented exactly when printed, yet in a computation it is not?
    I have this program:
    public class CTest {
        public static void main(String[] args) {
            double d = 1.0;
            double d2 = 0.9;
            double d3 = 0.1;
            System.out.println("" + (d - d2));
            System.out.println("" + d3);
        }
    }
    Please help..
    thanks

    I mean this one:
    class D {
        public static void main(String[] args) {
            double d = 0.1;
            double d2 = 1.0 - 0.9;
            System.out.println("d: " + d);
            System.out.println("d2: " + d2);
        }
    }
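
    A small sketch of what is really stored (using java.math.BigDecimal's exact double constructor): 0.1 is not stored exactly either; it only prints as 0.1 because Double.toString chooses the shortest decimal that maps back to that same double, while 1.0 - 0.9 produces a slightly different double and therefore prints differently:

    import java.math.BigDecimal;

    public class ExactValues {
        public static void main(String[] args) {
            double d = 0.1;
            double d2 = 1.0 - 0.9;

            // The exact values the two doubles actually hold:
            System.out.println(new BigDecimal(d));   // 0.1000000000000000055511151231257827...
            System.out.println(new BigDecimal(d2));  // 0.0999999999999999777955395074968691...

            // Double.toString prints the shortest decimal that still maps
            // back to the same double, so d shows as 0.1 but d2 does not.
            System.out.println(d);    // 0.1
            System.out.println(d2);   // 0.09999999999999998
        }
    }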
