Multiply floating point rounds

Why is it that when I use the Multiply function with two floating point numbers, it rounds off the result?
I have the Format/Precision set to 3 decimals on the indicator.
The inputs are both doubles.
Using a probe before the indicator shows that the result is rounded by the multiply function itself.
Attachments:
Multiply Rounding.vi ‏7 KB

Wes_OH wrote:
I have the Format/Precision set to 3 decimals on the indicator.
NO!
Your indicator is set to six significant digits, thus it shows only six significant digits and that's exactly what you get (324975). If you set it to 3 significant digits, it would display as (325000).
If you want to show a certain number of digits after the decimal point, you need to set the "precision type" to "Digits of Precision", not "significant digits", as you have it set now. Try it!
Also don't trust any probes or indicators, they never "round", i.e. never change the data. The displayed precision is just cosmetic and does NOT change the underlying data that is carried in the wire, which is always full precision. If you want a probe with 10 decimal digits, create a custom probe.
If you want to round, you need to do it in code.
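The same distinction holds in any language: display formatting is cosmetic and never changes the stored value. A small Java illustration of the idea (the numbers are made up, not taken from the attached VI):

public class DisplayVsData {
    public static void main(String[] args) {
        double product = 1.23456789 * 2.63291;   // full precision "on the wire"
        System.out.printf("%.3f%n", product);    // formatted display: only 3 digits shown
        System.out.println(product);             // the underlying value is unchanged
    }
}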
Message Edited by altenbach on 02-05-2007 08:37 AM
LabVIEW Champion. Do more with less code and in less time.
Attachments:
digits.png ‏25 KB

Similar Messages

  • Float point round issue

    Hi,
    here is a problem: I have created cubes, transfer rules, update rules etc.,
    and I transport these elements from the development server to the production server.
    On the development server, when I upload data, the floating point data is not changed,
    i.e. not rounded, but on the production server the float data has been rounded.
    The transfer rules and update rules are the same between the development and production servers. Can someone help me? Hungry for your advice, thanks!!

    Thanks, but that place is for Decimal Notation. I mean the data has been rounded on the production server,
    e.g.:
    before upload
    123.678
    after upload
    124
    so I think the decimal notation setting isn't the point.

  • Floating point rounding error

    I've been working on an Egyptian fraction program and for some reason I can't seem to figure out a way to fix this rounding error. An Egyptian fraction is a fraction that can be expressed as a sum of unit fractions, e.g. 3/4 = 1/2 + 1/4. For some reason, on certain fractions it rounds up and skips the correct fraction to subtract. When I run 2/7 it's supposed to equal 1/4 + 1/28, but it gives me 1/4 + 1/29 and decides to round. This is my code for the problem.
    public class EgyptianFraction {
        private static double epsilon = 1.0e-7;

        public static void main(String[] args) {
            greedySearch(2.0 / 7.0);
        }

        public static void greedySearch(double fraction) {
            for (int i = 2; fraction > epsilon; i++) {
                if (fraction - (1.0 / i) >= 0) {
                    fraction -= (1.0 / i);
                    System.out.println("1/" + i);   // print each term as it is subtracted
                }
            }
        }
    }
    //*****Output******
    //0.0357142857142857 - 0.03571428571428571 = -1.3877787807814457E-17
    When I print out all of the math involved, it gives the output above. The two values are fairly close, but for some reason it makes 1/28 bigger than the current fraction. The program should subtract 1/28 and then the fraction should be close enough to 0 and end. Is there any way you guys can think of to fix this problem?

    You have given the error as epsilon, so you can't expect `fraction - 1.0/i >= 0` to give an exact answer.
    Given your error is +/- epsilon, this expression should be
    fraction - 1.0/i >= epsilon || fraction - 1.0/i >= -epsilon
    or just
    fraction - 1.0/i >= -epsilon
    If you change this, you get 1/4 + 1/28 as the answer.
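    With that change the greedy loop terminates as intended. A minimal corrected sketch (collecting and printing the terms is my addition for illustration):

    public class EgyptianFractionFixed {
        private static final double epsilon = 1.0e-7;

        public static void main(String[] args) {
            greedySearch(2.0 / 7.0);   // prints: 1/4 + 1/28
        }

        public static void greedySearch(double fraction) {
            StringBuilder sum = new StringBuilder();
            for (int i = 2; fraction > epsilon; i++) {
                // relaxed comparison: a result within epsilon of zero still counts as >= 0
                if (fraction - (1.0 / i) >= -epsilon) {
                    fraction -= 1.0 / i;
                    sum.append(sum.length() == 0 ? "1/" + i : " + 1/" + i);
                }
            }
            System.out.println(sum);
        }
    }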

  • How does Java store floating point numbers?

    Hello
    I'm writing a paper about floating point numbers in which I compare an IEEE-754 compatible language [c] with Java. I read that Java can do a conversion decimal->binary->decimal and retain the same value whereas c can't. I found several documents discussing the pros and cons of that but I can't find any information about how it is implemented.
    I hope someone can explain it to me, or post a link to a site explaining it.
    Cheers
    Huttu

    So it is a myth.
    I still ask because I observed an oddity: when I store 1.4 in C and printf("%2.20f\n", a); I get 1.39999999999999991118. If I do the same in Java with System.out.printf("%2.20f%n", a); I get 1.4. If I multiply the variable by itself I get 1.95999999999999970000:
    double a = 1.4;
    a = a * a;
    System.out.printf("%2.20f%n", a);
    Does this happen because of the rounding in Java?
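    What Java does guarantee is on the printing side: Double.toString (used by println) produces the shortest decimal string that round-trips to the same double, which is why 1.4 can come back as "1.4" even though the stored binary value is not exactly 1.4. A small illustration (class name is mine):

    import java.math.BigDecimal;

    public class FloatRepr {
        public static void main(String[] args) {
            double a = 1.4;
            // shortest decimal string that round-trips:
            System.out.println(a);                 // 1.4
            // the exact binary value actually stored:
            System.out.println(new BigDecimal(a)); // 1.399999999999999911182158029987476766109466552734375
            // the error becomes visible after arithmetic:
            System.out.println(a * a);             // 1.9599999999999997
        }
    }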

  • 128-bit floating point numbers on new AMD quad-core Barcelona?

    There's quite a lot of buzz over at Slashdot about the new AMD quad core chips, announced yesterday:
    http://hardware.slashdot.org/article.pl?sid=07/02/10/0554208
    Much of the excitement is over the "new vector math unit referred to as SSE128", which is integrated into each [?!?] core; Tom Yager, of Infoworld, talks about it here:
    Quad-core Opteron? Nope. Barcelona is the completely redesigned x86, and it’s brilliant
    Now here's my question - does anyone know what the inputs and the outputs of this coprocessor look like? Can it perform arithmetic [or, God forbid, trigonometric] operations [in hardware] on 128-bit quad precision floats? And, if so, will LabVIEW be adding support for it? [Compare here versus here.]
    I found a little bit of marketing-speak blather at AMD about "SSE 128" in this old PDF Powerpoint-ish presentation, from June of 2006:
    http://www.amd.com/us-en/assets/content_type/DownloadableAssets/PhilHesterAMDAnalystDayV2.pdf
    WARNING: PDF DOCUMENT
    Page 13: "Dual 128-bit SSE dataflow, Dual 128-bit loads per cycle"
    Page 14: "128-bit SSE and 128-bit Loads, 128b FADD, 128 bit FMUL, 128b SSE, 128b SSE"
    etc etc etc
    While it's largely just gibberish to me, "FADD" looks like what might be a "floating point adder", and "FMUL" could be a "floating point multiplier", and God forbid that the two "SSE" units might be capable of computing some 128-bit cosines. But I don't know whether that old paper is even applicable to the chip that was released yesterday, and I'm just guessing as to what these things might mean anyway.
    Other than that, though, AMD's main website is strangely quiet about the Barcelona announcement. [Memo to AMD marketing - if you've just released the greatest thing since sliced bread, then you need to publicize the fact that you've just released the greatest thing since sliced bread...]

    I posted a query over at the AMD forums, and here's what I was told.
    I had hoped that e.g. "128b FADD" would be able to do something like the following:
    /* "quad" is a hypothetical 128-bit quad precision  */
    /* floating point number, similar to "long double"  */
    /* in recent versions of C++:                       */
    quad x, y, z;
    x = 1.000000000000000000000000000001;
    y = 1.000000000000000000000000000001;
    /* the hope was that "128b FADD" could perform the  */
    /* following 128-bit addition in hardware:          */
    z = x + y;
    However, the answer I'm getting is that "128b FADD" is just a set of two 64-bit adders running in parallel, which are capable of adding two vectors of 64-bit doubles more or less simultaneously:
    double x[2], y[2], z[2];
    x[0] = 1.000000000000000000000000000001;
    y[0] = 1.000000000000000000000000000001;
    x[1] = 2.000000000000000000000000000222;
    y[1] = 2.000000000000000000000000000222;
    /* Apparently the coordinates of the two "vectors" x & y       */
    /* can be sent to "128b FADD" in parallel, and the following   */
    /* two summations can be computed more or less simultaneously: */
    z[0] = x[0] + y[0];
    z[1] = x[1] + y[1];
    Thus e.g. "128b FADD", working in concert with "128b FMUL", will be able to [more or less] halve the amount of time it takes to compute a dot product of vectors whose coordinates are 64-bit doubles.
    So this "128-bit" circuitry is great if you're doing lots of linear algebra with 64-bit doubles, but it doesn't appear to offer anything in the way of greater precision for people who are interested in precision-sensitive calculations.
    By the way, if you're at all interested in questions of precision sensitivity & round-off error, I'd highly recommend Prof Kahan's page at Cal-Berzerkeley:
    http://www.cs.berkeley.edu/~wkahan/
    PDF DOCUMENT: How JAVA's Floating-Point Hurts Everyone Everywhere
    http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf
    PDF DOCUMENT: Matlab's Loss is Nobody's Gain
    http://www.cs.berkeley.edu/~wkahan/MxMulEps.pdf

  • SQL Loader and Floating Point Numbers

    Hi
    I have a problem loading floating point numbers using SQL Loader. If the number has more than 8 significant digits, SQL Loader appears to round the number, e.g. 1100000.69 becomes 1100000.7. The CTL file looks as follows:
    LOAD DATA
    INFILE '../data/test.csv' "str X'0A'"
    BADFILE '../bad/test.bad'
    APPEND
    INTO TABLE test
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    (Amount CHAR)
    and the data file as follows
    "100.15 "
    "100100.57 "
    "1100000.69 "
    "-2000000.33"
    "-100000.43 "
    the table defined as follows
    CREATE TABLE test
    (
    Amount number(15,4)
    ) TABLESPACE NNUT050M1;
    after loading a select returns the following
    100.15
    100100.57
    1100000.7
    -2000000
    -100000.4
    Thanks in advance
    Russell

    Actually, if you format the field to display as (say) 999,999,999.99, you will see that the correct numbers were loaded via SQL Loader; the rounding is only in the display.

  • Floating point precision of "Flatten to XML"

    It appears that the "Flatten to XML" function (LV 7.1.1) truncates floating point numbers to 5 decimal places.  This is a rather annoying limitation, since I'm trying to store a relative time in hours, accurate to the second (chosen by a previous coder that I have to be compatible with - otherwise I'd just use seconds).  Is there a workaround to this?  (other than multiplying by some power of 10 before flattening, and dividing after unflattening)
    Jaegen

    Hi Paul and Jaegen,
    I checked our databases and found entries of product suggestions and
    corrective action requests for the behavior of the limited precision
    when flattening to XML. I found an interesting reply from a LabVIEW
    developer on the request for further precision:
    The Flatten To XML primitive purposefully cuts off all numbers at 5 digits after the decimal. There are 3 main reasons for this:
    1. Information regarding precision is not propagated on the wire. Therefore, there is no real way to know how many significant digits or even places past the decimal point are appropriate when data is flattened to XML.
    2. Bloat. If all floating point values printed all of the possible decimal digits all of the time, this would produce some very large blocks of XML code.
    3. Given the arbitrarily complex nature of LabVIEW data, it is difficult to provide a method for specifying precision. For example, if a user has a cluster of clusters, each of which contains a single, double and extended representing various measurements of differing accuracy, how can one precision setting be applied to each of these values? The user would have to unbundle (and index if an array was involved), flatten, concatenate, and then do the reverse on the unflatten side.
    I suggest that you go ahead and file a new product suggestion by using the "feedback" link on www.ni.com/contact.
    It would be best if you could give some detailed information on how you
    would like LabVIEW to handle different scenarios while getting around
    the above issues.
    Thanks for the feedback!
    - Philip Courtois, Thinkbot Solutions

  • Floating point formats: Java/C/C++, PPC and Intel platforms

    Hi everyone
    Where can I find out about the various bit formats used for 32 bit floating numbers in Java and C/C++ for both Mac hardware platforms?
    I'm developing a Java audio application which needs to convert vast quantities of variable width integer audio samples to canonical float audio format. I've discovered that a floating point divide by the maximum integer value gives the correct answer but takes too much processor time, so I'm trying out bit-twiddling in C via JNI to carve out my own floating point bit patterns. This is very fast, however, I need to take into account the various float formats used on the different platforms so my app can be universal. Can anyone point me to the information?
    Thanks in advance.
    Bob

    I am not sure that Rosetta floating point works the same as PPC floating point. I was using RealBasic (a PPC Basic compiler) and moved one of my compiled applications to a MacBook Pro, and floating point comparisons that had been exact on the PPC stopped working under Rosetta. I changed the code to do an approximate comparison (i.e. abs(a-b) < tolerance) and this fixed things.
    I reported the problem to the RealBasic people and thought nothing more of it until I fired up Adobe's InDesign and not being used to working with picas, changed the units of measurement to inches. The default letter paper size was suddenly 8.5000500050005 inches instead of the more usual 8.5! This was not a big problem, but it appears that all of InDesign's page math is running into some kind of rounding errors.
    The floating point format is almost certainly IEEE, and I cannot imagine Rosetta doing anything other than using native hardware Intel floating point. On the other hand, there is a subtle difference in behavior.
    I am posting this here as a follow up, but I am also going to post this as a proper question in the forum. If you have to delete one or the other of these duplicate posts, please zap the reply, not the question.
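    On the format question itself: 32-bit floats in Java are guaranteed to be IEEE-754 binary32, and C uses the same format on both PPC and Intel Macs; what differs between the platforms is byte order (PPC is big-endian, Intel little-endian), which matters once raw bytes cross a JNI boundary. Java exposes the bit pattern directly (class name is mine):

    public class FloatBits {
        public static void main(String[] args) {
            // IEEE-754 binary32: 1 sign bit, 8 exponent bits, 23 fraction bits
            System.out.printf("0x%08X%n", Float.floatToIntBits(1.0f)); // 0x3F800000
            // assemble a float from hand-built fields (this encodes 1.0f):
            int sign = 0, exponent = 127, fraction = 0;
            int bits = (sign << 31) | (exponent << 23) | fraction;
            System.out.println(Float.intBitsToFloat(bits)); // 1.0
        }
    }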

  • [Solved] Bash and Floating point arithmetic

    I didn't realize how troublesome floating point numbers can be until now.
    What I want to do should be simple I dare say:
    properRounding( ( currentTime - downloadTime ) / ( dueTime - downloadTime ) * 100 )
    however best I've been able to achieve so far is this:
    echo "($currentTime-$downloadTime)/($dueTime-$downloadTime)*100" | bc -l
    Which prints the correct floating point value.
    I've tried to put the result in a variable, but I must be doing it wrong, as I get the most peculiar error. I can live without it, but it would make life easier.
    As for the rounding, that is a must. I've read that if you remove the -l param from bc it will round, but in my case something goes wrong as I just get the value 0 in return; besides, as I concluded after a simple test, bc always rounds down as in integer division, which I cannot use.
    So of course I'll continue reading and hopefully someday arrive at a solution, but I would very much appreciate if someone could lend me a hand.
    This after all is not just a learning experience, I'm trying to create something useful.
    Best regards.
    edit:
    nb:
    all variables are integers.
    Last edited by zacariaz (2012-09-09 14:50:18)

    Just for the fun of it, here's my progress thus far... Well, there really isn't much more to do. The rest is a conky thing.
    #!/bin/bash
    # Variables from unitinfo.txt - date as unix timestamps.
    dueTime="$(date +%s -d "$(grep 'Due time: ' ~/unitinfo.txt | cut -c11-)")"
    if [ "$1" = "end" ]
    then echo $dueTime
    fi
    downloadTime="$(date +%s -d "$(grep 'Download time: ' ~/unitinfo.txt | cut -c16-)")"
    if [ "$1" = "start" ]
    then echo $downloadTime
    fi
    progress="$(grep 'Progress: ' ~/unitinfo.txt | cut -c11-12 | sed 's/ *$//')"
    if [ "$1" = "prog1" ]
    then echo $progress
    fi
    # The rest
    #progress valued 0-1 for use with conky progress bars
    progress2=$( echo 2k $progress 100 / f | dc )
    if [ "$1" = "prog2" ]
    then echo $progress2
    fi
    # Current time - unix timestamp.
    currentTime="$(date +%s)"
    # Remaining time - unix timestamp
    remainingTime=$(( dueTime-$currentTime ))
    if [ "$1" = "remain" ]
    then echo $remainingTime
    fi
    # Elapsed time - unix timestamp
    elapsedTime=$(( currentTime-downloadTime ))
    if [ "$1" = "elap1" ]
    then echo $elapsedTime
    fi
    # Total amount of time available - unix timestamp
    totalTime=$(( dueTime-downloadTime ))
    if [ "$1" = "total" ]
    then echo $totalTime
    fi
    # How much time has elapsed in percent
    progress3="$(echo 3k $elapsedTime $totalTime / 100 \* 0k 0.5 + 1 / f | dc)"
    if [ "$1" = "elap2" ]
    then echo $progress3
    fi
    # Like the above but 0-1
    progress4=$( echo 2k $progress3 100 / f | dc )
    if [ "$1" = "elap3" ]
    then echo $progress4
    fi
    # In percent, expected completion vs $dueTime - less than 100 is better.
    expectedCompletion="$( echo 3k $elapsedTime 10000 \* $progress / $totalTime / 0k 0.5 + 1 / f | dc )"
    if [ "$1" = "exp1" ]
    then echo $expectedCompletion
    fi
    # Same as above, but unix timestamp
    expectedCompletion2=$(( downloadTime+(expectedCompletion*totalTime/100) ))
    if [ "$1" = "exp2" ]
    then echo $expectedCompletion2
    fi
    #efficiency
    if [ "$1" = "ef1" ]; then
    if [ $progress -lt $progress3 ]
    then echo "you're behind schedule."
    elif [ $progress -eq $progress3 ]
    then echo "You're right on schedule."
    else
    echo "You're ahead of schedule."
    fi
    fi
    if [ "$1" = "ef2" ]; then
    if [ $expectedCompletion -gt 100 ]
    then echo "You're not going to make it!"
    else
    echo "You're going to make it!"
    fi
    fi

  • Designing for floating point error

    Hello,
    I am stuck with floating point errors and I'm not sure what to do. Specifically, I need to determine whether a point is inside a triangle or exactly on its edge. I use three cross products, with an edge as one vector and, as the other vector, the one from the edge start to the query point.
    The theory says that if the cross product is 0 then the point is directly on the line. If the cross product is <0, then the point is inside the triangle. If >0, then the point is outside the triangle.
    To account for the floating point error I was running into, I changed the test from ==0 to abs(cross_product) < 1e-6.
    The trouble is, I run into cases where the algorithm fails because a point is classified as being on the edge of the triangle when it isn't.
    I'm not really sure how to handle this.
    Thanks,
    Eric

    So, I changed epsilon from 1e-6 to 1e-10 and it seems to work better (I am using doubles, btw). However, that doesn't really solve the problem, it just buries it deeper.
    I'm interested in how actual commercial applications (such as video games or robots) deal with this issue. Obviously you don't see them giving you an error every time a floating point error messes something up. I think the issue here is that I am using data gathered from physical sensors, meaning the inputs can be arbitrarily close to each other. I am worried, though, that if I round the inputs, I will get different data points with the exact same x and y value, and I'm not sure how the geometry algorithms will handle that.
    Also, I am creating a global navigation mesh of triangles with this data. Floating point errors that are not accounted for correctly lead to triangles inside one another (as opposed to adjacent to each other), which damages the integrity of the entire mesh, as it's hard to get your program to fix its own mistake.
    FYI:
    I am running java 1.6.0_20 in Eclipse Helios with Ubuntu 10.04x64
    Here is some code that didn't work using 1e-6 for delta. The test point new Point(-294.18294451166435,-25.496614108304477), is outside the triangle, but because of the delta choice it is seen as on the edge:
    class Point {
        double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
        double getX() { return x; }
        double getY() { return y; }
        boolean equals(Point p) { return x == p.x && y == p.y; }
    }

    class Edge {
        Point start, end;
        Edge(Point start, Point end) { this.start = start; this.end = end; }
        Point getStart() { return start; }
        Point getEnd() { return end; }
    }

    class Triangle {
        // the floating point tolerance (delta) discussed in the post
        static final double DELTA = 1e-6;
        Edge[] edges;

        Triangle(Edge a, Edge b, Edge c) { edges = new Edge[]{a, b, c}; }

        public Point[] getOrderedPoints() throws Exception {
            Point[] points = new Point[3];
            points[0] = edges[0].getStart();
            points[1] = edges[0].getEnd();
            if (edges[1].getStart().equals(points[0]) || edges[1].getStart().equals(points[1]))
                points[2] = edges[1].getEnd();
            else if (edges[1].getEnd().equals(points[0]) || edges[1].getEnd().equals(points[1]))
                points[2] = edges[1].getStart();
            else
                throw new Exception("MalformedTriangleException");
            orderNodes(points);
            return points;
        }

        /** Orders node1, node2 and node3 in clockwise order; more specifically,
         * node1 is swapped with node2 if doing so will order the nodes clockwise
         * with respect to the other nodes.
         * Does not modify node1, node2, or node3; modifies only the points array.
         * Note: the "order" of nodes 1, 2, and 3 is clockwise when the path from
         * point 1 to 2 to 3 back to 1 travels clockwise on the circumcircle of
         * points 1, 2, and 3. */
        private void orderNodes(Point[] points) {
            // the K component (z axis) of the cross product a x b
            int xProductK = crossProduct(points[0], points[0], points[1], points[2]);
            /*        (3)
             *          +
             *        ^
             *      B
             * (1)+             + (2)
             *       ------A-->
             * Graphical representation of vectors A and B. 1, 2, and 3 are not in
             * clockwise order, and the cross product of A and B is positive. */
            if (xProductK > 0) {
                // the cross product is positive, so B is oriented as such with
                // respect to A and 1, 2, 3 are not in clockwise order;
                // swapping any 2 points in a triangle changes its "clockwise order"
                Point temp = points[0];
                points[0] = points[1];
                points[1] = temp;
            }
        }

        /**
         * Check if a point is inside this triangle.
         * @param point the test point
         * @return 1 if the point is inside the triangle, 0 if the point is on an
         *         edge, -1 if the point is outside the triangle
         */
        public int enclosesPointDetailed(Point point) throws Exception {
            Point[] points = getOrderedPoints();
            int cp1 = crossProduct(points[0], points[0], points[1], point);
            int cp2 = crossProduct(points[1], points[1], points[2], point);
            int cp3 = crossProduct(points[2], points[2], points[0], point);
            if (cp1 < 0 && cp2 < 0 && cp3 < 0)
                return 1;
            else if (cp1 <= 0 && cp2 <= 0 && cp3 <= 0)
                return 0;
            else
                return -1;
        }

        /** Signum of the z component of (end1-start1) x (end2-start2), using DELTA
         *  to decide when the product counts as zero. */
        public static int crossProduct(Point start1, Point start2, Point end1, Point end2) {
            double crossProduct = (end1.getX() - start1.getX()) * (end2.getY() - start2.getY())
                                - (end1.getY() - start1.getY()) * (end2.getX() - start2.getX());
            if (crossProduct > DELTA)
                return 1;
            else if (Math.abs(crossProduct) < DELTA)
                return 0;
            else
                return -1;
        }
    }

    class TriangleTest {
        public static void main(String[] args) throws Exception {
            Point a = new Point(-294.183483785282, -25.498196740397056);
            Point b = new Point(-294.18345625812026, -25.49859505161433);
            Point c = new Point(-303.88217906116796, -63.04183512930035);
            Edge aa = new Edge(a, b);
            Edge bb = new Edge(c, a);
            Edge cc = new Edge(b, c);
            Triangle aaa = new Triangle(aa, bb, cc);
            Point point = new Point(-294.18294451166435, -25.496614108304477);
            System.out.println(aaa.enclosesPointDetailed(point));
        }
    }
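    For what the thread calls "actual commercial applications": one standard approach is to make the orientation sign exact instead of tuning delta. Every double converts exactly to a BigDecimal, and BigDecimal add/subtract/multiply are themselves exact, so the sign of this cross product can be computed with no rounding error at all. A sketch of that idea (class and method names are mine, not from the post):

    import java.math.BigDecimal;

    public class ExactOrientation {
        /** Exact signum of the z component of (end1-start1) x (end2-start2).
         *  All BigDecimal operations used here are exact, so no epsilon is needed. */
        public static int crossProductSign(double s1x, double s1y, double s2x, double s2y,
                                           double e1x, double e1y, double e2x, double e2y) {
            BigDecimal ax = new BigDecimal(e1x).subtract(new BigDecimal(s1x));
            BigDecimal ay = new BigDecimal(e1y).subtract(new BigDecimal(s1y));
            BigDecimal bx = new BigDecimal(e2x).subtract(new BigDecimal(s2x));
            BigDecimal by = new BigDecimal(e2y).subtract(new BigDecimal(s2y));
            return ax.multiply(by).subtract(ay.multiply(bx)).signum();
        }

        public static void main(String[] args) {
            // The edge a->b and the test point from the post; prints the exact
            // sign (+1, 0 or -1), classifying the point with no tolerance at all.
            System.out.println(crossProductSign(
                -294.183483785282, -25.498196740397056,
                -294.183483785282, -25.498196740397056,
                -294.18345625812026, -25.49859505161433,
                -294.18294451166435, -25.496614108304477));
        }
    }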

  • The BIG failure of the floating point computation

    The big failure of the floating point computation .... Mmmm
    You are writing some code ... writing a simple function ... and using type Double for your computation.
    You are then expecting to get a result from your function that is at least close to the exact value ...
    Are you right??
    Let's see this with an example.
    In my example, I will approximate the value of pi. To do so, I will inscribe a circle into a polygon, starting with a hexagon, and compute the half perimeter of the polygon. This will give me an approximation of pi.
    Then, in a loop, I will double the number of sides of that polygon. Each iteration will therefore give me a better approximation.
    I will perform this twice, using the same algorithm and the same equation ... with the only difference that I will write that equation in two different forms.
    Since I don't want to throw equations and algorithms at you without explanation, here's the idea:
    (I wrote that with Word to make it easier to read; reproduced here in plain text.)
    Starting from t0 = tan(30°) = 1/√3 (the circle inscribed in a hexagon), each doubling step computes the half-angle tangent, and the half perimeter of the 6·2^n-sided polygon approximates pi:
        First form:   t(n+1) = (√(t(n)² + 1) − 1) / t(n)
        Second form:  t(n+1) = t(n) / (√(t(n)² + 1) + 1)
        pi ≈ 6 · 2^n · t(n)
    Simple enough ...
    It is important to understand that the two forms of the equation are mathematically always equal for a given value of "t", since it is in fact the exact same equation written in two different ways (multiply the first form by (√(t² + 1) + 1) / (√(t² + 1) + 1) to get the second).
    Now let's put these two equations in code.
    Public Class Form1

        Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
            RichTextBox1.Font = New Font("Consolas", 9)
            RichTextBox2.Font = New Font("Consolas", 9)
            TextBox1.Font = New Font("Consolas", 12)
            TextBox1.TextAlign = HorizontalAlignment.Center
            TextBox1.Text = "3.14159265358979323846264338327..."

            Dim tt As Double
            Dim Pi As Double

            '===============================================================
            'Using First Form of the equation
            '===============================================================
            'start with a hexagon
            tt = 1 / Math.Sqrt(3)
            Pi = 6 * tt
            PrintPi(Pi, 0, RichTextBox1)
            'Now we will double the number of sides of the polygon 25 times
            For n = 1 To 25
                tt = (Math.Sqrt((tt ^ 2) + 1) - 1) / tt
                Pi = 6 * (2 ^ n) * tt
                PrintPi(Pi, n, RichTextBox1)
            Next

            '===============================================================
            'Using Second Form of the equation
            '===============================================================
            'start with a hexagon
            tt = 1 / Math.Sqrt(3)
            Pi = 6 * tt
            PrintPi(Pi, 0, RichTextBox2)
            'Now we will double the number of sides of the polygon 25 times
            For n = 1 To 25
                tt = tt / (Math.Sqrt((tt ^ 2) + 1) + 1)
                Pi = 6 * (2 ^ n) * tt
                PrintPi(Pi, n, RichTextBox2)
            Next
        End Sub

        Private Sub PrintPi(t As Double, n As Integer, RTB As RichTextBox)
            Dim S As String = t.ToString("#.00000000000000")
            RTB.AppendText(S & " " & Format((6 * (2 ^ n)), "#,##0").PadLeft(13) & " sides polygon")
            'Color in red the leading digits that match the reference value of pi
            Dim i As Integer = 0
            While S(i) = TextBox1.Text(i)
                i += 1
            End While
            Dim CS = RTB.GetFirstCharIndexFromLine(RTB.Lines.Count - 1)
            RTB.SelectionStart = CS
            RTB.SelectionLength = i
            RTB.SelectionColor = Color.Red
            RTB.AppendText(vbCrLf)
        End Sub

    End Class
    The results:
      The text box contains the real value of PI.
      The set of results on the left were obtained with the first form of the equation, those on the right with the second form of the equation.
      The red digits show the digits that are exact for pi.
    On the right, where we used the second form of the equation, we see that the result converges nicely toward pi as the number of sides of the polygon increases.
    But on the left, with the first form of the equation, we see that after just a few iterations the function stops converging and then starts diverging from the expected value.
    What is wrong ... did I make an error in the first form of the equation?
    Well, probably not, since this first form of the equation is the one you will find in your math book.
    So, what's up here??
    The problem is this:
    What is happening is that at each iteration when using the first form, I subtract 1 from the radical. As the iterations progress, t becomes tiny and the radical √(t² + 1) is only barely larger than 1 (for t ≈ 1e-8, t² + 1 = 1 + 1e-16, which a double already rounds to exactly 1), so the subtraction cancels almost all of the significant digits, and at each iteration I lose precision to this rounding.
    And after only 25 iterations, I have accumulated such a big rounding error that even the digit to the left of the decimal point is wrong.
    When using the second form of the equation, I am adding 1 to the radical, so there is no cancellation and I get no loss of precision.
    So, what should we learn from this?
    Well, ... we should at least remember that when using floating point to compute a formula, even a simple one, as shown here, we should always check the accuracy of the result. There are some forms of an expression that floating point simply cannot evaluate accurately.
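    To see the cancellation in isolation, here is a minimal standalone illustration (Java rather than VB, but Double behaves identically):

    public class Cancellation {
        public static void main(String[] args) {
            double t = 1e-8;
            // first form: 1 + t*t rounds to exactly 1.0, so the subtraction yields 0
            System.out.println((Math.sqrt(t * t + 1) - 1) / t);   // 0.0
            // second form (conjugate): no subtraction of nearly equal values
            System.out.println(t / (Math.sqrt(t * t + 1) + 1));   // ~5.0E-9
        }
    }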

    I manually (yes, manually) did the computations with calc.exe. It has a higher accuracy. PI after 25 iterations is 3.1415926535897934934541990520762.
    This means tt = 0.000000015604459512183037864437694609544  compared to 0.0000000138636291675699 computed by the code.
    Armin
    Manually ...
      You did better than Archimedes.
      He only got to the 96-sided polygon and gave up. Then he said that pi was 3.1427.

  • Determining whether an integer is a floating-point number

    I want to create a program where, when the value input into the numeric constant is an integer (a floating-point number with all zeros to the right of its decimal point), an LED on the front panel lights. The LED will remain unlit for any other floating-point number. I know I want to use a Round to Nearest function, but I'm not sure where to go from there.

    rtufaro wrote:
    I want to create a program that when an integer is input into the numeric constant,
    You mean CONTROL, right?
    You just need a type of rounding.  Doesn't matter if you round up or down.  So you just round and then compare the input to the rounded value.  If they are equal, you light up your LED.  So all you need is a numeric control, 2 functions, and a boolean indicator.
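    In text form the same check is a one-liner; a Java sketch of the round-and-compare idea (the input values are made-up examples):

    public class WholeNumberCheck {
        public static void main(String[] args) {
            double[] inputs = {3.0, 3.25, -7.0};
            for (double x : inputs) {
                // round to nearest and compare with the original value
                boolean led = (x == Math.rint(x));
                System.out.println(x + " -> LED " + (led ? "on" : "off"));
            }
        }
    }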

  • How to restrict the decimal place of a floating point number?

    Hi,
    Here is my code:
    public void TwoDecimal(double u) {
         String w = Double.toString(u);
         int c = w.length();
         System.out.println(c);
         if (c <= 5) {
            double a = Double.parseDouble(w);
            System.out.println(a);
         } else {
            System.out.println("Invalid input!");
         }
    }
    I want to show a floating point number which has 2 digits and 2 decimal places, e.g. 45.82, 29.67. This number is input by the user and passed as a parameter.
    For cases like the above sample floating point numbers, it displays the proper value of 'c', e.g. 45.67 will display 5.
    However, when I passed 99999, it showed 7; 9999 returned 6, not 5.
    So, if the user does not input the '.', does it append 2 implicit chars? i.e. 99999.0 and 9999.0; that would explain why it returned 7 and 6 for the length of the string respectively.
    How can I fix it?
    And is there a better algorithm?
    Please advise.
    gogo

    When dealing with a known precision, in your case hundredths, it is often a good idea to use an integer type and add in the decimals on printing only. This is often the case in banking systems: almost all of them use integer types (read: long) holding pennies to store monetary values. Ever seen someone type in a value on a credit card machine? For something like $20 they press "2" "0" "0" "0". The machine knows the lowest denomination is a cent, so it knows where to put the decimal place. I suggest you do something like this. It also helps to avoid base-2 round-off errors.
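    A minimal sketch of that suggestion (names and amounts are made up):

    public class Cents {
        public static void main(String[] args) {
            long cents = 2000;   // $20 entered as "2" "0" "0" "0"
            // the decimal point is inserted only when printing (non-negative amounts assumed)
            System.out.printf("$%d.%02d%n", cents / 100, cents % 100);   // $20.00
        }
    }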
    -Spinoza

  • Floating point operations is slower when small values are used?

    I have the following simple program that multiplies two different floating point numbers many times. As you can see, one of the numbers is very small. When I measured the time taken by both multiplications, I was surprised to find that the little number takes much longer than the other one. It seems that working with small doubles is slower... Does anyone know what is happening?
    public class SmallDoubles {   // wrapper class added so the snippet compiles
        public static void main(String[] args) throws Exception {
            long iterations = 10000000;
            double result;
            double number = 0.1D;
            double numberA = Double.MIN_VALUE;   // smallest positive double
            double numberB = 0.0008D;
            long startTime, endTime, elapsedTime;
            // multiply numberA
            startTime = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++)
                result = number * numberA;
            endTime = System.currentTimeMillis();
            elapsedTime = endTime - startTime;
            System.out.println("Number A) Time elapsed: " + elapsedTime + " ms");
            // multiply numberB
            startTime = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++)
                result = number * numberB;
            endTime = System.currentTimeMillis();
            elapsedTime = endTime - startTime;
            System.out.println("Number B) Time elapsed: " + elapsedTime + " ms");
        }
    }
    Result:
    Number A) Time elapsed: 3546 ms
    Number B) Time elapsed: 110 ms
    Thanks,
    Diego

    Verrry interrresting... After a few tweaks (sum & print the multiplication result to prevent Hotspot from removing the entire loop, move stuff to one method to avoid code alignment effects or such, loop to get Hotspot to compile everything; code below),
    I find that "java -server" gives the same times for both the small and the big value, whereas "java -Xint" and "java -client" exhibit the asymmetry. So should I conclude that my CPU's floating point unit treats both values the same, but the client/server compilers do something ...what?
    (You may need to add or remove a zero in "iterations" so that you get sane times with -client and -server.)
    public class t {
        public static void main(String[] args) {
            // repeat so Hotspot has compiled everything before the later timings
            for (int n = 0; n < 10; n++) {
                doit(Double.MIN_VALUE);
                doit(0.0008D);
            }
        }

        static void doit(double x) {
            long iterations = 100000000;
            double result = 0;
            double number = 0.1D;
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++)
                result += number * x;   // accumulate so the loop isn't optimized away
            long end = System.currentTimeMillis();
            System.out.println("time for " + x + ": " + (end - start) + " ms, result " + result);
        }
    }
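    One more data point worth checking: Double.MIN_VALUE is a subnormal number, and on many CPUs arithmetic involving subnormal operands or results is handled by much slower microcode or trap paths, which is a classic cause of exactly this kind of slowdown. A quick way to confirm the value is subnormal (class name is mine):

    public class SubnormalCheck {
        public static void main(String[] args) {
            double d = Double.MIN_VALUE;   // 4.9e-324
            System.out.println(d < Double.MIN_NORMAL);   // true -> subnormal
            // raw bit pattern: only the lowest fraction bit is set
            System.out.println(Long.toHexString(Double.doubleToLongBits(d)));   // 1
        }
    }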

  • Floating point Number & Packed Number

    Hi, can anyone tell me what is the difference between using a floating point number and a packed number?
    When will each be used?

    Packed numbers - type P
    Type P data allows digits after the decimal point. The number of decimal places is generic, and is determined in the program. The value range of type P data depends on its size and the number of digits after the decimal point. The valid size can be any value from 1 to 16 bytes. Two decimal digits are packed into one byte, while the last byte contains one digit and the sign. Up to 14 digits are allowed after the decimal point. The initial value is zero. When working with type P data, it is a good idea to set the program attribute Fixed point arithmetic. Otherwise, type P numbers are treated as integers.
    You can use type P data for such values as distances, weights, amounts of money, and so on.
    Floating point numbers - type F
    The value range of type F numbers is 1×10^-307 to 1×10^308 for positive and negative numbers, including 0 (zero). The accuracy range is approximately 15 decimals, depending on the floating point arithmetic of the hardware platform. Since type F data is internally converted to a binary system, rounding errors can occur. Although the ABAP processor tries to minimize these effects, you should not use type F data if high accuracy is required. Instead, use type P data.
    You use type F fields when you need to cope with very large value ranges and rounding errors are not critical.
    Using I and F fields for calculations is quicker than using P fields. Arithmetic operations using I and F fields are very similar to the actual machine code operations, while P fields require more support from the software. Nevertheless, you have to use type P data to meet accuracy or value range requirements.
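    The same trade-off exists outside ABAP. By way of illustration, Java's BigDecimal behaves like type P (exact decimal arithmetic) while double behaves like type F (fast binary arithmetic with possible rounding):

    import java.math.BigDecimal;

    public class PackedVsFloat {
        public static void main(String[] args) {
            // binary floating point, like type F: rounding error appears
            System.out.println(0.1 + 0.2);   // 0.30000000000000004
            // decimal arithmetic, like type P: exact for decimal fractions
            System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2")));   // 0.3
        }
    }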
    reward if useful
