Floating point precision of "Flatten to XML"

It appears that the "Flatten to XML" function (LV 7.1.1) truncates floating point numbers to 5 decimal places.  This is a rather annoying limitation, since I'm trying to store a relative time in hours, accurate to the second (chosen by a previous coder that I have to be compatible with - otherwise I'd just use seconds).  Is there a workaround to this?  (other than multiplying by some power of 10 before flattening, and dividing after unflattening)
Jaegen

Hi Paul and Jaegen,
I checked our databases and found entries of product suggestions and
corrective action requests regarding the limited precision when
flattening to XML. I found an interesting reply from a LabVIEW
developer on the request for further precision:
The Flatten To XML primitive purposefully cuts off all numbers at 5
digits after the decimal. There are 3 main reasons for this:
Information regarding precision is not propagated on the wire.
Therefore, there is no real way to know how many significant digits, or
even how many places past the decimal point, are appropriate when data is
flattened to XML.
Bloat. If all floating point values printed all of the possible
decimal digits all of the time, this would provide for some very large
blocks of XML code.
Given the arbitrarily complex nature of LabVIEW data, it is
difficult to provide a method for specifying precision. For example, if
a user has a cluster of clusters, each of which contain a single,
double and extended representing various measurements of differing
accuracy, how can one precision setting be applied to each of these
values? The user would have to unbundle (and index if an array was
involved), flatten, concatenate, and then the reverse on the unflatten
side.
I suggest that you go ahead and file a new product suggestion by using the "feedback" link on www.ni.com/contact.
It would be best if you could give some detailed information on how you
would like LabVIEW to handle different scenarios while getting around
the above issues.
Thanks for the feedback!
- Philip Courtois, Thinkbot Solutions

Similar Messages

  • Precision of Flatten to XML

    In LabVIEW 6.1 the Flatten to XML function seems to truncate double precision data to 5 decimal places. Is there a way to extend the precision to 10 or 15 decimal places?

I know that this thread is quite out of date, but as far as I know there is still no solution for this problem.
I wrote a .NET DLL to parse and reformat the LabVIEW XML into a more human-readable form that is also not type-specific (only specific to the control names). I thought the double-precision problem had already been solved, but it definitely has not. Is there any reason that the precision is always truncated? As XML is nowadays a common way to share data, I took it for granted that LabVIEW supports it completely, but now I know it doesn't. Is there any workaround? I don't want to use OpenG as it is too slow, and also not localization-independent; it shouldn't be a problem just to extend the precision to the maximum possible, printf() has always supported this.
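For illustration only (this is Java, not LabVIEW, and the names are made up), here is a quick sketch of the formatting point made above: an IEEE 754 double can always be written with enough decimal digits to be parsed back to the exact same value, so full-precision text output is possible in principle.

    public class RoundTrip {
        public static void main(String[] args) {
            double original = 0.263746;
            String fiveDigits = String.format("%.5f", original);   // 5 decimal places, like Flatten To XML
            String full       = String.format("%.17g", original);  // 17 significant digits always round-trip
            System.out.println(fiveDigits);                           // 0.26375 -- information is lost
            System.out.println(Double.toString(original));            // 0.263746 -- shortest round-trip form
            System.out.println(Double.parseDouble(full) == original); // true -- the full string restores the value
        }
    }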

  • Problem with floating point precision

    Hello!
    I need to store in a variable a number like this 1234512345.123412345.
But when I store it in a double or long double... it gets truncated to 1234512345.123412... only six decimals?
    What can I do??
    Thanks!

If you are going to use the BigDecimal class, it is worth bearing in mind a tip that was passed on to me by georgemc. If you create your BigDecimal instances using the constructor that accepts a String as input, you avoid the rounding that has already happened in a double literal, and you can easily control how many digits should follow the decimal point.
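A minimal sketch of that tip (assuming the standard java.math.BigDecimal class; the value is the one from the question above):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class BigDecimalDemo {
        public static void main(String[] args) {
            // String constructor: keeps every digit exactly as typed
            BigDecimal fromString = new BigDecimal("1234512345.123412345");
            // double constructor: inherits the binary rounding that already happened
            BigDecimal fromDouble = new BigDecimal(1234512345.123412345);
            System.out.println(fromString);  // 1234512345.123412345
            System.out.println(fromDouble);  // 1234512345.1234123706817626953125
            // the scale (digits after the decimal point) is easy to control
            System.out.println(fromString.setScale(4, RoundingMode.HALF_UP)); // 1234512345.1234
        }
    }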

  • Precision loss - conversions between exact values and floating point values

    Hi!
    I read this in your SQL Reference manual, but I don't quite get it.
    Conversions between exact numeric values (TT_TINYINT, TT_SMALLINT, TT_INTEGER, TT_BIGINT, NUMBER) and floating-point values (BINARY_FLOAT, BINARY_DOUBLE) can be inexact because the exact numeric values use decimal precision whereas the floating-point numbers use binary precision.
Could you please give two examples: one where a TT_TINYINT is converted to a BINARY_DOUBLE and one where a TT_BIGINT is converted to a DOUBLE, in both cases showing the lost precision? This would be very helpful.
    Thanks!
    Sune
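Not from the manual, but as a rough illustration of the effect Sune is asking about (using plain Java long/double in place of TT_BIGINT/BINARY_DOUBLE, which is an assumption on my part): a binary double has only 53 significand bits, so large exact integers and most decimal fractions get rounded.

    public class ExactToBinary {
        public static void main(String[] args) {
            long bigint = 1234567890123456789L;     // exact as a 64-bit integer
            double asDouble = (double) bigint;       // only 53 bits of significand survive
            System.out.println(bigint);              // 1234567890123456789
            System.out.printf("%.0f%n", asDouble);   // 1234567890123456768 -- rounded
            // Decimal fractions are affected too: 0.1 has no exact binary representation.
            System.out.println(new java.math.BigDecimal(0.1));
            // 0.1000000000000000055511151231257827021181583404541015625
            // A tiny value such as a TT_TINYINT (e.g. 7) converts exactly, since it needs few bits.
        }
    }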

chokpa wrote:
public Example(float... values) {}
new Example(1, 1e2, 3.0, 4.754);
It accepts it if I just use 1, 2, 3, 4 as the values being passed in, but doesn't like it if I use actual float values.
Those are double literals; try
new Example(1f, 1e2f, 3.0f, 4.754f);

  • Single Precision Floating Point Numbers to Bytes

Ok, here is some code that I wrote a while back with some help from the support staff. It is designed to take in single-precision floating point numbers that are stored as 4 bytes and convert them to a decimal value. It works off of a UDP input string and then also reformats the string. I have the ability to look at up to 4000 parameters from this one UDP string. But now what I want to do is the opposite of what I have written, and also perhaps get rid of the MATLAB I used in it as well. What I would like to be able to do is input a decimal value, have it converted into the 4-byte grouping that makes up this decimal, and then have it inserted back into a single long string with that grouping of bytes in the right order. A better explanation of what was done can be found on this website
http://www.jefflewis.net/XPlaneUDP_8.html
as the original code followed the "Single Precision Floating Point Numbers and Bytes" example on that site, but what I want to do is "Going from Single Precision Floating Point Numbers to Bytes". The site also explains the UDP string that is being represented. Also attached is the original code that I am trying to simply reverse.
    Attachments:
x-plane_udp_master.vi (34 KB)

Perhaps what you are doing is an exercise in programming the math of the byte conversion yourself.
    But if you are just interested in getting the conversion done, why not use the typecast function?
    If the bytes happen to be in the wrong order for wherever you need to send the string, then you can use string functions to rearrange them.
    Message Edited by Ravens Fan on 10-02-2007 08:50 PM
    Attachments:
Example_BD.png (3 KB)
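For what it's worth, here is the same idea in text form (Java instead of LabVIEW, names invented): a single-precision float packed into its 4 raw bytes and back, with the byte order made explicit so the bytes can be rearranged for whatever order the UDP string expects.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class FloatBytes {
        public static void main(String[] args) {
            float value = 123.456f;

            // float -> 4 bytes (choose LITTLE_ENDIAN instead if the receiver expects that order)
            byte[] bytes = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN).putFloat(value).array();

            // 4 bytes -> float (the "typecast" direction the original VI already performs)
            float back = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN).getFloat();

            System.out.printf("%f -> %02X %02X %02X %02X -> %f%n",
                    value, bytes[0], bytes[1], bytes[2], bytes[3], back);
        }
    }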

  • Floating point numbers into XML file

    Hi,
I am a learner in LabVIEW and I am using LabVIEW version 8.5.
When I use the Flatten To XML component to convert floating point numbers (having more than 5 decimal places) into XML, the output always contains only 5 decimal places.
But I want the exact decimal number to be written to the XML, i.e. 0.263746 should not be displayed as 0.26375 in the XML.
    Do you have any suggestions ?
    Attachments:
Float_to_XML.vi (7 KB)

I tested it and could not see your problem in LabVIEW 2009, so it is perhaps a bug in your LabVIEW version. You can use this VI as a workaround.
    Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
    (Sorry no Labview "brag list" so far)
    Attachments:
Float_to_XML[2].vi (8 KB)

• Precision operation (Floating point) on FPGA 2011

    Dear Experts....
For my application I have to perform demodulation on an FPGA. I want to store an array of double precision numbers, but when I try to perform any double precision operation I get the error "Wire: Type not supported in current target". From the forums I came to know that in LabVIEW 2011 I cannot have double precision operations on the FPGA. What is the alternative? Please help me with this; my work has been delayed because of this problem.
    Thanks... Kindly guide...
    Solved!
    Go to Solution.

Dear Mathan, thanks for your reply. I have already gone through the link you sent, but for my application I have to have an array of floating point numbers; I cannot use integer numbers. Here I have attached my VI. I have to mix a 20 MHz signal with sine and cosine signals to achieve demodulation, so for the sine and cosine values I have to have floating point. Is there any way to overcome this problem?
    Attachments:
fpga.vi (30 KB)
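This is not the solution given in the thread, but the usual workaround on targets without floating point is a scaled-integer (fixed-point) lookup table for the sine and cosine values, so the mixing only needs integer multiplies. A rough sketch of the idea in Java:

    public class ScaledSineTable {
        static final int TABLE_SIZE = 1024;
        static final int SCALE = 32767;                    // Q1.15-style fixed-point scaling
        static final short[] SINE = new short[TABLE_SIZE];

        static {
            // Precompute one full cycle of sine as scaled 16-bit integers
            for (int i = 0; i < TABLE_SIZE; i++) {
                SINE[i] = (short) Math.round(SCALE * Math.sin(2 * Math.PI * i / TABLE_SIZE));
            }
        }

        // Mix one input sample with the local oscillator at the given table phase
        static int mix(int sample, int phaseIndex) {
            // the product carries the 2^15 scaling; shift it back out
            return (sample * SINE[phaseIndex & (TABLE_SIZE - 1)]) >> 15;
        }

        public static void main(String[] args) {
            System.out.println(mix(1000, 256));  // phase 256/1024 = 90 degrees, prints 999 (~1000 * sin 90)
        }
    }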

  • Port of Giac [Longfloat] Library to HP Prime allowing [Variable Precision] Floating Point Arithmetic

HP Prime CAS is based on Giac, but [ misses ] some of its Special Purpose Libraries, like the Giac [ Longfloat ] Library, which if [ Ported ] would allow HP Prime to be the First ( handheld ) Calculator to provide [ Variable Precision ] Floating Point Arithmetic routines ( fully integrated at its CAS Kernel level ). HP Prime already has internal calls to the [ Longfloat ] library, but they result in [ Error Messages ], like when selecting more than 14 Digits in [ evalf ] Numerical evaluation, as for example: evalf( 1/7, 14 ) producing 0.142857142857 and evalf( 1/7, 15 ) resulting in "Longfloat library not available Error: Bad Argument Value". The same happens when one tries to Extend the [ Digits ] variable to a value greater than 13, like Digits := 50, which returns Digits := 13 as output ( from any specified value higher than 13 ). The porting of the [ Longfloat ] library to HP Prime would open many New opportunities in [ handheld ] Numerical Computation, usually available only on Top Level Computer Algebra Systems, like Maple, Mathematica or Maxima, and also on Giac/XCas. It's worth mentioning that Any [ Smartphone ] with the Xcas/Giac App installed can fully explore [ Variable Precision ] Floating Point Arithmetic on current ARM based architectures, which means that a Port of the [ Longfloat ] Library from Giac to HP Prime, although requiring some considerable amount of labor, is Not an impossible task. The Benefits of such a Longfloat [ Porting ] to a handheld Calculator like HP Prime would put it several levels Up on the list of Top current Calculator Features, miles and miles away from competitors like the TI Nspire CX CAS and Casio ClassPad II fx-CP 400 ... Even the HP 49/50g has third party developed routines with limited Variable Precision floating point support, while such a feature is Not fully integrated into its native CAS Kernel. For those who do not see "plenty" of reason for a [ Longfloat ] Porting to HP Prime, it's needless to say that the PRIMARY reason for ANY [ CALCULATOR ] is to CALCULATE ! And besides Symbolic Computation ( already implemented on all contemporary top calculator models ), Arbitrary / [ Variable Precision ] Floating Point Arithmetic is simply The TOP of the TOP ( of the IceCream ) in [ Numerical ] Computation ! ( and besides Computer Algebra Manipulation routines, one of the Main reasons for the initial development of the major packages like Maple, Mathematica or Maxima ).

Thanks for the Link to the [ HPMuseum.org ] Page with Valuable Details about the Internal Floating Point implementations in both the Home and CAS environments of HP Prime. It's interesting to point out that the HP 49/50g has a [ Longfloat ] Version 3.93 package implementation ( with the Same Name but Distinct Code from the Giac Library ) available at [ http://www.hpcalc.org/details.php?id=5363 ]. Also it's worth mentioning the [ Wikipedia ] pages on Arbitrary Precision Arithmetic, like [ https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic ], [ https://en.wikipedia.org/wiki/List_of_arbitrary-precision_arithmetic_software ] and [ https://en.wikipedia.org/wiki/List_of_computer_algebra_systems ], and the Xcas/Giac project at [ https://en.wikipedia.org/wiki/Xcas#Giac ] with its Official Site at [ http://www-fourier.ujf-grenoble.fr/~parisse/giac.html ]. It would be a Dream come True if a Fully Integrated Variable Precision Floating Point Arithmetic package were definitively incorporated into the HP Prime CAS Kernel, like the Giac [ Longfloat ] Library, allowing the Prime to be the First calculator with such a Resource truly incorporated at its [ Kernel ] level ( and not like an optional third party module, as on the HP 49/50g, which lacks complete integration with its Kernel, since the HP 49/50g does not have native support for Longfloats ).

  • Convert Floating Point Decimal to Hex

In my application I make some calculations using the floating point format DBL, and need to write these values to a file in IEEE 754 floating point hex format. Is there any way to do this using LabVIEW?

    Mike,
    Good news. LabVIEW has a function that does exactly what you want. It is well hidden though...
In the Advanced/Data manipulation palette there is a function called Flatten to String. If you feed this function with your DBL precision value you get the IEEE 754 hexadecimal floating point representation (64 bit) at the data string terminal (as a text string).
    I attached a simple example that shows how it works.
    Hope this helps. /Mikael Garcia
    Attachments:
ieee754converter.vi (10 KB)
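The same conversion outside LabVIEW, for comparison (Java used for illustration): the raw IEEE 754 binary64 bit pattern of a DBL printed as 16 hex characters.

    public class DoubleToHex {
        public static void main(String[] args) {
            double value = -118.625;
            long bits = Double.doubleToLongBits(value);        // raw IEEE 754 binary64 pattern
            System.out.println(String.format("%016X", bits));  // C05DA80000000000
            System.out.println(Double.longBitsToDouble(bits)); // -118.625 (round-trips exactly)
        }
    }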

• Floating Point Arithmetic Error

    Hi,
I know ActionScript represents numbers as double precision
floating point values. I'm having a problem where double arithmetic
in ActionScript doesn't match the results of the same double
arithmetic in C++ / C#.
    EXAMPLE:
    In C++ / C#:
    double x, y, x1, y1;
    x = 209.4;
    y = 148.8;
    x1 = 203.0;
    y1 = 145.0;
    double ddx = x - x1;
    double ddy = y - y1;
    RESULT
    ddx: 6.4000000000000057
    ddy: 3.8000000000000114
    In Flash ActionScipt 2:
    var x, y, x1, y1;
    x = 209.4;
    y = 148.8;
    x1 = 203.0;
    y1 = 145.0;
    var ddx = x - x1;
    var ddy = y - y1;
    RESULT
    ddx: 6.39999999999992
    ddy: 3.80000000000024
After researching, it appears that Flash / ActionScript "var" stores numerical
values as doubles ( 8 bytes ), just like doubles are stored in C++ /
C# ( 8 bytes ). Why would there be a difference between the results
of ddx and ddy? Are there different implementations of double
floating point math? If so, is there a way I can mimic the Flash /
ActionScript version in C++ / C#?
    Any help would be great!
    Thanks!

    Hmmm, so you're saying the actual binary representation is
    the same but they're just displayed differently?
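As a reference point (a Java sketch, on the assumption that its double type is the same IEEE 754 binary64 used by the C++/C# code above): a strict binary64 subtraction gives the C++/C# result, and the number of digits requested when converting to text changes only the printed string, not the stored value. Whether the Flash VM actually computes the same binary value is a separate question this sketch cannot answer.

    public class DoubleDisplay {
        public static void main(String[] args) {
            double ddx = 209.4 - 203.0;
            double ddy = 148.8 - 145.0;
            System.out.printf("%.17g%n", ddx);   // 6.4000000000000057  -- matches the C++/C# output
            System.out.printf("%.17g%n", ddy);   // 3.8000000000000114  -- matches the C++/C# output
            System.out.printf("%.15g%n", ddx);   // 6.40000000000001    -- fewer digits, same stored value
            System.out.println(ddx);             // 6.400000000000006   -- Java's default shortest form
        }
    }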

  • Floating Point Representations on SPARC (64-bit architecture)

    Hi Reader,
I got hold of the "Numerical Computation Guide - 2005" by Sun while looking for floating point representations on 64-bit architectures. It gives nice illustrations of the single and double formats and the solution for endianness with two 32-bit words, but it doesn't tell me how it is for 64-bit SPARC or 64-bit x86.
I might be wrong here, but having all integers and pointers of 64-bit length, do we still need to break floating point numbers up and store them in lower / higher order addresses? Or is it as simple as having a double format whose bit pattern is consistent across all the architectures (Intel, SPARC, IBM PowerPC, AMD), with a 1 + 11 + 52 bit pattern?
I have tried hard to get hold of documentation that explains the representation of a floating point number on a 64-bit architecture. Any suggestion would be very helpful.
    Thanks for reading. Hope you have something useful to write back.
    Regards,
    Regmee

The representation of floating-point numbers is specified by IEEE standard 754. This standard contains the specifications for single-precision (32-bit) and double-precision (64-bit) floating-point numbers (there is also a quad-precision (128-bit) format). OpenSPARC T1 supports both single and double precision numbers, and can support quad-precision numbers through emulation (not in hardware). The fact that this is a 64-bit machine does not affect how the numbers are stored in memory.
The only thing that affects how the numbers are stored in memory is endianness. The SPARC architecture is big-endian, while x86 is little-endian. But a double-precision floating-point number in a SPARC register looks the same as a double-precision floating-point number in an x86 register.
    formalGuy
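A small illustration of that point (Java used for the sketch): the IEEE 754 bit pattern is identical; only the order of the 8 bytes in memory differs between big-endian and little-endian machines.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class EndianDemo {
        public static void main(String[] args) {
            double d = 1.0;   // bit pattern 0x3FF0000000000000
            byte[] bigEndian    = ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN).putDouble(d).array();
            byte[] littleEndian = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putDouble(d).array();
            print(bigEndian);     // 3F F0 00 00 00 00 00 00   (SPARC-style byte order)
            print(littleEndian);  // 00 00 00 00 00 00 F0 3F   (x86-style byte order)
        }

        static void print(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02X ", b));
            System.out.println(sb.toString().trim());
        }
    }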

• Serial Communication (CAN) of Floating Point Numbers

    Hi,
I have run into a situation where I need to use floating point numbers (IEEE floating point standard) when communicating. How can I send and receive floating point numbers? What conversions need to be made? Is there a good resource on this?
    Thanks,
    Ken

    Hi K.,
    in automotive a lot of fractional values are exchanged via CAN communication, but still the CAN protocol is based on using integer numbers (of variable bit size)…
We are thinking we need to use single SGL floats ... that require a higher resolution and precision
    What is the needed resolution and "precision"? SGL transports 23 bits of mantissa: you can easily pack them into an I32 value!
    Lets make an example:
I have a signal with a value range of 100…1000 and a resolution of 0.125. To send this over CAN I need to use 13 bits and scale the value with a gain of 0.125 and an offset of 100. Values will be sent in an integer representation:
msgdata   value
      0   100
    500   162.5
    501   162.625
   1000   225
   5000   725
   7200   1000
     The formula to translate msgdata to value is easy: value := msgdata*gain+offset.
Another example: the car I test at the moment sends its current speed as a 16-bit integer value with a range of 0…65532. The gain is 0.01, so the speed translates to 0.00…655.32 km/h. The values 65533-65535 have special meanings to indicate errors…
    So again: What is your needed resolution and data range?
    Another option: send the SGL as it is: just 4 bytes. Receive those 4 bytes on your PC as I32/U32 and typecast them to SGL…
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
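GerdW's gain/offset scheme written out as a sketch (Java for illustration; the constants are taken from his example):

    public class CanScaling {
        static final double GAIN = 0.125;
        static final double OFFSET = 100.0;

        static int encode(double physical) {    // physical value -> raw bus integer (13 bits needed here)
            return (int) Math.round((physical - OFFSET) / GAIN);
        }

        static double decode(int raw) {          // raw bus integer -> physical value
            return raw * GAIN + OFFSET;
        }

        public static void main(String[] args) {
            System.out.println(decode(500));      // 162.5
            System.out.println(decode(7200));     // 1000.0
            System.out.println(encode(162.625));  // 501
        }
    }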

  • BUG: Large floating point numbers convert to the wrong integer

    Hi,
    When using the conversion "bullets" to convert SGL, DBL and EXT to integers there are some values which convert wrong. One example is the integer 9223370937343148030, which can be represented exactly as a SGL (and thus exactly as DBL and EXT as well). If you convert this to I64 you get 9223370937343148032 instead, even though the correct integer is within the range of an I64. There are many similar cases, all (I've noticed) within the large end of the ranges.
    This has nothing to do with which integers can be represented exactly as a floating point value or not. This is a genuine conversion bug mind you.
    Cheers,
    Steen
    CLA, CTA, CLED & LabVIEW Champion
    Solved!
    Go to Solution.

Yes, I understand the implications involved, and there definitely is a limit to how many significant digits can be displayed in the numeric controls and constants today. I think that either this limit should be lifted or a cap should be put onto the configuration page when setting the display format.
    I ran into this problem as I'm developing a new toolset that lets you convert all the numeric formats into any other numeric format, just like the current "conversion bullets". My conversion bullets have outputs for overflow and exact conversion as well, since I need that functionality myself for a Math toolset (GPMath) I'm also developing. Eventually I'll maybe include underflow as well, but for now just those two outputs are available. Example:
    I do of course pay close attention to the binary representation of the numbers to calculate the Exact conversion? output correctly for each conversion variation (there are hundreds of VIs in polymorphic wrappers), but I relied in some cases on the ability of the numeric indicator to show a true number when configured appropriately - that was when I discovered this bug, which I at first mistook for a conversion error in LabVIEW.
Is there a compliance issue with EXT?
While doing this work I've discovered that the EXT format is somewhat misleadingly labelled as "80-bit IEEE compliant" (it says so here), but that statement should be read with some suspicion IMO. The LabVIEW EXT is not simply IEEE 754-1985 compliant anyway, as that format would imply the x87 80-bit extended format. An x87 IEEE 754 extended precision float has only a 63-bit fraction and a 1-bit integer part. That 1-bit integer part is implicit in single and double precision IEEE 754 numbers, but it is explicit in x87 extended precision numbers. LabVIEW EXT seems to have an implicit integer part and a 64-bit fraction, thus not straight IEEE 754 compliant. Instead I'd say that the LabVIEW EXT is an IEEE 754r extended format, but still a proprietary one that deserves a bit more detail in the available documentation. Since it's mentioned in several places in the LabVIEW documentation that the EXT is platform independent, your suspicion should already be high though. It didn't take me many minutes to verify the apparent format of the EXT in any case, so no real problem here.
    Is there a genuine conversion error from EXT to U64?
    The integer 18446744073709549568 can be represented exactly as EXT using this binary representation (mind you that the numeric indicators won't display the value correctly, but instead show 18446744073709549600):
EXT exponent (binary): 100000000111110
EXT fraction (binary): 1111111111111111111111111111111111111111111111111111000000000000
    --> Decimal: 18446744073709549568
The above EXT value converts exactly to U64 using the To Unsigned Quad Integer "bullet". But then let's try to flip the first of the trailing zero bits (highlighted in blue in the original post) from 0 to 1 in the fraction part of the EXT, making this value:
EXT exponent (binary): 100000000111110
EXT fraction (binary): 1111111111111111111111111111111111111111111111111111100000000000
    --> Decimal: 18446744073709550592
    The above EXT value is still within U64 range, but the To Unsigned Quad Integer "bullet" converts it to U64_max which is 18446744073709551615. Unless I've missed something this must be a genuine conversion error from EXT to U64?
    /Steen
    CLA, CTA, CLED & LabVIEW Champion

  • 128-bit floating point numbers on new AMD quad-core Barcelona?

    There's quite a lot of buzz over at Slashdot about the new AMD quad core chips, announced yesterday:
    http://hardware.slashdot.org/article.pl?sid=07/02/10/0554208
    Much of the excitement is over the "new vector math unit referred to as SSE128", which is integrated into each [?!?] core; Tom Yager, of Infoworld, talks about it here:
    Quad-core Opteron? Nope. Barcelona is the completely redesigned x86, and it’s brilliant
    Now here's my question - does anyone know what the inputs and the outputs of this coprocessor look like? Can it perform arithmetic [or, God forbid, trigonometric] operations [in hardware] on 128-bit quad precision floats? And, if so, will LabVIEW be adding support for it? [Compare here versus here.]
    I found a little bit of marketing-speak blather at AMD about "SSE 128" in this old PDF Powerpoint-ish presentation, from June of 2006:
    http://www.amd.com/us-en/assets/content_type/DownloadableAssets/PhilHesterAMDAnalystDayV2.pdf
    WARNING: PDF DOCUMENT
    Page 13: "Dual 128-bit SSE dataflow, Dual 128-bit loads per cycle"
    Page 14: "128-bit SSE and 128-bit Loads, 128b FADD, 128 bit FMUL, 128b SSE, 128b SSE"
    etc etc etc
    While it's largely just gibberish to me, "FADD" looks like what might be a "floating point adder", and "FMUL" could be a "floating point multiplier", and God forbid that the two "SSE" units might be capable of computing some 128-bit cosines. But I don't know whether that old paper is even applicable to the chip that was released yesterday, and I'm just guessing as to what these things might mean anyway.
    Other than that, though, AMD's main website is strangely quiet about the Barcelona announcement. [Memo to AMD marketing - if you've just released the greatest thing since sliced bread, then you need to publicize the fact that you've just released the greatest thing since sliced bread...]

    I posted a query over at the AMD forums, and here's what I was told.
    I had hoped that e.g. "128b FADD" would be able to do something like the following:
    /* "quad" is a hypothetical 128-bit quad precision  */
    /* floating point number, similar to "long double"  */
    /* in recent versions of C++:                       */
    quad x, y, z;
    x = 1.000000000000000000000000000001;
    y = 1.000000000000000000000000000001;
    /* the hope was that "128b FADD" could perform the  */
    /* following 128-bit addition in hardware:          */
    z = x + y;
    However, the answer I'm getting is that "128b FADD" is just a set of two 64-bit adders running in parallel, which are capable of adding two vectors of 64-bit doubles more or less simultaneously:
    double x[2], y[2], z[2];
    x[0] = 1.000000000000000000000000000001;
    y[0] = 1.000000000000000000000000000001;
    x[1] = 2.000000000000000000000000000222;
    y[1] = 2.000000000000000000000000000222;
    /* Apparently the coordinates of the two "vectors" x & y       */
    /* can be sent to "128b FADD" in parallel, and the following   */
    /* two summations can be computed more or less simultaneously: */
    z[0] = x[0] + y[0];
    z[1] = x[1] + y[1];
    Thus e.g. "128b FADD", working in concert with "128b FMUL", will be able to [more or less] halve the amount of time it takes to compute a dot product of vectors whose coordinates are 64-bit doubles.
    So this "128-bit" circuitry is great if you're doing lots of linear algebra with 64-bit doubles, but it doesn't appear to offer anything in the way of greater precision for people who are interested in precision-sensitive calculations.
    By the way, if you're at all interested in questions of precision sensitivity & round-off error, I'd highly recommend Prof Kahan's page at Cal-Berzerkeley:
    http://www.cs.berkeley.edu/~wkahan/
    PDF DOCUMENT: How JAVA's Floating-Point Hurts Everyone Everywhere
    http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf
    PDF DOCUMENT: Matlab's Loss is Nobody's Gain
    http://www.cs.berkeley.edu/~wkahan/MxMulEps.pdf

  • The BIG failure of the floating point computation

     The Big failure of the floating point computation .... Mmmm
    You are writing some code .. writing a simple function ... and using type Double for your computation. 
    You are then expecting to get a result from your function that is at least close to the exact value ... 
    Are you right ??
Let's see this with an example.
In my example, I will approximate the value of pi. To do so, I will inscribe a circle in a polygon, starting with a hexagon, and compute the half perimeter of the polygon. This will give me an approximation of pi.
Then, in a loop, I will double the number of sides of that polygon. Therefore, each iteration will give me a better approximation.
I will perform this twice, using the same algorithm and the same equation ... with the only difference being that I will write that equation in two different forms.
Since I don't want to throw equations and an algorithm at you without explanation, here is the idea (the original post showed the formulas as an image; they are restated here in text form):
Start with t = tan(30°) = 1/Sqrt(3), so that the half perimeter of the circumscribed hexagon is 6 * t. Each time the number of sides is doubled, t is replaced by the tangent of half the angle, which can be written in two ways:
Form 1:  t_new = (Sqrt(t^2 + 1) - 1) / t
Form 2:  t_new = t / (Sqrt(t^2 + 1) + 1)
and after n doublings the approximation is Pi ≈ 6 * 2^n * t.
Simple enough ...
It is important to understand that the two forms of the equation are mathematically always equal for a given value of "t" ... since it is in fact the exact same equation written in two different ways.
    Now let put these two equations in code
Public Class Form1

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        RichTextBox1.Font = New Font("Consolas", 9)
        RichTextBox2.Font = New Font("Consolas", 9)
        TextBox1.Font = New Font("Consolas", 12)
        TextBox1.TextAlign = HorizontalAlignment.Center
        TextBox1.Text = "3.14159265358979323846264338327..."

        Dim tt As Double
        Dim Pi As Double

        '===============================================================
        'Using First Form of the equation
        '===============================================================
        'start with an hexagon
        tt = 1 / Math.Sqrt(3)
        Pi = 6 * tt
        PrintPi(Pi, 0, RichTextBox1)
        'Now we will double the number of sides of the polygon 25 times
        For n = 1 To 25
            tt = (Math.Sqrt((tt ^ 2) + 1) - 1) / tt
            Pi = 6 * (2 ^ n) * tt
            PrintPi(Pi, n, RichTextBox1)
        Next

        '===============================================================
        'Using Second Form of the equation
        '===============================================================
        'start with an hexagon
        tt = 1 / Math.Sqrt(3)
        Pi = 6 * tt
        PrintPi(Pi, 0, RichTextBox2)
        'Now we will double the number of sides of the polygon 25 times
        For n = 1 To 25
            tt = tt / (Math.Sqrt((tt ^ 2) + 1) + 1)
            Pi = 6 * (2 ^ n) * tt
            PrintPi(Pi, n, RichTextBox2)
        Next
    End Sub

    Private Sub PrintPi(t As Double, n As Integer, RTB As RichTextBox)
        'Format the approximation with 14 decimals and append it with the polygon size
        Dim S As String = t.ToString("#.00000000000000")
        RTB.AppendText(S & " " & Format((6 * (2 ^ n)), "#,##0").PadLeft(13) & " sides polygon")
        'Count how many leading characters agree with the reference value of pi
        '(guard against running past the end of S when every character matches)
        Dim i As Integer = 0
        While i < S.Length AndAlso S(i) = TextBox1.Text(i)
            i += 1
        End While
        'Color the matching digits red
        Dim CS = RTB.GetFirstCharIndexFromLine(RTB.Lines.Count - 1)
        RTB.SelectionStart = CS
        RTB.SelectionLength = i
        RTB.SelectionColor = Color.Red
        RTB.AppendText(vbCrLf)
    End Sub

End Class
The results:
  The text box contains the real value of PI.
  The set of results on the left was obtained with the first form of the equation, the set on the right with the second form.
  The red digits show the digits that are exact for pi.
On the right, where we used the second form of the equation, we see that the result converges nicely toward Pi as the number of sides of the polygon increases.
But on the left, with the first form of the equation, we see that after just a few iterations the function stops converging and then starts diverging from the expected value.
What is wrong ... did I make an error in the first form of the equation?
Well, probably not, since this first form of the equation is the one you will find in your math book.
So, what's up here ??
The problem is this:
     What is happening is that at each iteration, when using the first form, I subtract 1 from the radical. When t is small, Sqrt(t^2 + 1) is very close to 1, so the subtraction cancels almost all of the significant digits. Since the type Double carries only a fixed number of significant digits (roughly 15-16 decimal digits), what is left after the cancellation is mostly rounding noise, and that noise is then amplified when the result is multiplied by 6 * 2^n.
  After only 25 iterations, I have accumulated such a big rounding error that even the digit on the left of the decimal point is wrong.
When using the second form of the equation, I am adding 1 to the radical, so nothing cancels and I get no loss of precision.
So, what should we learn from this?
   Well, ... we should at least remember that when using floating point to compute a formula, even a simple one, as I show here, we should always check the accuracy of the result. There are some expressions that floating point simply cannot evaluate accurately in the form in which they are written.
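For anyone who wants to reproduce this without the WinForms plumbing, here is the same recurrence in a compact form (Java; assuming its double is the same IEEE 754 binary64 as the VB.NET Double):

    public class PiConvergence {
        public static void main(String[] args) {
            double t1 = 1.0 / Math.sqrt(3.0);   // form 1: subtracts 1 from the radical (cancellation)
            double t2 = t1;                     // form 2: adds 1 to the radical (no cancellation)
            for (int n = 1; n <= 25; n++) {
                t1 = (Math.sqrt(t1 * t1 + 1.0) - 1.0) / t1;
                t2 = t2 / (Math.sqrt(t2 * t2 + 1.0) + 1.0);
                double sides = 6.0 * Math.pow(2, n);
                System.out.printf("n=%2d  form1=%.15f  form2=%.15f%n", n, sides * t1, sides * t2);
            }
            // form2 keeps converging toward pi; form1 stalls and then drifts away
            // after roughly a dozen doublings, just as in the screenshots above.
        }
    }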

    I manually (yes, manually) did the computations with calc.exe. It has a higher accuracy. PI after 25 iterations is 3.1415926535897934934541990520762.
    This means tt = 0.000000015604459512183037864437694609544  compared to 0.0000000138636291675699 computed by the code.
    Armin
Manually ...
  You did better than Archimedes.
  He only got to the 96-sided polygon and gave up. Then he said that PI was 3.1427
