Floating point decimal
Hi,
How do I get a standard decimal number (not scientific notation) out of:
double a = 0.0;
double ans = (double)a+1.0/(397+658);
The answer would be 9.478672985781991E-4. How do I get, say, 0.00094787 as a double?
please help.
many thanks
Or use Java 5's Formatter class (or one of its related methods), or the BigDecimal class, or some combination of the above.
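Both suggestions can be sketched briefly (the class name is just for illustration). Note that a double itself has no fixed number of decimal places; only its formatted display does:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        double ans = 1.0 / (397 + 658);           // 9.478672985781991E-4

        // Option 1: format to a fixed number of decimal places (String result)
        String s = String.format(Locale.US, "%.8f", ans);
        System.out.println(s);                     // 0.00094787

        // Option 2: round with BigDecimal, then go back to double if needed
        double rounded = BigDecimal.valueOf(ans)
                .setScale(8, RoundingMode.HALF_UP)
                .doubleValue();
        System.out.println(rounded);               // prints in E-notation again
    }
}
```

Option 2 shows why formatting, not conversion, is usually the answer: as soon as the rounded value goes back into a double, Double.toString is free to print it in scientific notation again.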
Similar Messages
-
Convert Floating Point Decimal to Hex
In my application I make some calculations using the floating point format DBL, and need to write these values to a file in IEEE 754 floating point hex format. Is there any way to do this using LabVIEW?
Mike,
Good news. LabVIEW has a function that does exactly what you want. It is well hidden though...
In the Advanced/Data Manipulation palette there is a function called Flatten To String. If you feed this function with your DBL precision value you get the IEEE-754 hexadecimal floating point representation (64-bit) at the data string terminal (as a text string).
I attached a simple example that shows how it works.
Hope this helps. /Mikael Garcia
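For readers doing the same thing in Java rather than LabVIEW, a minimal sketch (class name is illustrative): Double.doubleToLongBits exposes the same IEEE-754 (binary64) bit pattern, which can then be printed as hex:

```java
public class Ieee754Hex {
    public static void main(String[] args) {
        double value = 14.690127;
        long bits = Double.doubleToLongBits(value);  // raw IEEE-754 binary64 bits
        System.out.printf("%016X%n", bits);          // 16 hex digits, zero-padded
    }
}
```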
Attachments:
ieee754converter.vi 10 KB -
Floating Point Decimal Displays
Hello,
OK - now for a weird one. I have a Z-attribute of 0MATERIAL that is in FLTP format. When I go to SE16 and look at /BI0/PMATERIAL I can't see that field because it is FLTP. Is there any way that I can see it in BW, or do I have to go down to the DB level?
Thanks - will award points for helpful answer...
Dave Knudson
Hi Dave,
It doesn't matter that one of the attributes of ZMATERIAL is in FLTP format. You should be able to select that field for field selection as well as in the output.
By default the field selection may not show all the attributes. Go to Settings > Fields for Selection and add the field name.
The output width defaults to 250 characters; change it to the maximum of 1023 and it will display all the characters.
Assign points if useful.
Thanks & Regards,
Namrata -
Floating point to binary conversion
Hi
I need to convert a floating point decimal number to bits.
Eg. 0.000532 to be converted to binary(bits).
How do I do this?
Now if I convert that decimal number to bits (by the usual method of dividing by 2), will that be the exact binary representation of the floating point decimal number?
You have the same bit pattern in both cases. In one it's held in a double and will be interpreted as a floating point number according to the IEEE 754 representation. In the other it's held in a long and will be interpreted according to the two's complement representation. But it's the same bit pattern.
Note that Long has a toString method which allows you to convert the long to a String. The radix in your case is 2 for binary. -
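The reply's recipe (doubleToLongBits plus Long.toString with radix 2) looks like this as a minimal Java sketch (class name is illustrative):

```java
public class BitsDemo {
    public static void main(String[] args) {
        double d = 0.000532;
        long bits = Double.doubleToLongBits(d);  // raw IEEE-754 binary64 bits

        // Radix 2 gives the binary digits; leading zeros are dropped,
        // so pad to the full 64 bits for a faithful representation
        String binary = String.format("%64s", Long.toString(bits, 2)).replace(' ', '0');
        System.out.println(binary);
    }
}
```

Note this prints the IEEE-754 bit pattern of the double, which is not what repeatedly dividing the decimal value by 2 would give you.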
How to restrict the decimal place of a floating point number?
Hi,
Here is my code:
public void TwoDecimal(double u) {
    String w = Double.toString(u);
    int c = w.length();
    System.out.println(c);
    if (c <= 5) {
        double a = Double.parseDouble(w);
        System.out.println(a);
    } else {
        System.out.println("Invalid input!");
    }
}
I want to show a floating point number which has 2 digits and 2 decimal places, e.g. 45.82 or 29.67. This number is input by the user and passed as a parameter.
For cases like the sample floating point numbers above, it displays the proper value of c, e.g. 45.67 displays 5.
However, when I pass 99999 it shows 7, and 9999 returns 6, not 5.
So if the user does not type the '.', does it append two implicit characters, i.e. 99999.0 and 9999.0? That would explain why it returned 7 and 6 for the string lengths.
How can I fix it?
and
Is there a better algorithm?
Please advise.
gogo
When dealing with a known precision, in your case hundredths, it is often a good idea to use an integer type and add the decimal point only when printing. This is often done in banking systems: almost all of them use integer types (read: long) holding pennies to store monetary values. Ever seen someone type a value into a credit card machine? For something like $20 they press "2" "0" "0" "0". The machine knows the lowest denomination is a cent, so it knows where to put the decimal point. I suggest you do something like this. It also helps you avoid base-2 round-off errors.
-Spinoza -
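Spinoza's cents-as-long suggestion from the thread above, as a minimal sketch (class name and amounts are just for illustration):

```java
public class Money {
    public static void main(String[] args) {
        // Store money in the smallest unit (cents) as a long,
        // and add the decimal point only when printing.
        long cents = 2000;          // user typed "2" "0" "0" "0" -> $20.00
        cents += 1999;              // add $19.99 with exact integer math

        // Format for display: dollars, then zero-padded cents
        System.out.printf("$%d.%02d%n", cents / 100, cents % 100);  // $39.99
    }
}
```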
Converting a hex string to a 32-bit decimal floating point??
Hi,
I would like to know how to convert a hex value like 416b0ac3 into a decimal 32-bit floating point number. The result for this string is supposed to be 14.690127.
So i must be able to do:
From 32-bit Hexadecimal Representation To Decimal Floating-Point
Thanks for your support
RiderMerlin
RiderMerlin,
You can use the typecast function to do this.
David
Message Edited by David Crawford on 09-06-2006 03:31 PM
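In Java the equivalent of the LabVIEW typecast is Float.intBitsToFloat; a minimal sketch (class name is illustrative):

```java
public class HexToFloat {
    public static void main(String[] args) {
        // Parse the hex string into the raw 32-bit pattern
        int bits = (int) Long.parseLong("416b0ac3", 16);

        // Reinterpret the bits as an IEEE-754 binary32 float
        float value = Float.intBitsToFloat(bits);
        System.out.println(value);   // approximately 14.690127
    }
}
```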
Attachments:
Typecast to Single.jpg 6 KB -
Hi everybody,
This line:
System.out.println((0.1+0.7)*10);
outputs 7.999999999999999
This is due to how floating point numbers are stored. When writing code, sometimes it behaves in the intended way and sometimes it doesn't (like the line above). Is there a way to "predict" when the code is OK and when it isn't? Are there any tips to be aware of to get around that kind of problem?
Cheers,
Adrian
No. Using BigDecimal just because you don't understand how floating-point numbers work would be... um... short-sighted. And it wouldn't help, either. As soon as you divide 1 by 3 you have to know how decimal numbers work, which is essentially the same problem.
Edit: I forgot the forum hasn't been automated to provide the mandatory link for people who ask this question. We still have to do it by hand.
http://docs.sun.com/source/806-3568/ncg_goldberg.html
Edited by: DrClap on Oct 11, 2007 3:02 PM -
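One way to see what is going on, without doing any BigDecimal arithmetic: BigDecimal's double constructor displays the exact binary value a double holds. A small sketch (class name is illustrative):

```java
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact value stored in the double
        System.out.println(new BigDecimal(0.1));  // slightly above 0.1
        System.out.println(new BigDecimal(0.7));  // slightly below 0.7

        // Their sum lands slightly below 0.8, so multiplying by 10
        // gives a result just under 8.0
        System.out.println((0.1 + 0.7) * 10);     // 7.999999999999999
    }
}
```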
Floating point precision of "Flatten to XML"
It appears that the "Flatten to XML" function (LV 7.1.1) truncates floating point numbers to 5 decimal places. This is a rather annoying limitation, since I'm trying to store a relative time in hours, accurate to the second (chosen by a previous coder that I have to be compatible with - otherwise I'd just use seconds). Is there a workaround to this? (other than multiplying by some power of 10 before flattening, and dividing after unflattening)
JaegenHi Paul and Jaegen,
I checked our databases and found entries of product suggestions and
corrective action requests for the behavior of the limited precision
when flattening to XML. I found an interesting reply from a LabVIEW
developer on the request for further precision:
The Flatten To XML primitive purposefully cuts off all numbers at 5 digits after the decimal. There are 3 main reasons for this:
1. Information regarding precision is not propagated on the wire. Therefore, there is no real way to know how many significant digits or even places past the decimal point are appropriate when data is flattened to XML.
2. Bloat. If all floating point values printed all of the possible decimal digits all of the time, this would produce some very large blocks of XML code.
3. Given the arbitrarily complex nature of LabVIEW data, it is difficult to provide a method for specifying precision. For example, if a user has a cluster of clusters, each of which contains a single, double and extended representing various measurements of differing accuracy, how can one precision setting be applied to each of these values? The user would have to unbundle (and index if an array was involved), flatten, concatenate, and then do the reverse on the unflatten side.
I suggest that you go ahead and file a new product suggestion by using the "feedback" link on www.ni.com/contact.
It would be best if you could give some detailed information on how you
would like LabVIEW to handle different scenarios while getting around
the above issues.
Thanks for the feedback!
- Philip Courtois, Thinkbot Solutions -
I understand that floating point numbers are represented in Java as binary numbers, and that there are decimal values that cannot be exactly represented in binary. One of them, I think, is 0.1. But why is it that 0.1 can be printed exactly, while the same value coming out of a computation cannot?
i have this program..
public class CTest {
    public static void main(String[] args) {
        double d = 1.0;
        double d2 = 0.9;
        double d3 = 0.1;
        System.out.println("" + (d - d2));
        System.out.println("" + d3);
    }
}
Please help.
Thanks
I mean this one:
class D {
    public static void main(String[] args) {
        double d = 0.1;
        double d2 = 1.0 - 0.9;
        System.out.println("d: " + d);
        System.out.println("d2: " + d2);
    }
}
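One way to see why the two prints differ: Double.toHexString shows the exact bit patterns, and 1.0 - 0.9 does not produce the same double as the literal 0.1. A minimal sketch (class name is illustrative):

```java
public class Ulp {
    public static void main(String[] args) {
        double d = 0.1;          // nearest double to 0.1 (slightly above it)
        double d2 = 1.0 - 0.9;   // an exact subtraction of two inexact operands

        System.out.println(Double.toHexString(d));   // 0x1.999999999999ap-4
        System.out.println(Double.toHexString(d2));  // 0x1.9999999999998p-4
        System.out.println(d == d2);                 // false: different bits
    }
}
```

The rounding error in 0.9 propagates through the subtraction, so d2 ends up a couple of ULPs below the double nearest to 0.1.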
Precision loss - conversions between exact values and floating point values
Hi!
I read this in your SQL Reference manual, but I don't quite get it.
Conversions between exact numeric values (TT_TINYINT, TT_SMALLINT, TT_INTEGER, TT_BIGINT, NUMBER) and floating-point values (BINARY_FLOAT, BINARY_DOUBLE) can be inexact because the exact numeric values use decimal precision whereas the floating-point numbers use binary precision.
Could you please give two examples: one where a TT_TINYINT is converted to a BINARY_DOUBLE and one where a TT_BIGINT is converted into a DOUBLE, both showing lost precision? That would be very helpful.
Thanks!
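The manual's point can be illustrated outside SQL. A hedged Java sketch, using byte and long for exact integers (roughly analogous to TT_TINYINT and TT_BIGINT) and double for a binary64 float (analogous to BINARY_DOUBLE); the mapping to TimesTen types is my assumption:

```java
public class PrecisionLoss {
    public static void main(String[] args) {
        // A small integer (TT_TINYINT range) always survives the round trip:
        byte tiny = 100;
        System.out.println((double) tiny == 100.0);   // true, exact

        // A 64-bit integer can exceed the 53-bit significand of a double:
        long big = 9007199254740993L;                 // 2^53 + 1
        double asDouble = (double) big;               // rounds to 2^53
        System.out.println(big == (long) asDouble);   // false, precision lost
    }
}
```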
Sune
chokpa wrote:
Public Example (float... values){}
new Example (1, 1e2, 3.0, 4.754);
It accepts it if I just use 1, 2, 3, 4 as the values being passed in, but doesn't like it if I use actual float values.
Those are double literals; try
new Example (1f, 1e2f, 3.0f, 4.754f);
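A compilable sketch of the fix (the field and main method are added for illustration): with float varargs, int arguments widen to float, but double literals such as 3.0 need the f suffix:

```java
public class Example {
    private final float[] values;

    // float varargs: each argument must be a float. Ints like 1 widen
    // implicitly, but double literals like 3.0 or 4.754 do not narrow.
    public Example(float... values) {
        this.values = values;
    }

    public static void main(String[] args) {
        Example ok = new Example(1f, 1e2f, 3.0f, 4.754f);  // compiles
        // new Example(1, 1e2, 3.0, 4.754);  // error: 1e2, 3.0, 4.754 are doubles
        System.out.println(ok.values.length);
    }
}
```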
How does Java store floating point numbers?
Hello
I'm writing a paper about floating point numbers in which I compare an IEEE-754 compatible language [c] with Java. I read that Java can do a conversion decimal->binary->decimal and retain the same value whereas c can't. I found several documents discussing the pros and cons of that but I can't find any information about how it is implemented.
I hope someone can explain it to me, or post a link to a site explaining it.
Cheers
Huttu
So it is a myth.
I still ask because I observed an oddity: when I store 1.4 in C and printf("%2.20f\n", a); it, I get 1.39999999999999991118. If I do the same in Java with System.out.printf("%2.20f%n", a); I get 1.4. If I multiply the variable by itself I get 1.95999999999999970000:
double a = 1.4;
a = a * a;
System.out.printf("%2.20f%n", a);
Does this happen because of the rounding in Java? -
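What differs between the two programs is the formatter, not the stored value: C and Java hold exactly the same binary64 bits for 1.4. A sketch that prints the exact stored value (BigDecimal's double constructor is exact; class name is illustrative). How %f renders it, the full expansion or a short round-trip form padded with zeros, depends on the runtime:

```java
import java.math.BigDecimal;

public class StoredValue {
    public static void main(String[] args) {
        double a = 1.4;
        // The exact binary64 value behind the literal 1.4 -- same bits in C:
        System.out.println(new BigDecimal(a));
        // 1.399999999999999911182158029987476766109466552734375

        a = a * a;
        System.out.println(a);   // shortest round-trip form of the product
    }
}
```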
BUG: Large floating point numbers convert to the wrong integer
Hi,
When using the conversion "bullets" to convert SGL, DBL and EXT to integers there are some values which convert wrong. One example is the integer 9223370937343148030, which can be represented exactly as a SGL (and thus exactly as DBL and EXT as well). If you convert this to I64 you get 9223370937343148032 instead, even though the correct integer is within the range of an I64. There are many similar cases, all (I've noticed) within the large end of the ranges.
This has nothing to do with which integers can be represented exactly as a floating point value or not. This is a genuine conversion bug mind you.
Cheers,
Steen
CLA, CTA, CLED & LabVIEW Champion
Solved!
Go to Solution.
Yes, I understand the implications involved, and there definitely is a limit to how many significant digits can be displayed in the numeric controls and constants today. I think that either this limit should be lifted or a cap should be put on the configuration page when setting the display format.
I ran into this problem as I'm developing a new toolset that lets you convert all the numeric formats into any other numeric format, just like the current "conversion bullets". My conversion bullets have outputs for overflow and exact conversion as well, since I need that functionality myself for a Math toolset (GPMath) I'm also developing. Eventually I'll maybe include underflow as well, but for now just those two outputs are available. Example:
I do of course pay close attention to the binary representation of the numbers to calculate the Exact conversion? output correctly for each conversion variation (there are hundreds of VIs in polymorphic wrappers), but I relied in some cases on the ability of the numeric indicator to show a true number when configured appropriately - that was when I discovered this bug, which I at first mistook for a conversion error in LabVIEW.
Is there a compliance issue with EXT?
While doing this work I've discovered that the EXT format is somewhat misleadingly labelled as "80-bit IEEE compliant" (it says so here), but that statement should be read with some suspicion IMO. The LabVIEW EXT is not simply IEEE 754-1985 compliant anyways, as that format would imply the x87 80-bit extended format. An x87 IEEE 754 extended precision float only has 63-bit fraction and a 1-bit integer part. That 1-bit integer part is implicit in single and double precision IEEE 754 numbers, but it is explicit in x87 extended precision numbers. LabVIEW EXT seems to have an implicit integer part and 64-bit fraction, thus not straight IEEE 754 compliant. Instead I'd say that the LabVIEW EXT is an IEEE 754r extended format, but still a proprietary one that should deserve a bit more detail in the available documentation. Since it's mentioned several places in the LabVIEW documentation that the EXT is platform independent, your suspicion should already be high though. It didn't take me many minutes to verify the apparent format of the EXT in any case, so no real problem here.
Is there a genuine conversion error from EXT to U64?
The integer 18446744073709549568 can be represented exactly as EXT using this binary representation (mind you that the numeric indicators won't display the value correctly, but instead show 18446744073709549600):
EXT-exponent: 100000000111110b
EXT-fraction: 1111111111111111111111111111111111111111111111111111000000000000b
--> Decimal: 18446744073709549568
The above EXT value converts exactly to U64 using the To Unsigned Quad Integer "bullet". But then let's try to flip one fraction bit (the 53rd, where the two patterns below differ) from 0 to 1, making this value:
EXT-exponent: 100000000111110b
EXT-fraction: 1111111111111111111111111111111111111111111111111111100000000000b
--> Decimal: 18446744073709550592
The above EXT value is still within U64 range, but the To Unsigned Quad Integer "bullet" converts it to U64_max which is 18446744073709551615. Unless I've missed something this must be a genuine conversion error from EXT to U64?
/Steen
CLA, CTA, CLED & LabVIEW Champion -
Separator for floating point numbers
Hello,
I work with oracle 9 and have a problem with the entry of floating point numbers.
The separator for floating point numbers in my data is a point (5.60).
The default setting in Oracle is a comma (5,60).
When inserting I get the error message:
01722. 00000 - "invalid number"
How can I change this setting?
Thanks for the help
F.
Hi,
I'm not sure I understood your problem; however, the NLS_NUMERIC_CHARACTERS parameter specifies the characters to use as the group separator and decimal character.
SQL> create table t1 (val number);
Table created.
SQL> select * from nls_session_parameters;
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY BRAZIL
NLS_CURRENCY R$
NLS_ISO_CURRENCY BRAZIL
NLS_NUMERIC_CHARACTERS ,.
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD/MM/YYYY
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH24:MI:SSXFF
NLS_TIMESTAMP_FORMAT DD/MM/RR HH24:MI:SSXFF
NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH24:MI:SSXFF TZR
NLS_DUAL_CURRENCY Cr$
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
17 rows selected.
SQL> insert into t1 values (1.50);
1 row created.
SQL> select * from t1;
VAL
1,5
SQL> alter session set nls_numeric_characters='.,';
Session altered.
SQL> select * from t1;
VAL
1.5
Cheers
Legatti -
Single Precision Floating Point Numbers to Bytes
OK, here is some code that I wrote a while back with some help from the support staff. It is designed to take in single precision floating point numbers that are stored as 4 bytes and convert them to a decimal value. It works off of a UDP input string and then also reformats the string. I have the ability to look at up to 4000 parameters from this one UDP string. But now what I want to do is the opposite of what I have written, and also perhaps get rid of the MATLAB I used in it as well. What I would like to be able to do is input a decimal value, have it converted into the 4-byte grouping that makes up this decimal, and then have it put back into a single long string with that grouping of bytes in the right order. A better explanation of what was done can be found on this website:
http://www.jefflewis.net/XPlaneUDP_8.html
as the original code followed the "Single Precision Floating Point Numbers and Bytes" example on that site, but what I want to do is "Going from Single Precision Floating Point Numbers to Bytes". The site also explains the UDP string that is being represented. Also attached is the original code that I am trying to reverse.
Attachments:
x-plane_udp_master.vi 34 KB
Perhaps what you are doing is an exercise in programming the math conversion of the bytes.
But if you are just interested in getting the conversion done, why not use the typecast function?
If the bytes happen to be in the wrong order for wherever you need to send the string, then you can use string functions to rearrange them.
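For anyone doing this byte-level conversion outside LabVIEW, here is a hedged Java sketch of the float-to-bytes direction using ByteBuffer (class name is illustrative, and big-endian is an assumption; the byte order X-Plane actually expects is defined on the linked page):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class FloatBytes {
    public static void main(String[] args) {
        float value = 14.690127f;

        // Float -> 4 bytes (choose the byte order the UDP peer expects)
        byte[] bytes = ByteBuffer.allocate(4)
                .order(ByteOrder.BIG_ENDIAN)
                .putFloat(value)
                .array();
        System.out.println(Arrays.toString(bytes));

        // And back again: 4 bytes -> float, the original direction
        float back = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN).getFloat();
        System.out.println(back == value);   // true: the round trip is exact
    }
}
```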
Message Edited by Ravens Fan on 10-02-2007 08:50 PM
Attachments:
Example_BD.png 3 KB -
Floating point numbers into XML file
Hi,
I am a beginner in LabVIEW, using version 8.5.
When I use the Flatten to XML function to convert floating point numbers (having more than 5 decimal places) into XML, the output always contains only 5 decimal places.
But I want the exact decimal number to appear in the XML, i.e. 0.263746 should not be displayed as 0.26375.
Do you have any suggestions ?
Attachments:
Float_to_XML.vi 7 KB
I tested it and could not see your problem in LabVIEW 2009. So it is perhaps a bug in your LabVIEW version. You can use this VI as a workaround.
Besides which, my opinion is that Express VIs, like Carthage, must be deleted.
(Sorry no Labview "brag list" so far)
Attachments:
Float_to_XML[2].vi 8 KB