Floating Point Decimal Displays

Hello,
   OK - now for a weird one.  I have a Z-attribute of 0MATERIAL that is in FLTP format.  When I go to SE16 and look at /BI0/PMATERIAL, I can't see that field because it is FLTP.  Is there any way I can see it in BW, or do I have to go down to the DB level?
   Thanks - will award points for helpful answer...
Dave Knudson

Hi Dave,
It doesn't matter that one of the attributes of 0MATERIAL is in FLTP format. You should be able to select that field for the field selection as well as in the output.
By default the field selection may not show all the attributes. Go to Settings > Fields for Selection and add the field name.
When the output is displayed, the default output width is 250 characters; change the output width to the maximum of 1023 and it will display all the columns.
Assign points if useful.
Thanks & Regards,
Namrata

Similar Messages

  • Floating point keyfigure display in infocube

    Hi ALL,
    I created a floating point key figure as a number and uploaded a flat file to the cube. The value in the flat file is 4.33, but in the InfoCube the output comes out as 4,3300000000000001E+00. Why doesn't the InfoCube show the rounded value? Please help me.
    Thanks,
    Nandish
    Edited by: nandish017 on Aug 22, 2011 11:53 AM

    Hi,
    You have to reload the data into the InfoCube:
    Delete the records from the Cube and the PSA
    --> Go to the DataSource
    --> In the Fields tab, set the data type of that field to DEC, or CURR for currency (remove the FLTP data type that causes the floating point display)
    --> Activate the DataSource
    --> Preview the data (it will show the data without the floating point notation)
    --> Trigger the InfoPackage
    --> Trigger the DTP
    Check the records and let me know the result
    Best Regards
    Obaid
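
    For background: FLTP is a binary floating point type, so a value like 4.33 has no exact binary representation, while DEC is a scaled decimal and stores it exactly. A minimal Java sketch of the same effect (Java's double uses the same IEEE 754 64-bit format as FLTP; BigDecimal here just reveals the exact stored value):
    double v = 4.33;                                      // binary float, like FLTP
    System.out.println(new java.math.BigDecimal(v));      // exact stored value, slightly above 4.33
    System.out.println(new java.math.BigDecimal("4.33")); // exact decimal, like DEC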

  • Convert Floating Point Decimal to Hex

    In my application I make some calculations using the floating point format DBL, and need to write these values to a file in IEEE 754 floating point hex format. Is there any way to do this using LabVIEW?

    Mike,
    Good news. LabVIEW has a function that does exactly what you want. It is well hidden though...
    In the Advanced / Data Manipulation palette there is a function called Flatten To String. If you feed this function your DBL precision value you get the IEEE 754 hexadecimal floating point representation (64 bit) at the data string terminal (as a text string).
    I attached a simple example that shows how it works.
    Hope this helps. /Mikael Garcia
    Attachments:
    ieee754converter.vi ‏10 KB
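
    For readers outside LabVIEW, a minimal Java sketch of the same idea, using Double.doubleToLongBits to expose the raw IEEE 754 bit pattern (the sample value is just for illustration):
    double value = 1.5;                               // any DBL value
    long bits = Double.doubleToLongBits(value);       // raw IEEE 754 (64-bit) pattern
    System.out.println(String.format("%016X", bits)); // prints 3FF8000000000000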

  • Float / Double Decimal Display

    How do you get a float or double to display a certain number of decimal places?
    For example, dealing with money, to have it be displayed as $1.30 instead of $1.3

    Use the NumberFormat class!
    A simple example (add the import; getCurrencyInstance() would also prepend the "$"):
    import java.text.NumberFormat;
    NumberFormat currency = NumberFormat.getNumberInstance();
    currency.setMinimumFractionDigits(2);
    currency.setMaximumFractionDigits(2);
    System.out.println(currency.format(1.3)); // prints 1.30

  • Floating point decimal

    Hi,
    How do I get a standard floating decimal number out of:
    double a = 0.0;
    double ans = (double)a+1.0/(397+658);
    The answer would be 9.478672985781991E-4. How do I get, say, 0.00094787 as a double?
    please help.
    many thanks

    Or use Java 5's Formatter class (or one of its related methods), or BigDecimal class (Or some combination of any of the above.)
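
    A minimal sketch of both suggestions (note the formatted result is a String; a double itself cannot hold most decimal values exactly):
    double ans = 1.0 / (397 + 658);
    System.out.println(String.format("%.8f", ans)); // "0.00094787" via the Formatter machinery
    System.out.println(new java.math.BigDecimal(ans)
            .setScale(8, java.math.RoundingMode.HALF_UP)); // 0.00094787 via BigDecimal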

  • Floating point to binary conversion

    Hi
    I need to convert a floating point decimal number to bits.
    E.g. 0.000532 should be converted to binary (bits).
    How do I do this?

    Now if I convert that decimal number to bits (in the usual method of dividing by 2), will that be the exact binary representation of the floating point decimal number?
    You have the same bit pattern in both cases. In one it's held in a double and will be interpreted as a floating point number according to the IEEE 754 representation. In the other it's held in a long and will be interpreted according to the two's complement representation. But it's the same bit pattern.
    Note that Long has a toString method which allows you to convert the long to a String. The radix in your case is 2 for binary.
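
    A minimal Java sketch of what the reply describes, reinterpreting the double's bits as a long and printing them in radix 2 (leading zero bits are not printed):
    double d = 0.000532;
    long bits = Double.doubleToLongBits(d);     // same bit pattern, now held in a long
    System.out.println(Long.toString(bits, 2)); // the IEEE 754 bits as binary text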

  • How to restrict the decimal place of a floating point number?

    Hi,
    Here is my code:
    public void TwoDecimal(double u) {
        String w = Double.toString(u);
        int c = w.length();
        System.out.println(c);
        if (c <= 5) {
            double a = Double.parseDouble(w);
            System.out.println(a);
        } else {
            System.out.println("Invalid input!");
        }
    }
    I want to show a floating point number which has 2 digits and 2 decimal places, e.g. 45.82, 29.67. This number is input by the user and passed as a parameter.
    For cases like the above sample floating point numbers, it displays the proper value of 'c', e.g. 45.67 will display 5.
    However, when I passed 99999 it showed 7, and 9999 returned 6, not 5.
    So, if the user does not input the '.', does it append 2 implicit characters, i.e. 99999.0 and 9999.0? Is that why it returned 7 and 6 for the lengths of the strings?
    How can I fix it?
    And is there a better algorithm?
    Please advise.
    gogo

    When dealing with a known precision, in your case hundredths, it is often a good idea to use an integer type and add the decimal point on printing only. This is often the case in banking systems: almost all of them use integer types (read: long) holding pennies to store monetary values. Ever seen someone type a value into a credit card machine? For something like $20 they press "2" "0" "0" "0". The machine knows the lowest denomination is a cent, so it knows where to put the decimal point. I suggest you do something like this. It also helps to avoid base 2 round-off errors.
    -Spinoza
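
    A minimal sketch of the integer-cents idea (the decimal point is added on printing only):
    long cents = 2000; // $20.00 entered as "2" "0" "0" "0"
    System.out.printf("$%d.%02d%n", cents / 100, cents % 100); // prints $20.00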

  • Floating point display mapping

    To avoid negative values I convert my images to floating point when they are full 16 bit.
    But now I have another problem: the 16 bit display mapping function does not work. If I want to display a range of pixels (e.g. grey values 40000 -> 50000) nothing happens. Is there a way to display a floating point image with a function like there is for 16 bit images (given range) without converting the actual pixels?

    This VI does only what the 16 bit display mapping function does (except that there is an error made in this VI). The included 16 bit image is not full 16 bit (min = 176, max = 4095, only 15 bits are used). In the attachment I included for you a full 16 bit image (values from 0 -> 65535); IMAQ will read this as values ranging from -32768 -> 32767.
    If you open this image with the VI you suggested you will get a completely black 8 bit image. The original image is a gray gradient from black to white.
    The error in this VI is that the cast to I32 has to be done before you use the max-min function.
    Otherwise you get 32767 - (-32768) = -1 because you have an overflow with I16.
    This VI is not a solution because we have to analyze full 16 bit images for their real gray values.
    If we convert the image to 8 bit we lose information, so a line profile or a histogram is difficult to use when the image is full 16 bit.
    I cannot understand why a signed data type is used for images. Images don't have negative values.
    This is not the first time I have mentioned this. I hope that NI understands the problem.
    Attachments:
    16bitgradient.tif ‏520 KB
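
    The I16 overflow described above is easy to reproduce outside LabVIEW; a minimal Java sketch (short is a 16-bit signed integer like I16):
    short max = 32767, min = -32768;
    short diff16 = (short) (max - min);           // wraps around to -1 in 16-bit arithmetic
    int diff32 = (int) max - (int) min;           // cast to 32 bits first: 65535, as intended
    System.out.println(diff16 + " vs " + diff32); // prints -1 vs 65535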

  • Convert Hex String to 32-bit Decimal Floating Point?

    Hi,
    I would like to know how to convert a hex value like 416b0ac3 into a decimal 32-bit floating point number. The result of this string is supposed to be 14.690127.
    So i must be able to do:
    From 32-bit Hexadecimal Representation To Decimal Floating-Point
    Thanks for your support
    RiderMerlin

    RiderMerlin
    You can use the typecast function to do this.
    David
    Message Edited by David Crawford on 09-06-2006 03:31 PM
    Attachments:
    Typecast to Single.jpg ‏6 KB
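
    A minimal Java sketch of the same typecast, using the value from the question (Float.intBitsToFloat reinterprets the 32 bits rather than converting the number):
    int bits = 0x416b0ac3;
    float f = Float.intBitsToFloat(bits); // reinterpret the bit pattern as an IEEE 754 single
    System.out.println(f);                // prints 14.690127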

  • BUG: Large floating point numbers convert to the wrong integer

    Hi,
    When using the conversion "bullets" to convert SGL, DBL and EXT to integers there are some values which convert incorrectly. One example is the integer 9223370937343148030, which can be represented exactly as a SGL (and thus exactly as DBL and EXT as well). If you convert this to I64 you get 9223370937343148032 instead, even though the correct integer is within the range of an I64. There are many similar cases, all (I've noticed) within the large end of the ranges.
    This has nothing to do with which integers can be represented exactly as a floating point value or not. This is a genuine conversion bug mind you.
    Cheers,
    Steen
    CLA, CTA, CLED & LabVIEW Champion

    Yes, I understand the implications involved, and there definitely is a limit to how many significant digits can be displayed in numeric controls and constants today. I think that either this limit should be lifted or a cap should be put on the configuration page when setting the display format.
    I ran into this problem as I'm developing a new toolset that lets you convert all the numeric formats into any other numeric format, just like the current "conversion bullets". My conversion bullets have outputs for overflow and exact conversion as well, since I need that functionality myself for a Math toolset (GPMath) I'm also developing. Eventually I'll maybe include underflow as well, but for now just those two outputs are available. Example:
    I do of course pay close attention to the binary representation of the numbers to calculate the Exact conversion? output correctly for each conversion variation (there are hundreds of VIs in polymorphic wrappers), but I relied in some cases on the ability of the numeric indicator to show a true number when configured appropriately - that was when I discovered this bug, which I at first mistook for a conversion error in LabVIEW.
    Is there a compliance issue with EXT?
    While doing this work I've discovered that the EXT format is somewhat misleadingly labelled as "80-bit IEEE compliant" (it says so here), but that statement should be read with some suspicion IMO. The LabVIEW EXT is not simply IEEE 754-1985 compliant anyway, as that would imply the x87 80-bit extended format. An x87 IEEE 754 extended precision float has only a 63-bit fraction and a 1-bit integer part. That integer part is implicit in single and double precision IEEE 754 numbers, but explicit in x87 extended precision numbers. LabVIEW EXT seems to have an implicit integer part and a 64-bit fraction, and is thus not straight IEEE 754 compliant. Instead I'd say that the LabVIEW EXT is an IEEE 754r extended format, but still a proprietary one that deserves a bit more detail in the available documentation. Since it's mentioned in several places in the LabVIEW documentation that EXT is platform independent, your suspicion should already be high. It didn't take me many minutes to verify the apparent format of the EXT in any case, so no real problem here.
    Is there a genuine conversion error from EXT to U64?
    The integer 18446744073709549568 can be represented exactly as EXT using this binary representation (mind you that the numeric indicators won't display the value correctly, but instead show 18446744073709549600):
    EXT exponent: 100000000111110b
    EXT fraction: 1111111111111111111111111111111111111111111111111111000000000000b
    --> Decimal: 18446744073709549568
    The above EXT value converts exactly to U64 using the To Unsigned Quad Integer "bullet". But then let's try to flip the first of the trailing zero bits of the fraction from 0 to 1 (highlighted in blue in the original post), making this value:
    EXT exponent: 100000000111110b
    EXT fraction: 1111111111111111111111111111111111111111111111111111100000000000b
    --> Decimal: 18446744073709550592
    The above EXT value is still within U64 range, but the To Unsigned Quad Integer "bullet" converts it to U64_max, which is 18446744073709551615. Unless I've missed something this must be a genuine conversion error from EXT to U64?
    /Steen
    CLA, CTA, CLED & LabVIEW Champion
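
    A loosely analogous effect (not the LabVIEW bug itself) can be reproduced in Java, where a double's 53-bit significand cannot hold every 63-bit integer, so a large long silently rounds on conversion; a sketch using the number from the original post:
    long i = 9223370937343148030L;
    double d = (double) i;        // rounds to the nearest representable double
    System.out.println((long) d); // prints 9223370937343148032, off by 2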

  • Floating point numbers into XML file

    Hi,
    I am a learner in LabVIEW and I am using LabVIEW version 8.5.
    When I use the Flatten To XML component to convert floating point numbers (having more than 5 decimal digits) into XML, the output always contains only 5 decimal digits.
    But I want the exact decimal number in the XML, i.e. 0.263746 should not be written as 0.26375.
    Do you have any suggestions ?
    Attachments:
    Float_to_XML.vi ‏7 KB

    I tested it and could not see your problem in LabVIEW 2009, so it is perhaps a bug in your LabVIEW version. You can use this VI as a workaround.
    Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
    (Sorry no Labview "brag list" so far)
    Attachments:
    Float_to_XML[2].vi ‏8 KB
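
    The general point, sketched in Java (the workaround VI itself is not shown in this digest): serialize the full round-trippable decimal form of the double rather than a 5-digit rendering. Double.toString produces the shortest string that parses back to the same value:
    double x = 0.263746;
    System.out.println("<value>" + x + "</value>"); // prints <value>0.263746</value>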

  • Floating point question

    I am trying to create a price attribute, so I need two places after the decimal point. How do I set that when I create a table?

    Example from Help section in Oracle SQL Developer:
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the OLAP and Data Mining options
    SQL> set serverout on
    SQL> DECLARE  -- Declare variables here.
      2    monthly_salary         NUMBER(6);  -- This is the monthly salary.
      3    number_of_days_worked  NUMBER(2);  -- This is the days in one month.
      4    pay_per_day            NUMBER(6,2); -- Calculate this value.
      5  BEGIN
      6  -- First assign values to the variables.
      7    monthly_salary := 2290;
      8    number_of_days_worked := 21;
      9
    10  -- Now calculate the value on the following line.
    11    pay_per_day := monthly_salary/number_of_days_worked;
    12
    13  -- the following displays output from the PL/SQL block
    14    DBMS_OUTPUT.PUT_LINE('The pay per day is ' || TO_CHAR(pay_per_day));
    15
    16  EXCEPTION
    17  /* This is a simple example of an exeception handler to trap division by zero.
    18     In actual practice, it would be best to check whether a variable is
    19     zero before using it as a divisor. */
    20    WHEN ZERO_DIVIDE THEN
    21        pay_per_day := 0; -- set to 0 if divisor equals 0
    22  END;
    23  /
    The pay per day is 109.05
    PL/SQL procedure successfully completed.
    SQL>
    It's not the CREATE TABLE statement you asked for, but this example answers your "floating point question": declare the column as NUMBER(precision, scale), e.g. NUMBER(6,2) as used for pay_per_day above, which keeps two places after the decimal point.
    HTH
    Message was edited by:
    Faust

  • BigDecimal vs floating points...

    Hi all,
    I know it's probably been asked a million times before, but I need to finally fully understand the two and get my head around them.
    Firstly, here are some bits I've been told by different people and read in different places (a lot of people seem to think differently, which is what confuses me):
    - I've read that if you want precision, for currency for example, floating point shouldn't be used, because it can't represent every decimal number accurately.
    - Then some people have told me that it doesn't matter, and that most of the time there's not much point in BigDecimal; all you need to do is correct the floating point with formatting.
    - I've asked about this before, but people just seem to give a short answer without actually explaining why or where they get it from; you can't just assume an answer based on nothing...
    I'm building some engineering software that has a general accuracy of 3 decimal places (millimeters from meters), and my first thought is that if currency at 2 decimal places requires BigDecimal then surely I require it too (I can't afford to be missing off mm in every calculation, and there are a lot of them!). But this has resulted in me building pretty much the whole application with BigDecimal, which as you can imagine raises concerns about performance and memory uptake: I do calculations with BigDecimal and store data in BigDecimal, and in fact the only thing I do in double is the graphical display, where the accuracy isn't so important.
    My last question: if this is an OK way to build an accurate application, it makes me wonder why floating point is used more than BigDecimal. Surely most numbers are required to be accurate in applications, especially at enterprise scale?
    Thanks,
    Ken

    MarksmanKen wrote:
    So you're a big user of BigDecimal as well then? That's good to know someone else thinks in similar ways; I was starting to feel like a bit of an idiot for using them so extensively lol
    Not at all. The idiots are the people who use primitives rather than BigDecimal "because they're faster" even though they've never actually experienced any performance problems. Of course, there are lots of cases where the speed of a primitive is preferable, but on the whole those guys know perfectly well who they are and what they're doing.
    MarksmanKen wrote:
    My program is very calculation heavy and I've not had any real performance issues yet, but I was wondering if the performance gain would be significant enough while keeping the accuracy.
    Testing will show you the way. Don't let any "we tested this calculation a million times using primitives and the same one using BigDecimal, and it showed a remarkable 3 seconds quicker using primitives" sidetrack you, either. All that matters is that your actual production code is performant enough for your application. Generally speaking, anything involving currency will probably be better using BigDecimal, or, really, a Money class which happens to use BigDecimal under the covers. Quite why enterprise-targeted languages don't have some sort of native Money or Currency class out of the box remains a mystery, to be honest.
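
    A minimal sketch of the accuracy issue driving this whole thread (why currency and millimeter-precision code tends to avoid binary doubles):
    System.out.println(0.1 + 0.2); // prints 0.30000000000000004 (binary round-off)
    java.math.BigDecimal a = new java.math.BigDecimal("0.1");
    System.out.println(a.add(new java.math.BigDecimal("0.2"))); // prints exactly 0.3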

  • Multiply floating point rounds

    Why is it that when I use the Multiply function with two floating point numbers, it rounds off the result?
    I have the format/precision set to 3 decimals on the indicator.
    The inputs are both doubles.
    Using a probe before the indicator shows that the result is rounded by the multiply function itself.
    Attachments:
    Multiply Rounding.vi ‏7 KB

    Wes_OH wrote:
    I have the Format/Precision set to 3 decimals on the indicator.
    NO!
    Your indicator is set to six digits of precision, so it shows only six significant digits, and that's exactly what you get (324975). If you set it to 3 significant digits, it would display as (325000).
    If you want to show a certain number of digits after the decimal point, you need to set the "precision type" to "Digits of Precision", not "Significant Digits", as you have it set now. Try it!
    Also, don't be misled by probes or indicators: they never "round", i.e. never change the data. The displayed precision is just cosmetic and does NOT change the underlying data carried in the wire, which is always full precision. If you want a probe with 10 decimal digits, create a custom probe.
    If you want to round, you need to do it in code.
    Message Edited by altenbach on 02-05-2007 08:37 AM
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    digits.png ‏25 KB
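
    The display-versus-data distinction works the same way in any language; a minimal Java sketch:
    double x = 0.324975468;
    System.out.printf("%.3f%n", x); // displays 0.325 (cosmetic rounding only)
    System.out.println(x);          // the variable still holds 0.324975468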

  • SQL Loader and Floating Point Numbers

    Hi
    I have a problem loading floating point numbers using SQL Loader. If the number has more than 8 significant digits, SQL Loader rounds it, i.e. 1100000.69 becomes 1100000.7. The CTL file looks as follows:
    LOAD DATA
    INFILE '../data/test.csv' "str X'0A'"
    BADFILE '../bad/test.bad'
    APPEND
    INTO TABLE test
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    (Amount CHAR)
    and the data file as follows
    "100.15 "
    "100100.57 "
    "1100000.69 "
    "-2000000.33"
    "-100000.43 "
    the table defined as follows
    CREATE TABLE test (
    Amount number(15,4)
    ) TABLESPACE NNUT050M1;
    after loading a select returns the following
    100.15
    100100.57
    1100000.7
    -2000000
    -100000.4
    Thanks in advance
    Russell

    Actually, the numbers were loaded correctly; the rounding you see is only the default display width. If you format the column to display as (say) 999,999,999.99 (in SQL*Plus: COLUMN Amount FORMAT 999,999,999.99), you will see the correct numbers loaded via SQL*Loader.
