Problem with Transall Oracle: getting rounded floating point numbers

I use Transall Oracle and I have a problem with this function:
Format$(1234.56,"#,##0;(#,##0)")
The Transall documentation says that the result should be 1,235 or, in the case of a
negative number, (1,235).
That is not what I get: Transall does not round, and produces 1,234 instead.
Is this a bug? I need to work with rounded numbers. Is there another option?

Hi,
I am unable to reproduce the problem that you are describing. Perhaps it was fixed in a newer release of Transall than the one you have. For instance, when I compile and run the code:
Public Sub Run()
    Dim Result As String
    Result = Format$(1234.56, "#,##0;(#,##0)")
    WriteConStdOut("Result is " & Result & CRLF)
End Sub
the screen shows:
Result is 1,235
If you are only interested in obtaining a rounded value, the Transall documentation lists three different rounding functions: Round(), Round45(), and Round45A(), with Round45() rounding in the manner that most of us commonly think of as rounding.
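If it helps to see the two behaviours spelled out outside of Transall, here is a small Java sketch (purely illustrative, not Transall code) of the difference between truncating and the "round half up" behaviour that Round45() is described as providing:
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingDemo {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("1234.56");
        // Truncation simply drops the fractional part: prints 1234
        System.out.println(value.setScale(0, RoundingMode.DOWN));
        // "Round half up" is what most people mean by rounding: prints 1235
        System.out.println(value.setScale(0, RoundingMode.HALF_UP));
    }
}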

Similar Messages

  • Separator for floating point numbers

    Hello,
    I work with Oracle 9 and have a problem with the entry of floating point numbers.
    The separator for floating point numbers in my data is a point (5.60).
    The default setting in Oracle is a comma (5,60).
    When inserting I get the error message:
    01722. 00000 - "invalid number"
    How can I change this setting?
    Thanks for Help
    F.

    Hi,
    I'm not sure if I understood your problem, however the NLS_NUMERIC_CHARACTERS variable specifies the characters to use as the group separator and decimal character.
    SQL> create table t1 (val number);
    Table created.
    SQL> select * from nls_session_parameters;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  BRAZIL
    NLS_CURRENCY                   R$
    NLS_ISO_CURRENCY               BRAZIL
    NLS_NUMERIC_CHARACTERS         ,.
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD/MM/YYYY
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT           DD/MM/RR HH24:MI:SSXFF
    NLS_TIME_TZ_FORMAT             HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD/MM/RR HH24:MI:SSXFF TZR
    NLS_DUAL_CURRENCY              Cr$
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    17 rows selected.
    SQL> insert into t1 values (1.50);
    1 row created.
    SQL> select * from t1;
           VAL
           1,5
    SQL> alter session set nls_numeric_characters='.,';
    Session altered.
    SQL> select * from t1;
           VAL
           1.5
    Cheers
    Legatti
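    If the inserts come from application code over JDBC, a bind variable sidesteps the separator question entirely, and the session setting can also be changed programmatically. A minimal sketch (connection URL and credentials are placeholders, table t1 as created above):
    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class NlsInsertDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials - adjust for your environment
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {

                // Option 1: set the session's separators explicitly
                try (Statement stmt = conn.createStatement()) {
                    stmt.execute("ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,'");
                }

                // Option 2: bind the value instead of embedding "5.60" in the SQL text,
                // so no string-to-number conversion depends on NLS settings at all
                try (PreparedStatement ps =
                             conn.prepareStatement("INSERT INTO t1 (val) VALUES (?)")) {
                    ps.setBigDecimal(1, new BigDecimal("5.60"));
                    ps.executeUpdate();
                }
            }
        }
    }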

  • SQL Loader and Floating Point Numbers

    Hi
    I have a problem loading floating point numbers using SQL Loader. If the number has more than 8 significant digits SQL Loader rounds the number i.e. 1100000.69 becomes 1100000.7. The CTL file looks as follows
    LOAD DATA
    INFILE '../data/test.csv' "str X'0A'"
    BADFILE '../bad/test.bad'
    APPEND
    INTO TABLE test
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    (
      Amount CHAR
    )
    and the data file as follows
    "100.15 "
    "100100.57 "
    "1100000.69 "
    "-2000000.33"
    "-100000.43 "
    the table defined as follows
    CREATE TABLE test (
      Amount number(15,4)
    ) TABLESPACE NNUT050M1;
    after loading a select returns the following
    100.15
    100100.57
    1100000.7
    -2000000
    -100000.4
    Thanks in advance
    Russell

    Actually, if you format the column to display as (say) 999,999,999.9999, you will see that the correct numbers were loaded by SQL Loader; the rounding you are seeing is only SQL*Plus's default display width, not the stored value.
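    If you want to convince yourself that the full precision really is stored, a rough JDBC sketch (connection details are placeholders) that reads the column back as BigDecimal:
    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckAmounts {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - adjust for your environment
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT amount FROM test")) {
                while (rs.next()) {
                    // BigDecimal preserves every digit stored in NUMBER(15,4)
                    BigDecimal amount = rs.getBigDecimal(1);
                    System.out.println(amount.toPlainString()); // e.g. 1100000.69
                }
            }
        }
    }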

  • How does Java store floating point numbers?

    Hello
    I'm writing a paper about floating point numbers in which I compare an IEEE-754 compatible language [c] with Java. I read that Java can do a conversion decimal->binary->decimal and retain the same value whereas c can't. I found several documents discussing the pros and cons of that but I can't find any information about how it is implemented.
    I hope someone can explain it to me, or post a link to a site explaining it.
    Cheers
    Huttu

    So it is a myth.
    I still ask because I observed an oddity: when I store 1.4 in C and printf("%2.20f\n", a); it, I get 1.39999999999999991118. If I do the same in Java with System.out.printf("%2.20f\n", a); I get 1.4. If I multiply the variable by itself I get 1.95999999999999970000:
    double a = 1.4;
    a = a * a;
    System.out.printf("%2.20f\n", a);
    Does this happen because of the rounding in Java?
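    As far as I understand it, both languages store 1.4 as the same IEEE-754 double; they just print it differently. Java's println uses Double.toString(), which prints the shortest decimal string that maps back to the same double, while new BigDecimal(double) reveals the exact stored value. A small sketch:
    import java.math.BigDecimal;

    public class PrintDemo {
        public static void main(String[] args) {
            double a = 1.4;
            System.out.println(a);                 // "1.4": shortest decimal that round-trips to this double
            System.out.println(new BigDecimal(a)); // the exact stored value: 1.3999999999999999111821580...

            double b = a * a;                      // the tiny error in a is squared as well
            System.out.println(b);                 // slightly below 1.96
            System.out.println(new BigDecimal(b)); // exact stored value of the product
        }
    }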

  • Single Precision Floating Point Numbers to Bytes

    OK, here is some code that I wrote a while back with some help from the support staff. It is designed to take in single-precision floating point numbers that are stored as 4 bytes and convert them to a decimal value. It works off of a UDP input string and then also reformats the string. I have the ability to look at up to 4000 parameters from this one UDP string. But now what I want to do is the opposite of what I have written, and perhaps also get rid of the MATLAB I used in it. What I would like to be able to do is input a decimal value, have it converted into the 4-byte grouping that makes up that decimal, and then have it put back into a single long string with that grouping of bytes in the right order. A better explanation of what was done can be found on this website:
    http://www.jefflewis.net/XPlaneUDP_8.html
    as the original code followed the "Single Precision Floating Point Numbers and Bytes" example on that site, but what I want to do is "Going from Single Precision Floating Point Numbers to Bytes". The site also explains the UDP string that is being represented. Also attached is the original code that I am trying to simply reverse.
    Attachments:
    x-plane_udp_master.vi ‏34 KB

    Perhaps what you are doing is an exercise in the programming of the math conversion of the bytes.
    But if you are just interested in getting the conversion done, why not use the typecast function?
    If the bytes happen to be in the wrong order for wherever you need to send the string, then you can use string functions to rearrange them.
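    (Not LabVIEW, but for comparison, the same typecast idea expressed in Java: ByteBuffer converts a single-precision float to its 4 raw bytes and back, and the byte-order constant is where you would compensate for a peer that expects the other order.)
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.util.Arrays;

    public class FloatBytesDemo {
        public static void main(String[] args) {
            float value = 123.456f;

            // Float -> 4 bytes (pick the byte order your UDP peer expects)
            byte[] bytes = ByteBuffer.allocate(4)
                                     .order(ByteOrder.LITTLE_ENDIAN)
                                     .putFloat(value)
                                     .array();
            System.out.println(Arrays.toString(bytes));

            // 4 bytes -> float (the reverse "typecast")
            float back = ByteBuffer.wrap(bytes)
                                   .order(ByteOrder.LITTLE_ENDIAN)
                                   .getFloat();
            System.out.println(back); // 123.456
        }
    }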
    Message Edited by Ravens Fan on 10-02-2007 08:50 PM
    Attachments:
    Example_BD.png ‏3 KB

  • Floating point numbers into XML file

    Hi,
    I am a learner in LabVIEW and I am using LabVIEW version 8.5.
    When I use the Flatten To XML component to convert floating point numbers (having more than 5 decimal places) into XML, the output always contains only 5 decimal places.
    But I want the exact decimal number to be written to the XML, i.e. 0.263746 should not appear as 0.26375 in the XML.
    Do you have any suggestions?
    Attachments:
    Float_to_XML.vi ‏7 KB

    I tested it and could not see your problem in LabVIEW 2009, so it is perhaps a bug in your LabVIEW version. You can use this VI as a workaround.
    Besides which, my opinion is that Express VIs, like Carthage, must be destroyed (deleted).
    (Sorry, no LabVIEW "brag list" so far)
    Attachments:
    Float_to_XML[2].vi ‏8 KB

  • IEEE 754 standards for representing floating point numbers

    Hi all,
    Most of us are not aware of how floating point numbers are actually represented. IEEE has set standards for how floating point numbers should be represented.
    Here is a link where you can see how they are actually represented:
    http://en.wikipedia.org/wiki/IEEE_754
    If you have any doubts you can always reach me.
    Bye
    Happy learning
    [email protected]
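    To make the representation concrete, here is a small Java sketch (illustrative only, not taken from the linked page) that splits a float into the sign, exponent, and fraction fields defined by IEEE 754:
    public class Ieee754Demo {
        public static void main(String[] args) {
            float value = -6.25f;
            int bits = Float.floatToIntBits(value);

            int sign     = (bits >>> 31) & 0x1;   // 1 bit
            int exponent = (bits >>> 23) & 0xFF;  // 8 bits, biased by 127
            int fraction = bits & 0x7FFFFF;       // 23 bits

            System.out.printf("bits     = %32s%n", Integer.toBinaryString(bits));
            System.out.println("sign     = " + sign);
            System.out.println("exponent = " + exponent + " (unbiased " + (exponent - 127) + ")");
            System.out.println("fraction = 0x" + Integer.toHexString(fraction));
        }
    }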

    A noble but misguided attempt at dispelling the problems programmers run into over and over again. There have been repeated posts linking to the IEEE standard, to little or no avail. The newbies who run into these problems will continue to do so, regardless of yet another post about it here.

  • How to gather a set of floating point numbers from a web page form?

    I am pretty new to Java and I am working on a project that applies Benford's law to find the probability of digits submitted by the user. My first step is to gather a set of floating point numbers from a web page. How do I go about doing this? Any suggestion, or a pointer to a site where I can learn this, will be highly appreciated.

    I am using NetBeans IDE 5.5.1 for this project. I have realized that the first question was not well phrased.
    I created a web project with 2 JSP files and a class file in it. When my JSP file runs, I ask the user to enter a number for finding the probability based on Benford's law.
    This is what I got so far:
    This is input.jsp
    <h1></h1>
    Please enter a number to be checked
    <form action="result.jsp" method="post">
    <input type="number" name="number" >
    <input type="submit" value="Check number">
    </form>
    </body>
    </html>
    This is result.jsp
    String number=request.getParameter("number");
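    From there, result.jsp (or better, a servlet or helper class) can parse the parameter and extract the leading digit needed for Benford's law. A rough sketch of that server-side logic, assuming the parameter name "number" from input.jsp:
    public class BenfordHelper {
        /** Returns the leading (most significant) decimal digit of a value, or -1 if unusable. */
        static int leadingDigit(String parameter) {
            try {
                double value = Math.abs(Double.parseDouble(parameter.trim()));
                if (value == 0 || Double.isNaN(value) || Double.isInfinite(value)) {
                    return -1;
                }
                // Scale into [1, 10) and take the integer part
                while (value >= 10) value /= 10;
                while (value < 1)   value *= 10;
                return (int) value;
            } catch (NumberFormatException e) {
                return -1; // not a parseable number
            }
        }

        public static void main(String[] args) {
            System.out.println(leadingDigit("345.72")); // 3
            System.out.println(leadingDigit("0.0062")); // 6
        }
    }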

  • OS X Mavericks Contacts phone numbers shown as Floating point numbers

    After installing Mavericks on my MBP, I noticed that some phone numbers in Contacts are shown as floating point numbers and are unreadable. I am guessing that these are numbers which were previously preceded by "+".
    An example is 4.9161E+12, which is for a German contact, and the number would be +49 161...
    I can't see any way to change the format, and the floating point display is useless, as the last digits are lost.
    Would appreciate any advice on the matter.
    Many thanks,
    Michael.

    Music and pics are a one-way sync - computer to iPhone. The iPhone is not a storage device.
    Only iTunes-purchased music can be transferred. With the iPhone attached to iTunes, click File > Transfer Purchases.
    Pics are optimized for viewing on iphone, reducing the quality of the pic on the iphone. Only pics taken with iphone and in the Camera Roll can be imported from iphone. This is done as with any other digital camera. You can e-mail the other pics to yourself one at a time. They will not be of the original quality though.
    You really should back up your info/pics/music as hard drives can and do fail.

  • Serial Communication (CAN) of Floating Point Numbers

    Hi,
    I have run into a situation where I need to use floating point numbers (IEEE floating-point standard) when communicating. How can I send and receive floating point numbers? What conversions need to be made? Is there a good resource on this?
    Thanks,
    Ken

    Hi K.,
    in automotive a lot of fractional values are exchanged via CAN communication, but still the CAN protocol is based on using integer numbers (of variable bit size)…
    "We are thinking we need to use single SGL floats ... that require a higher resolution and precision"
    What is the needed resolution and "precision"? SGL transports 23 bits of mantissa: you can easily pack them into an I32 value!
    Let's take an example:
    I have a signal with a value range of 100…1000 and a resolution of 0.125. To send this over CAN I need to use 13 bits and scale the value with a gain of 0.125 and an offset of 100. Values will be sent in an integer representation:
    msgdata   value
       0      100
     500      162.5
     501      162.625
    1000      225
    5000      725
    7200      1000
    The formula to translate msgdata to value is easy: value := msgdata * gain + offset.
    Another example: the car I am testing at the moment sends its current speed as a 16-bit integer value with a range of 0…65532. The gain is 0.01, so the speed translates to 0.00…655.32 km/h. The values 65533-65535 have special meanings to indicate errors…
    So again: What is your needed resolution and data range?
    Another option: send the SGL as it is: just 4 bytes. Receive those 4 bytes on your PC as I32/U32 and typecast them to SGL…
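    For what it's worth, a rough Java sketch of both routes (the gain/offset scaling from the example above, and sending the raw IEEE-754 bits of a SGL); the gain and offset are just the example values, adjust them to your CAN message definition:
    public class CanFloatDemo {
        // Example scaling from above: resolution 0.125, range starting at 100
        static final double GAIN = 0.125;
        static final double OFFSET = 100.0;

        static int encode(double physical) {     // physical value -> raw msgdata
            return (int) Math.round((physical - OFFSET) / GAIN);
        }

        static double decode(int msgdata) {      // raw msgdata -> physical value
            return msgdata * GAIN + OFFSET;
        }

        public static void main(String[] args) {
            System.out.println(encode(162.5));   // 500
            System.out.println(decode(7200));    // 1000.0

            // Alternative: send the SGL's raw IEEE-754 bits as a 32-bit integer (4 bytes)
            int rawBits = Float.floatToIntBits(123.45f);
            float received = Float.intBitsToFloat(rawBits);
            System.out.println(received);        // 123.45
        }
    }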
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • Floating-point numbers: min value

    Hi,
    the number wrapper classes each define a MAX_VALUE and a MIN_VALUE constant. While the MIN_VALUE of the non-floating point types is a negative number, the MIN_VALUE of the floating point types is the smallest positive number.
    From the Javadoc:
    Float:
    MAX_VALUE:
    A constant holding the largest positive finite value of type float, (2 − 2^-23) × 2^127. It is equal to the hexadecimal floating-point literal 0x1.fffffeP+127f and also equal to Float.intBitsToFloat(0x7f7fffff).
    MIN_VALUE:
    A constant holding the smallest positive nonzero value of type float, 2^-149. It is equal to the hexadecimal floating-point literal 0x0.000002P-126f and also equal to Float.intBitsToFloat(0x1).
    Double:
    MAX_VALUE:
    A constant holding the largest positive finite value of type double, (2 − 2^-52) × 2^1023. It is equal to the hexadecimal floating-point literal 0x1.fffffffffffffP+1023 and also equal to Double.longBitsToDouble(0x7fefffffffffffffL).
    MIN_VALUE:
    A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
    Can someone tell me the MAX_NEGATIVE_VALUE (the finite negative value with the largest absolute value) and the MIN_NEGATIVE_VALUE (the negative value with the smallest nonzero absolute value) of float and double (using the xxxBitsToXxx-methods)?
    Thanks!
    -Puce

    The IEEE754 format is symmetric with respect to the sign, so -MAX_VALUE
    and -MIN_VALUE are the values you're looking for.
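    In terms of the bit patterns the question asked about, flipping the sign bit of the MAX_VALUE / MIN_VALUE literals gives exactly those values; a quick check:
    public class NegativeExtremes {
        public static void main(String[] args) {
            // Largest-magnitude finite negative values
            System.out.println(Float.intBitsToFloat(0xff7fffff) == -Float.MAX_VALUE);              // true
            System.out.println(Double.longBitsToDouble(0xffefffffffffffffL) == -Double.MAX_VALUE); // true

            // Smallest-magnitude nonzero negative values
            System.out.println(Float.intBitsToFloat(0x80000001) == -Float.MIN_VALUE);              // true
            System.out.println(Double.longBitsToDouble(0x8000000000000001L) == -Double.MIN_VALUE); // true
        }
    }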
    kind regards,
    Jos

  • BUG: Large floating point numbers convert to the wrong integer

    Hi,
    When using the conversion "bullets" to convert SGL, DBL and EXT to integers there are some values which convert wrong. One example is the integer 9223370937343148030, which can be represented exactly as a SGL (and thus exactly as DBL and EXT as well). If you convert this to I64 you get 9223370937343148032 instead, even though the correct integer is within the range of an I64. There are many similar cases, all (I've noticed) within the large end of the ranges.
    This has nothing to do with which integers can be represented exactly as a floating point value or not. This is a genuine conversion bug mind you.
    Cheers,
    Steen
    CLA, CTA, CLED & LabVIEW Champion

    Yes, I understand the implications involved, and there definitely is a limit to how many significant digits can be displayed in the numeric controls and constants today. I think that either this limit should be lifted, or a cap should be put on the configuration page when setting the display format.
    I ran into this problem as I'm developing a new toolset that lets you convert all the numeric formats into any other numeric format, just like the current "conversion bullets". My conversion bullets have outputs for overflow and exact conversion as well, since I need that functionality myself for a Math toolset (GPMath) I'm also developing. Eventually I'll maybe include underflow as well, but for now just those two outputs are available. Example:
    I do of course pay close attention to the binary representation of the numbers to calculate the Exact conversion? output correctly for each conversion variation (there are hundreds of VIs in polymorphic wrappers), but I relied in some cases on the ability of the numeric indicator to show a true number when configured appropriately - that was when I discovered this bug, which I at first mistook for a conversion error in LabVIEW.
    Is there a compliancy issue with EXT?
    While doing this work I've discovered that the EXT format is somewhat misleadingly labelled as "80-bit IEEE compliant" (it says so here), but that statement should be read with some suspicion IMO. The LabVIEW EXT is not simply IEEE 754-1985 compliant anyways, as that format would imply the x87 80-bit extended format. An x87 IEEE 754 extended precision float only has 63-bit fraction and a 1-bit integer part. That 1-bit integer part is implicit in single and double precision IEEE 754 numbers, but it is explicit in x87 extended precision numbers. LabVIEW EXT seems to have an implicit integer part and 64-bit fraction, thus not straight IEEE 754 compliant. Instead I'd say that the LabVIEW EXT is an IEEE 754r extended format, but still a proprietary one that should deserve a bit more detail in the available documentation. Since it's mentioned several places in the LabVIEW documentation that the EXT is platform independent, your suspicion should already be high though. It didn't take me many minutes to verify the apparent format of the EXT in any case, so no real problem here.
    Is there a genuine conversion error from EXT to U64?
    The integer 18446744073709549568 can be represented exactly as EXT using this binary representation (mind you that the numeric indicators won't display the value correctly, but instead show 18446744073709549600):
    EXT-exponent: 0x100000000111110b
    EXT-fraction: 0x1111111111111111111111111111111111111111111111111111000000000000b
    --> Decimal: 18446744073709549568
    The above EXT value converts exactly to U64 using the To Unsigned Quad Integer "bullet". But then let's try to flip the blue bit from 0 to 1 in the fraction part of the EXT, making this value:
    EXT-exponent: 0x100000000111110b
    EXT-fraction: 0x1111111111111111111111111111111111111111111111111111100000000000b
    --> Decimal: 18446744073709550592
    The above EXT value is still within U64 range, but the To Unsigned Quad Integer "bullet" converts it to U64_max which is 18446744073709551615. Unless I've missed something this must be a genuine conversion error from EXT to U64?
    /Steen
    CLA, CTA, CLED & LabVIEW Champion

  • MIDP and floating-point numbers

    Hiya,
    I need to use floating-point numbers on a few occasions in a MIDP app that I'm programming at the moment. Whenever I do so, though, I get a nerve-wracking error at the pre-verification stage of the build. I am working with Wireless Toolkit 2.0 and the error I get is:
    Building "MyMIDlet"
    Error preverifying class <ClassName>
    ERROR: floating-point constants should not appear
    com.sun.kvem.ktools.ExecutionException: Preverifier returned 1
    Build failed
    I get this error whenever I use a floating-point variable or even whenever I compare something to a floating-point value.
    It's got me really puzzled and I can't think of any way around the problem. Any help would be very much appreciated...
    Thanks!

    "also WTK 2.1 (and CLDC 1.1, I think) does support FP" - You think right, CLDC 1.1, that's when they added the floats (in theory a device could be MIDP 1.0 on top of CLDC 1.1, and it would also have floats). There's only one CLDC 1.1 device I've heard of: Nokia 6320.
    shmoove
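    If you have to stay on CLDC 1.0 (which has no float or double at all), the usual workaround is fixed-point arithmetic with plain ints. A very rough sketch, assuming three implied decimal places and positive values only:
    public class FixedPoint {
        static final int SCALE = 1000;           // 3 implied decimal places

        static int fromString(String s) {        // "1.25" -> 1250 (positive values only, for brevity)
            int dot = s.indexOf('.');
            if (dot < 0) return Integer.parseInt(s) * SCALE;
            String frac = (s.substring(dot + 1) + "000").substring(0, 3);
            return Integer.parseInt(s.substring(0, dot)) * SCALE + Integer.parseInt(frac);
        }

        static int mul(int a, int b) {           // fixed-point multiply
            return (int) ((long) a * b / SCALE);
        }

        static String asString(int fixed) {      // 2500 -> "2.500"
            return (fixed / SCALE) + "." + String.valueOf(SCALE + (fixed % SCALE)).substring(1);
        }

        public static void main(String[] args) {
            int price = fromString("1.25");              // 1250
            int doubled = mul(price, fromString("2.0")); // 2500
            System.out.println(asString(doubled));       // 2.500
        }
    }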

  • 128-bit floating point numbers on new AMD quad-core Barcelona?

    There's quite a lot of buzz over at Slashdot about the new AMD quad core chips, announced yesterday:
    http://hardware.slashdot.org/article.pl?sid=07/02/10/0554208
    Much of the excitement is over the "new vector math unit referred to as SSE128", which is integrated into each [?!?] core; Tom Yager, of Infoworld, talks about it here:
    Quad-core Opteron? Nope. Barcelona is the completely redesigned x86, and it’s brilliant
    Now here's my question - does anyone know what the inputs and the outputs of this coprocessor look like? Can it perform arithmetic [or, God forbid, trigonometric] operations [in hardware] on 128-bit quad precision floats? And, if so, will LabVIEW be adding support for it? [Compare here versus here.]
    I found a little bit of marketing-speak blather at AMD about "SSE 128" in this old PDF Powerpoint-ish presentation, from June of 2006:
    http://www.amd.com/us-en/assets/content_type/DownloadableAssets/PhilHesterAMDAnalystDayV2.pdf
    WARNING: PDF DOCUMENT
    Page 13: "Dual 128-bit SSE dataflow, Dual 128-bit loads per cycle"
    Page 14: "128-bit SSE and 128-bit Loads, 128b FADD, 128 bit FMUL, 128b SSE, 128b SSE"
    etc etc etc
    While it's largely just gibberish to me, "FADD" looks like what might be a "floating point adder", and "FMUL" could be a "floating point multiplier", and God forbid that the two "SSE" units might be capable of computing some 128-bit cosines. But I don't know whether that old paper is even applicable to the chip that was released yesterday, and I'm just guessing as to what these things might mean anyway.
    Other than that, though, AMD's main website is strangely quiet about the Barcelona announcement. [Memo to AMD marketing - if you've just released the greatest thing since sliced bread, then you need to publicize the fact that you've just released the greatest thing since sliced bread...]

    I posted a query over at the AMD forums, and here's what I was told.
    I had hoped that e.g. "128b FADD" would be able to do something like the following:
    /* "quad" is a hypothetical 128-bit quad precision  */
    /* floating point number, similar to "long double"  */
    /* in recent versions of C++:                       */
    quad x, y, z;
    x = 1.000000000000000000000000000001;
    y = 1.000000000000000000000000000001;
    /* the hope was that "128b FADD" could perform the  */
    /* following 128-bit addition in hardware:          */
    z = x + y;
    However, the answer I'm getting is that "128b FADD" is just a set of two 64-bit adders running in parallel, which are capable of adding two vectors of 64-bit doubles more or less simultaneously:
    double x[2], y[2], z[2];
    x[0] = 1.000000000000000000000000000001;
    y[0] = 1.000000000000000000000000000001;
    x[1] = 2.000000000000000000000000000222;
    y[1] = 2.000000000000000000000000000222;
    /* Apparently the coordinates of the two "vectors" x & y       */
    /* can be sent to "128b FADD" in parallel, and the following   */
    /* two summations can be computed more or less simultaneously: */
    z[0] = x[0] + y[0];
    z[1] = x[1] + y[1];
    Thus e.g. "128b FADD", working in concert with "128b FMUL", will be able to [more or less] halve the amount of time it takes to compute a dot product of vectors whose coordinates are 64-bit doubles.
    So this "128-bit" circuitry is great if you're doing lots of linear algebra with 64-bit doubles, but it doesn't appear to offer anything in the way of greater precision for people who are interested in precision-sensitive calculations.
    By the way, if you're at all interested in questions of precision sensitivity & round-off error, I'd highly recommend Prof Kahan's page at Cal-Berzerkeley:
    http://www.cs.berkeley.edu/~wkahan/
    PDF DOCUMENT: How JAVA's Floating-Point Hurts Everyone Everywhere
    http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf
    PDF DOCUMENT: Matlab's Loss is Nobody's Gain
    http://www.cs.berkeley.edu/~wkahan/MxMulEps.pdf

  • Does java has problem storing floating point numbers?

    Hi, just a quick question. I have been programming in Java for at least 4 or 5 years by now and I feel rather comfortable with programming in Java. However, just the other day I hit a problem with Java which I was hoping someone on this forum could fill in for me.
    Now please consider the following simple statement:
    double d = 3.0 / 5 / 3;
    and let's say I printed this number.
    Why does Java compute it as 0.19999999999999998 rather than 0.2,
    and 2.44 − 7.006 as -4.566000000000001 instead of -4.566?
    Frankly, until now I never knew Java behaved this way, and it seems so bizarre.
    I've written a similar program in C and Perl just to see what happens, and they produced the expected result.
    Anyway, the program I was writing at the time was really trivial stuff and this problem does not need an urgent solution, but I would like to know why this is happening and what the solution might be.
    Oh, just in case, the version of JDK I am using right now is (build 1.4.2-b28)
    Thanks

    Take a look at some of these links:
    http://java.sun.com/developer/JDCTechTips/2003/tt0204.html#2
    http://java.sun.com/developer/JDCTechTips/2001/tt0807.html#tip1
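    In short: double is binary floating point, so decimal values like 0.2 and 4.566 have no exact binary representation, and the small error becomes visible when the result is printed. If you need exact decimal arithmetic, BigDecimal constructed from strings behaves the way you expect; a small sketch:
    import java.math.BigDecimal;

    public class ExactDecimal {
        public static void main(String[] args) {
            // Plain doubles: binary floating point
            System.out.println(3.0 / 5 / 3);    // 0.19999999999999998
            System.out.println(2.44 - 7.006);   // -4.566000000000001

            // Decimal arithmetic: exact, as long as the divisions terminate
            BigDecimal a = new BigDecimal("3.0")
                    .divide(new BigDecimal("5"))
                    .divide(new BigDecimal("3"));
            System.out.println(a);              // 0.2
            System.out.println(new BigDecimal("2.44").subtract(new BigDecimal("7.006"))); // -4.566
        }
    }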
