Java rounding for Double data type

Hi,
Is there a way to change the default precision to 2 digits after the decimal point for the Double data type?

java.text.DecimalFormat:
import java.text.DecimalFormat;
import java.text.FieldPosition;

DecimalFormat format = new DecimalFormat("#.00");
StringBuffer formatted = format.format(Math.PI, new StringBuffer(), new FieldPosition(0));
System.out.println(formatted); // prints out 3.14

Probably as applicable today as it was 3 years ago.
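A note on scope: a double itself always carries full binary precision, so you can only control how it is formatted or compute a rounded value from it. For completeness, a small sketch of two other standard-library approaches (the class name here is just for the example):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundTwoPlaces {
    public static void main(String[] args) {
        double value = Math.PI;

        // format for display only; %f rounds half-up while formatting
        System.out.println(String.format("%.2f", value)); // 3.14

        // produce an actual rounded value, not just a rounded string
        BigDecimal rounded = BigDecimal.valueOf(value).setScale(2, RoundingMode.HALF_UP);
        System.out.println(rounded); // 3.14
    }
}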

Similar Messages

  • Issue in trusted flat file recon for usr field having double data type

    Hi
    I have a field in the usr schema which is of double data type. When I run the GTC trusted flat file recon, the user gets created and all fields are populated, but the field with the double data type does not get populated.
    I have also tried another way: after creating the user, I try to update that field (which is of double data type) via a GTC trusted flat file update, but in that case no recon event is generated.
    Environment: OIM 11.1.1.5.0
    I am stuck. Please help.

    Somebody please share an idea!
    Any thoughts on this would be really appreciated.
    Thanks in advance.

  • Error ORA-01426: numeric overflow when Creating table with double data type

    Hi,
    I am using ODP.NET to create a table in Oracle with data from SQL Server. The problem is with the double data type fields, which are equivalent to FLOAT(49) in Oracle. My syntax is:
    CREATE TABLE SCOTT.ALLTYPES (
      F1 NUMBER(10),
      F10 DATE,
      F2 NUMBER(10),
      F3 NUMBER(5),
      F4 NUMBER(3),
      F5 FLOAT(23),
      F6 FLOAT(49),
      F7 NUMBER(38,5),
      F8 NVARCHAR2(500),
      F9 NVARCHAR2(500)
    )
    The error is with field F6, but I am not sure what the correct Oracle type equivalent to a SQL Server double is.
    I would appreciate it if anyone can help me with this problem.
    Sunny

    Does this simple test work for you?
    In database:
    create table float_test (
      f1 float(49)
    );
    C# code:
    using System;
    using System.Data;
    using System.Text;
    using Oracle.DataAccess.Client;
    using Oracle.DataAccess.Types;
    namespace FloatTest
    {
      /// <summary>
      /// Summary description for Class1.
      /// </summary>
      class Class1
      {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main(string[] args)
        {
          // connect to local db using o/s authenticated account
          OracleConnection con = new OracleConnection("User Id=/");
          con.Open();
          // will hold the value to insert
          StringBuilder sbValue = new StringBuilder();
          // create a string of 49 number 9's
          for (int i = 0; i < 49; i++)
          {
            sbValue.Append("9");
          }
          // command object to perform the insert
          OracleCommand cmd = con.CreateCommand();
          cmd.CommandText = "insert into float_test values (:1)";
          // bind variable for the value to be inserted
          OracleParameter p_value = new OracleParameter();
          p_value.OracleDbType = OracleDbType.Double;
          p_value.Value = Convert.ToDouble(sbValue.ToString());
          // add parameter to collection
          cmd.Parameters.Add(p_value);
          // execute the insert operation
          cmd.ExecuteNonQuery();
          // clean up
          p_value.Dispose();
          cmd.Dispose();
          con.Dispose();
        }
      }
    }
    SQL*Plus after executing the above code:
    SQL> select * from float_test;
            F1
    ----------
    1.0000E+49
    1 row selected.
    - Mark
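    The same test from Java just binds a plain double (a sketch only: the connection URL, user and password below are hypothetical; the table is the float_test table above):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class FloatTest {
        public static void main(String[] args) throws Exception {
            // adjust the connection details for your environment
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(
                    "insert into float_test values (?)")) {
                ps.setDouble(1, 1e49); // roughly the same magnitude as the 49 nines above
                ps.executeUpdate();
            }
        }
    }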

  • Microsoft OLE DB Provider for Oracle: Data type is not supported.

    I got the error:
    Microsoft OLE DB Provider for Oracle: Data type is not supported.
    Shortly after upgrading from Oracle 8 to Oracle 9. I was advised to download more up-to-date Oracle drivers, but I was wondering if there is a way to tell what version of the 'OLE DB Provider for Oracle' is already installed. Is there a command I can use via SQL*Plus or something?

    I have found Microsoft ODBC for Oracle to be more stable than the Microsoft OLEDB for Oracle driver. I have also found both Microsoft ODBC and OLEDB drivers to be more stable than the drivers from Oracle.
    You could always get the latest MDAC (Microsoft Data Access Components) from Microsoft's MSDN Download site and then get the ODAC (Oracle Data Access Components) from Oracle's OTN Download site. ODAC requires MDAC. And ODAC has the latest drivers.
    I suppose it would help to have the latest patches for your Oracle client software too. Maybe Oracle MetaLink would have these?
    It may even help to have the latest service pack for Visual Studio 6 (Visual C++ 6 and Visual Basic 6) too.

  • [SOLVED] Value too large for defined data type in Geany over Samba

    Some months ago Geany started to output an error with every attempt to open a file mounted over smbfs/cifs.
    The error was:
    Value too large for defined data type
    Now the error is solved thanks to a French user, Pierre, on Ubuntu's Launchpad:
    https://bugs.launchpad.net/ubuntu/+bug/ … comments/5
    The solution is to add these options to your smbfs/cifs mount options (in /etc/fstab for example):
    ,nounix,noserverino
    It works on Arch Linux up-to-date (2009-12-02)
    I've written it up on the ArchWiki too: http://wiki.archlinux.org/index.php/Sam … leshooting
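    For example, a complete /etc/fstab entry with those options might look like this (the server, share, mount point and credentials file are placeholders, not from the original post):
    //server/share  /mnt/share  cifs  credentials=/etc/samba/creds,nounix,noserverino  0  0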

    An update on the original bug. This is the direct link to launchpad bug 455122:
    https://bugs.launchpad.net/ubuntu/+sour … bug/455122

  • Alter mount database failing: Intel SVR4 UNIX Error: 79: Value too large for defined data type

    Hi there,
    I am having a kind of weird issue with my Oracle Enterprise db, which had been working perfectly since 2009. After some trouble with my network switch (the switch was replaced), the whole network came back and all subnet devices are functioning perfectly.
    This is an NFS mount for the Oracle db backup, and Oracle is not starting in mount/alter etc.
    Here the details of my server:
    - SunOS 5.10 Generic_141445-09 i86pc i386 i86pc
    - Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
    - 38TB disk space (plenty free)
    - 4GB RAM
    And when I attempt to start the db, here are the logs:
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
      processes                = 150
      shared_pool_size         = 209715200
      control_files            = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
      db_cache_size            = 104857600
      compatible               = 10.2.0
      log_archive_dest         = /opt/oracle/oradata/CATL/archive
      log_buffer               = 2867200
      db_files                 = 80
      db_file_multiblock_read_count= 32
      undo_management          = AUTO
      global_names             = TRUE
      instance_name            = CATL
      parallel_max_servers     = 5
      background_dump_dest     = /opt/oracle/admin/CATL/bdump
      user_dump_dest           = /opt/oracle/admin/CATL/udump
      max_dump_file_size       = 10240
      core_dump_dest           = /opt/oracle/admin/CATL/cdump
      db_name                  = CATL
      open_cursors             = 300
    PMON started with pid=2, OS id=10751
    PSP0 started with pid=3, OS id=10753
    MMAN started with pid=4, OS id=10755
    DBW0 started with pid=5, OS id=10757
    LGWR started with pid=6, OS id=10759
    CKPT started with pid=7, OS id=10761
    SMON started with pid=8, OS id=10763
    RECO started with pid=9, OS id=10765
    MMON started with pid=10, OS id=10767
    MMNL started with pid=11, OS id=10769
    Thu Nov 28 05:49:02 2013
    ALTER DATABASE   MOUNT
    Thu Nov 28 05:49:02 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Trying to start the db without mounting, it starts without issues:
    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area  343932928 bytes
    Fixed Size                  1280132 bytes
    Variable Size             234882940 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                2912256 bytes
    SQL>
    But when I try to mount or alter the db:
    SQL> alter database mount;
    alter database mount
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    SQL>
    From the logs again:
    alter database mount
    Thu Nov 28 06:00:20 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Thu Nov 28 06:00:20 2013
    ORA-205 signalled during: alter database mount
    We have already checked everywhere in the system and engaged Oracle Support as well, without success. The control files are in place and, checked with strings, they look correct.
    Can somebody give me a clue, please?
    Maybe somebody has had a similar issue here...
    Thanks in advance.

    I did the touch to update the date, but no joy either...
    Here are further logs, which may give a clue:
    Wed Nov 20 05:58:27 2013
    Errors in file /opt/oracle/admin/CATL/bdump/catl_j000_7304.trc:
    ORA-12012: error on auto execute of job 5324
    ORA-27468: "SYS.PURGE_LOG" is locked by another process
    Sun Nov 24 20:13:40 2013
    Starting ORACLE instance (normal)
    control_files = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
    Sun Nov 24 20:15:42 2013
    alter database mount
    Sun Nov 24 20:15:42 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Sun Nov 24 20:15:42 2013
    ORA-205 signalled during: alter database mount

  • 'Value too large for defined data type' error while running flexanlg

    While trying to run flexanlg to analyze my access log file I have received the following error:
    Could not open specified log file 'access': Value too large for defined data type
    The command I was running is
    ${iPLANET_HOME}/extras/flexanlg/flexanlg -F -x -n "Web Server" -i ${TMP_WEB_FILE} -o ${OUT_WEB_FILE} -c hnrfeuok -t s5m5h5 -l h30c+5 -p ctl
    which should generate an HTML report of the web statistics.
    The file has approximately 7 million entries and is 2.3 GB in size.
    Ideas?

    I've concatenated several files together from my web servers as I wanted a single report; several reports based on individual web servers are of no use.
    I'm running iWS 6.1 SP6 on Solaris 10, on a zoned T2000
    SunOS 10 Generic_118833-23 sun4v sparc SUNW,Sun-Fire-T200
    Cheers
    Chris

  • Initialization for char data type?

    I observed that for a String I can declare something like this:
    String str = ""; but for the char data type, if I declare:
    char ch = ''; the compiler gives an error: "empty character literal". So how can I fix this problem when I want to initialize my char variables as "empty"? Anyone got an idea?
    Message was edited by:
    hoangSoccer

    Strings (and StringBuffers/Builders) are collections* of characters - so it makes sense to be able to refer to a String with precisely zero characters. But char is not a collection of anything - it has whatever value it has and can never be empty.
    *collections with a small 'c'.
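    A minimal illustration of the usual stand-ins (these are conventions, not a true "empty" char; the class name is just for the example):
    public class CharInitDemo {
        public static void main(String[] args) {
            char ch = '\u0000';      // the null character, the default value of a char field
            char space = ' ';        // or pick an explicit sentinel such as a space
            Character boxed = null;  // a Character wrapper really can hold "no value"

            System.out.println((int) ch);      // 0
            System.out.println(boxed == null); // true
        }
    }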

  • Data change after call xml transformation for RAW data type

    Hi,
    Can someone explain what the workaround is, or offer any help, for the problem below?
    When CALL TRANSFORMATION is used for USR02 records, the generated XML string shows different data for RAW data type fields; for example, the content of fields BCODE, PASSCODE etc. gets converted into different data.
    For example, the content of the BCODE field before the XML transformation is, say, 7F8087472FB996E5;
    after the XML transformation it is displayed as f4CHRy+5luU=
    Why is this happening? Is this known behaviour of the XML transformation, or is it a bug in the XML transformation?
    thanks in advance.
    Regards,
    John.

    Hi,
    I think this is because RAW data is BASE64 encoded when using CALL TRANSFORMATION.
    Old post, but did you ever find a way to change that?
    Thanks
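    A quick standalone check of the Base64 explanation (plain Java, not ABAP): decoding the value quoted above recovers the original raw bytes.
    import java.util.Base64;

    public class RawBase64Check {
        public static void main(String[] args) {
            // the value seen after CALL TRANSFORMATION in the post above
            byte[] raw = Base64.getDecoder().decode("f4CHRy+5luU=");

            // print the bytes back as hex
            StringBuilder hex = new StringBuilder();
            for (byte b : raw) {
                hex.append(String.format("%02X", b));
            }
            System.out.println(hex); // 7F8087472FB996E5
        }
    }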

  • Mkisofs: Value too large for defined data type too large

    Hi:
    Has anyone met this problem when using the mkisofs command?
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    Warning: creating filesystem that does not conform to ISO-9660.
    mkisofs 2.01 (sparc-sun-solaris2.10)
    Scanning iso
    Scanning iso/rac_stage1
    mkisofs: Value too large for defined data type. File iso/rac_stage3/Server.tar.gz is too large - ignoring
    Using RAC_S000 for /rac_stage3 (rac_stage2)
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Thanks!

  • OPMN Failed to start: Value too large for defined data type

    Hello,
    Just restarted OPMN and it failed to start, with the following errors in opmn.log:
    OPMN worker process exited with status 4. Restarting
    /opt/oracle/product/IAS10g/opmn/logs/OC4J~home~default_island~1: Value too large for defined data type
    Does anyone have ideas about the cause of this error? The server had been working normally for more than 6 months with periodic restarts...

    Hi,
    You could get error messages like that if you try to access a file larger than 2GB on a 32-bit OS. Do you have HUGE log files?
    Regards,
    Mathias

  • LOCID is not defined for Order data type ORDR

    Dear All,
    I am trying to load data into Order Forecast Details in Dynamic Replenishment of SNC in the Web UI. After entering the Product and Customer Location and clicking GO, the system gives the error below.
    Time series error in class /SCF/CL_ICHDM_DEMANDMN_DATAAXS
    method /SCF/IF_ICHDMAXS_2_CNTL~SELECT
    Access error for time series type DFC01, key figure CORDEROR, ODM
    message: Parameter LOCID is not define for order data type ORDR
    Could you please help me with what to do in this case?
    I have reactivated the following components, but the system is still giving the above error.
    Activate Order Component
    Activate Order Data Type
    Activate Order Data Area
    and
    Activate Time Series Data Type
    Activate Time Series Data Area
    Thanks and Regards,
    T.Muthyalappa

    Hi,
    Make sure you have proper authorization. (Sometimes the system does not display an error, but the user doesn't have authorization to create/update/modify table entries; to be on the safe side, assign the SAP_ALL profile to your ID if that is allowed by your company's security process.)
    I think the issue is that InfoObject 9ALOCID is not activated properly.
    You need to activate InfoObject 9ALOCID as mentioned in step 2 below, but it is better to follow both steps.
    1) Force generate the ODM.
    2) Go to transaction code /N/SCA/TSDM09, enter time series DFC01 and select the radio button Activate Planning Object Structure.
    Regards,
    Nikhil.

  • Resources for designing data types and message types

    Hi
    I wanted to know if anyone can recommend some good resources that explain the design considerations for data types and message types in XI and that can help promote reusability.
    Thanks.
    Best Regards,
    Kiran

    Hi,
    There is no strict rule for defining your data types. Why do I say this? Because you define a data type from the documentation that the sender system team gives you, so you only have to copy that structure into PI.
    Message types are less of a problem, because a data type is assigned to a message type. What that means is: when you define a data type you are defining the structure of your XML, and when you assign the data type to a message type that structure becomes an XML document.
    Thank you,
    Rodrigo

  • Rounding for Analytical Data

    Is there a function module that does Rounding for Analytical Data?
    If I use function module ROUND, 1.25 becomes 1.3 and 1.35 becomes 1.4
    But under analytical rounding rules, if the digit before the dropped 5 is even you round down, and if it is odd you round up. So 1.25 becomes 1.2 and 1.35 becomes 1.4.

    Megan,
    Do one thing: check the last digit of the number (dec_digit in the code snippet below), i.e. the first digit after the decimal as per your rule. You can convert the number to a string and then use SPLIT or something similar to get the first digit after the decimal.
    Then use an IF block to either floor or ceil:
    gv_val = dec_digit MOD 2.
    IF gv_val = 0.
      gv_result = floor( amount ).
    ELSE.
      gv_result = ceil( amount ).
    ENDIF.
    I am not giving executed code here as I don't have SAP with me at the moment, but use this logic. Hope this will do the job.
    Thanks
    Somu
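    For reference, this even/odd tie rule is what Java's standard library calls HALF_EVEN (banker's) rounding; a minimal JDK-only sketch, not SAP-specific:
    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class AnalyticalRounding {
        public static void main(String[] args) {
            // round-half-even: ties go to the neighbour with an even last digit
            System.out.println(new BigDecimal("1.25").setScale(1, RoundingMode.HALF_EVEN)); // 1.2
            System.out.println(new BigDecimal("1.35").setScale(1, RoundingMode.HALF_EVEN)); // 1.4
        }
    }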

  • PLEASE HELP!!!  Problem with Java and SQLServer Text data type

    Hi there,
    I have a java app. that reads from an MS SQLServer database. Originally, all long text fields were declared as NVARCHAR(200). The program worked fine.
    Someone then advised that I change all long text fields to the TEXT data type. The program now crashes out with the following Exception:
    "java.sql.SQLException: [JRun][SQLServer JDBC Driver]This ResultSet can not re-read row data for column 25."
    Basically, I have a method that retrieves a resultset and iterates through it. The resultset is passed to another method during each iteration. In the example below, the 'specialNote' field used to be NVARCHAR(200) and the code worked fine. Then, when it was changed to TEXT, the program no longer worked and threw the above exception.
    Anyone know any special way SQLServer TEXT data types need to be handled?
    Thanks for any advice!
    The code looks something like this in functionality:
    <CODE>
    public void method1(Connection conn) throws SQLException {
        ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM ProductBB");
        while (rs.next()) {
            method2(rs);
        }
    }

    public void method2(ResultSet rs) throws SQLException {
        String str = rs.getString("specialNote");
    }
    </CODE>

    Hi JWoods,
    Thanks for the suggestion. I originally had the code do what you suggested, i.e. get the resultset and then retrieve the data all within the same method. The data is then used to set properties in an object.
    When I had to create another method that also retrieved a resultset, but using a different primary key, and then use the returned data to set the properties on the same type of object, I didn't want to repeat the setter code. That's why I decided to pass the resultsets to the same method that did the property setting.
    Unfortunately, it stopped working with the data type change.
    Any other thoughts?
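    One common workaround when a driver streams TEXT columns (a sketch only; the ProductBB table and the specialNote column come from the post above, everything else is hypothetical): read each row's columns once, in select-list order, into a plain holder object, and pass that object to the shared setter method instead of the ResultSet.
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ProductLoader {

        /** Plain holder so each column is read from the ResultSet exactly once. */
        static class ProductRow {
            String specialNote;
            // copy any other needed columns here as plain fields
        }

        public void method1(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT specialNote FROM ProductBB")) {
                while (rs.next()) {
                    ProductRow row = new ProductRow();
                    row.specialNote = rs.getString("specialNote"); // read once, in order
                    method2(row);
                }
            }
        }

        public void method2(ProductRow row) {
            // reuse the same setter logic, but on copied values rather than the live ResultSet
            System.out.println(row.specialNote);
        }
    }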
