Problem with non-currency field calculations to become CURR

Hi guys,
Is there a problem if I have a QUAN field, a DEC field and a CURR field combined in a calculation? I mean, I have this computation below:
v_var1 = v_var2 * v_var3.
where v_var1 type QUAN, v_var2 type DEC and v_var3 type CURR...
Would it cause any problems with the calculation?
Thanks!

Hi,
Did you try it?
It worked for me flawlessly:
tables bseg.
parameters: qty like bseg-menge,
            amt like bseg-dmbtr.
data: result like bseg-dmbtr.
result = qty * amt.
write result.
The only issue is that the result will be rounded to 2 decimals.
But if you declare result as
data: result(13) type p decimals 3.
then there will be no issues.
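Putting the two snippets together, a minimal sketch (same field names as above):
tables bseg.
parameters: qty like bseg-menge,
            amt like bseg-dmbtr.
* result keeps 3 decimal places instead of being rounded to 2
data: result(13) type p decimals 3.
result = qty * amt.
write result.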
regards,
Advait
Edited by: Advait Gode on Oct 3, 2008 3:59 PM

Similar Messages

  • Problem with Non-English Fields Output to PDF by JASPER in JDev10.1.3

    I am using jsprx files (designed in iReport) to generate PDF reports out of an Oracle database.
    The non-English fields are shown correctly when I output the report to HTML or when I view it with JasperView.
    If I try making PDF files (JasperExportManager.exportReportToPdfFile), the static fields containing e.g. Arabic/Chinese characters won't be displayed, and dynamic fields from the database with non-English contents will be shown as ??? or null.
    I received some suggestions about using PARAMETERS to feed the report instead of FIELDS, which I think cannot help in this case or in general.
    I think this should be a common problem. These are the components I am using:
    itext-1.4.7.jar
    commons-digester-1.7.zip
    jasperreports-1.2.8.jar
    Any comment or help is appreciated.
    Thanks
    Farbod

  • Problem with a Currency field in Adhoc Query - HR

    Hi,
    I have an Adhoc query that uses Custom infotype fields (Z infotype and z fields).
    The currency field also has a reference field in the infotype (of type WAERS).
    When we try to get the output of the Ad Hoc Query, it gives the following error:
    The report cannot be generated because the internal description is invalid or incomplete, or because the selection screen is too large.
    Regenerate the assigned InfoSet, and read the log. If the InfoSet is OK, make sure that at least one field is given as output.
    If you used the 'Refresh' icon to start the query, use the 'Output' menu option to execute the query. This gives you a full screen display of the data.
    If an output was generated, the query cannot work with actual data in the construction view. In this case, always use the 'Output' function to execute the query.
    I have already tried a number of solutions:
    1> Regenerated the InfoSet...
    2> Made the field an additional field and wrote my own code for it (the error comes before the code; I set a breakpoint but it stopped before reaching it).
    If I add other fields of this InfoSet instead of this field, they appear in the output.
    Any solutions?
    thanks in advance,
    Anuj

    Hi,
    Is the problem not clear, or does no one have an answer?
    Please reply with some suggestions.
    Regards,
    Anuj.

  • Problem with Local Currency field in PIPE monitor

    Hello, we have configured PIPE in BW and sent IDoc WPUBON (sales) there.
    But the task "Supply to BW immediately" failed:
    The error message is :
    "Field Local Currency does not have a value (initial).
    System Response
    Current processing requires the field to be enriched within the master data checks.
    Procedure for System Administration
    Check Customizing for the master data checks."
    My question is whether the problem refers to PIPE configuration problems or to BW master data downloading.
    I didn't find this field anywhere in the /POSDW/ IMG. Maybe this is a BW master data problem?
    But I am confused by the phrase "Customizing for the master data checks". Is it still a PIPE Customizing problem?
    Where is the "Customizing for the master data checks"? What does it mean?
    Thank You.

    Hi
    Can anyone please tell me whether you have a solution for this? I am getting this problem in the quality box.
    Regards
    Mark

  • Problem with the currency fields in the table cdpos

    Hi Gurus,
    I have observed one thing with the CDPOS table: whenever we make changes to currency fields (and, I am not sure, numeric fields as well), CDPOS shows a new entry for the change but does not show the old value. Why? But when we make changes to date fields, it does store the old date.
    So is there any specific feature of this table?
    Thanks & Regards,
    Rakesh.

    Hello
    In an ABAP program, try this:
    DATA: NEW  LIKE CDPOS-VALUE_NEW,
          OLD  LIKE CDPOS-VALUE_OLD,
          NEW1 TYPE MENGE_D,
          OLD1 TYPE MENGE_D.
    FIELD-SYMBOLS: <f> TYPE ANY.
    * first make the SELECT from CDHDR and CDPOS
    IF CDPOS-FNAME = 'MENGE'.
      MOVE CDPOS-VALUE_NEW TO NEW.
      MOVE CDPOS-VALUE_OLD TO OLD.
    * convert the character values to numeric quantities, catching conversion errors
      ASSIGN NEW TO <f>.
      CATCH SYSTEM-EXCEPTIONS CONVT_NO_NUMBER = 2
                              OTHERS          = 4.
        NEW1 = <f>.
      ENDCATCH.
      ASSIGN OLD TO <f>.
      CATCH SYSTEM-EXCEPTIONS CONVT_NO_NUMBER = 2
                              OTHERS          = 4.
        OLD1 = <f>.
      ENDCATCH.
    ENDIF.
    After this, NEW1 and OLD1 will contain the numeric values.

  • [SOLVED!] On USB drives, problems with non-English chars and HAL

    Hello,
    I am having a problem with non-English characters (áãàçéẽê...) in files stored on my USB drive.
    On Windows they're created with the correct names. But on Linux the files have the non-English characters replaced by '?' and are not accessible.
    If I manually mount the drives using 'mount -o iocharset=utf8 /dev/sdb1 /media/usbdisk' the characters are OK, so I think I just need to get HAL to pass the correct parameters to mount. However, I don't know how to do that, and I haven't found any good solution.
    I tried to build a custom kernel setting the default charset as UTF-8 and it didn't work.
    Any ideas? I'm using x86-64, HAL 0.5.13-3 and my locale is pt-BR.UTF-8.
    Thanks!
    EDIT: Actually, this is not a HAL problem, but a problem with 'exo'. For the solution, I edited /etc/xdg/xfce4/mount.rc and added iocharset=utf8 to the [vfat] category.
    Last edited by Renan Birck (2009-11-28 20:54:23)

    I don't use Thunar presently, but I looked in the Thunar Volume Manager doc and I didn't find anything to change the mount options of removable drives. I am not quite sure if it's possible or not. Maybe someone using it can tell for sure.
    But if it is not possible to change the mount options, a possible solution is to disable the Thunar Volume Manager plugin and to use something else more configurable to manage the automount function.
    Personally I use the halevt package from AUR which uses configuration files in the xml format.
    It's not so easy to use but is highly configurable.
    But other tools exist as well.
    I can help you with halevt if you choose that way...

  • Problem with a Dynpro field (type numc)

    hi everybody.
    I'm developing a module pool application in which I have 2 radio buttons with 2 textbox fields.
    What I intend to do is, when the user clicks a radio button and presses Enter, enable the corresponding textbox field and disable the other one.
    My code runs fine, but I have a little problem. When I loop over the screen table to set the appropriate value of the 'input' property, in this case when I try to disable it (input = '0'), I get a zero character in that field. This field has type NUMC, and I'm sure this is the problem,
    because I have no problem with the other field (type CHAR). But I can't solve it.
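    The loop is the usual LOOP AT SCREEN pattern in PBO, roughly like this (the element name P_QTY is only a placeholder, not the real field name):
    * PBO: disable the NUMC textbox when its radio button is not selected
    loop at screen.
      if screen-name = 'P_QTY'.
        screen-input = '0'.
        modify screen.
      endif.
    endloop.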
    Anybody's got an idea?
    Thanks

    What you are seeing is the normal behavior of a numeric field represented by the SAPgui.  This is how all numeric fields are displayed via SAPgui.  If you don't want to see the 0,  then you just change the field type to CHAR and handle accordingly.
    Regards,
    Rich Heilman

  • Problems with non-ASCII characters on Linux Unit Test Import

    I found a problem with non-ASCII characters in the Unit Test Import for Linux.  This problem does not appear in the Unit Test Import for Windows.
    I have attached a Unit Test export called PROC1.XML  It tests a procedure that is included in another attachment called PROC1.txt. The unit test includes 2 implementations.  Both implementations pass non-ASCII characters to the procedure and return them unchanged.
    In Linux, the unit test import will change the non-ASCII characters in the XML file to xFFFD (the Unicode replacement character). If I copy/paste the non-ASCII characters into the Unit Test after the import, they will be stored and executed correctly.
    Amazon Ubuntu 3.13.0-45-generic / lubuntu-core
    Oracle 11g Express Edition - AL32UTF8
    SQL*Developer 4.0.3.16 Build MAIN-16.84
    Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)
    In Windows, the unit test will import the non-ASCII characters unchanged from the XML file.
    Windows 7 Home Premium, Service Pack 1
    Oracle 11g Express Edition - AL32UTF8
    SQL*Developer 4.0.3.16 Build MAIN-16.84
    Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
    If SQL*Developer is coded the same between Windows and Linux, the JVM must be causing the problem.

    Set the System property "mail.mime.decodeparameters" to "true" to enable the RFC 2231 support.
    See the javadocs for the javax.mail.internet package for the list of properties.
    Yes, the FAQ entry should contain those details as well.

  • Problem with non iPhone users receiving my text sent pictures. Can I make them smaller files?

    Problem with non-iPhone users receiving the pictures I send by text. Can I make them smaller files? I know that is possible for pictures sent by email.

    My bad, it was a Verizon problem. When they upgraded my phone they were supposed to remove all blocks on messaging. Found out they did not do this. All messaging now works. Thanks for listening.

  • Problems with non-ascii keywords

    I have some problems with non-ASCII keywords that make the whole keyword feature useless for me. I don't know if I'm doing something wrong or if there is something I'm missing completely.
    The problem is that when I enter something like "grön", iPhoto sometimes refuses to let me type "grön" on another photo and "eats" the "ö". And if I select the matching keyword from the popup list, iPhoto has changed the original "grön" to "gr¨ön" (if that comes out right on the web). Here are a few screenshots to show what happens.
    If someone knows what happens, I would really appreciate some hints on how to avoid this.
    Note that adding/editing keywords works just fine in Aperture.

    There is a long-standing issue with iPhoto and non-ASCII characters. I know of no solution.
    iPhoto menu -> Provide iPhoto Feedback and report it as a bug.
    Regards
    TD

  • JPA - How to map relation with NON-KEY field.

    Hello.
    The problem with my mapping is a NullPointerException when calling EntityManager's em.createNativeQuery:
    Table1 (Bm_Treeassoc):
    MY_ID (Primary Key)
    BOOKMARKID (-> MY_ID in Table2)
    Text
    Table2 (Bm_Bookmark):
    MY_ID ( Primary Key)
    Text
    //CLASS BmTreeassoc
    @OneToMany(targetEntity=BmBookmark.class, mappedBy="treeMaster", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
         private BmTreeassoc treeMaster = null ;
    //CLASS BmBookmark
    @ManyToOne(targetEntity=BmTreeassoc.class, fetch = FetchType.LAZY, cascade = CascadeType.ALL, optional = true)
         @JoinColumn(name="MY_ID", referencedColumnName="BOOKMARKID", unique=true)     
         private ArrayList<BmBookmark> bookmarks = new ArrayList<BmBookmark>() ;
    This leads to the exception.
    Mapping from MY_ID to MY_ID instead of BOOKMARKID does not throw the exception,
    so I assume I have a problem with the key field?
    Any ideas?
    Kind regards
    Frank

    OK,
    after reflecting (and after maniacally trying for days),
    here is my own answer:
    I no longer use the mapping stuff at all.
    I solved my join by using a view:
    CREATE VIEW SAPDEMO.VTree
         AS SELECT t.TreeID, b.MY_ID, b.CLIENTID, b.Nickname, u.URL
         FROM SAPDEMO.BM_TreeAssoc as t
              JOIN SAPDEMO.BM_BOOKMARK as B ON t.BookmarkID = b.MY_ID
              JOIN SAPDEMO.BM_URL as U ON b.URL = u.MY_ID
    Create the entity and then do a:
    select * from VTREE where clientid=1 and treeid=446
    Works great, simple, fast.

  • Problem with the quantity field

    hi every one
    I am facing a problem with the quantity field (VBAP-KWMENG).
    As per my requirement, I need to display this quantity field along with some other item fields from VBAP in an ALV grid.
    Among all the fields displayed in the ALV grid, only this quantity field is editable (the end user can change this quantity).
    Once the end user changes this quantity and presses the Save button, I need to capture the new quantity in my internal table.
    The problem is that the input length of the quantity is 15 and the output length is 19.
    So when I press Save, say with a quantity of 50, '0.050' comes back because of the length difference.
    How can I capture the original changed value?
    vamsi

    What about defining two fields in your internal table, one as CHAR and the other like vbap-kwmeng? You can show the CHAR one in the ALV grid, and when the user enters a value and presses SAVE, you can move the value into the vbap-kwmeng field.
    You can test it; maybe someone has a better idea.
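    A minimal sketch of that idea (the structure and field names are just an example, not from the original program):
    types: begin of ty_item,
             kwmeng_c(19) type c,            " editable char column shown in the ALV grid
             kwmeng       type vbap-kwmeng,  " real quantity field (3 decimals)
           end of ty_item.
    data: gt_item type standard table of ty_item.
    field-symbols: <ls_item> type ty_item.
    * on SAVE: move the user input from the char column into the quantity field
    loop at gt_item assigning <ls_item>.
      <ls_item>-kwmeng = <ls_item>-kwmeng_c.
    endloop.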

  • Problem with adding new field to the mass change screen in FBL5N

    Hi,
    We have a problem with adding the field XREF3 to the mass change screen. We followed the steps described in SAP Note 640908, but when we try to mass change some documents in FBL5N and enter some values on the mass change screen, the message "Please enter at least one new value" appears and nothing is changed.
    If you have faced with such a problem, we would be grateful if you give us some tips.
    Regards,
    Miłosz Włodarczyk

    The problem has been resolved: we had not activated the code in SE80.

  • Problem with Non-cumulative key figure.

    Hi all,
    I am facing a problem with a non-cumulative key figure (quantity). I have created a non-cumulative InfoCube and loaded data into it. This cube was defined by me to test the non-cumulative key figure.
    In the BEx query, the non-cumulative key figure and the cumulative key figure (value change) both display the same values, i.e. the non-cumulative key figure contains the same values that we loaded for the cumulative value change. The non-cumulative key figure is not calculated based on the associated cumulative key figure.
    I have done the following while defining the non-cumulative InfoCube:
    1. Created a non-cumulative key figure which is associated with a cumulative key figure (value change).
    2. Loaded data to non-cumulative InfoCube from flat file.
    3. Compressed data in non-cumulative InfoCube after the load.
    Note:
    1. Validity area is determined by the system based on the minimum and maximum date in data.
    2. Validity determining characteristic, 0CALDAY is the default characteristic selected by the system.
    Is there any other settings to be done?
    Please help me in resolving this issue.
    Thanks and regards
    Pruthvi R

    Being a non-cumulative KF, total stock automatically takes care of that.
    Try putting in all the restrictions which you have included for total receipts and total issues; for example, restrict Total Stock with the movement types used in Receipts as well as Issues.
    Check and revert.
    Regards
    Gajendra

  • Problems with EURO currency character

    Hi,
    I'm trying to insert the euro currency character (€) into a CHAR field from a Java client, but after the insert I find an upside-down question mark (¿) in the field.
    I'm using
    - Oracle DB server 11.2.0.1.0 running with character set US7ASCII (same problem with a different Oracle server with WE8ISO8859P1 encoding) on a Linux Red Hat server
    - Java client with JDBC driver 11.2.0.1.0 (ojdbc5.jar) with JDK 5 running on Windows XP: the encoding of the JVM is CP1252.
    I've googled a lot but found only solutions like "... change the encoding of the DB ...": I can't change the DB encoding.
    Any idea?
    Thanks.
    Here's is a test case:
    ************ begin test case ************
    // the usual java.sql imports (Connection, DriverManager, Statement, PreparedStatement, ResultSet) are assumed
    Statement stmt = null;
    PreparedStatement pst = null;
    Connection conn = null;
    ResultSet rs = null;
    try {
        Class.forName("oracle.jdbc.driver.OracleDriver");
        conn = DriverManager.getConnection("jdbc:oracle:thin:@SERVER:1521:SID", "USER", "PASSWORD");
        stmt = conn.createStatement();
        stmt.execute("CREATE TABLE JJOVA(A CHAR(5) NOT NULL)");
        stmt.close();
        pst = conn.prepareStatement("INSERT INTO JJOVA(A) VALUES(?)");
        pst.setString(1, "€");
        pst.execute();
        stmt = conn.createStatement();
        rs = stmt.executeQuery("select A from JJOVA");
        while (rs.next()) {
            String tmp = rs.getString(1);
            System.out.println(tmp);
        }
        rs.close();
        pst.close();
        stmt = conn.createStatement();
        stmt.execute("DROP TABLE JJOVA");
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (rs != null) {
                rs.close();
            }
            if (pst != null) {
                pst.close();
            }
            if (stmt != null) {
                stmt.close();
            }
            if (conn != null) {
                conn.close();
            }
        } catch (Exception dontcare) {}
    }
    ************ end test case ************
    Regards,
    Andrea

    Thank you Oradba for the explanation of the COBOL program behavior: we're going to change the DB character set (an export/import seems not to be required; we found Oracle Note 257722.1 "Changing WE8ISO8859P1 to WE8ISO8859P15"). I'll mark your answer as the (most) correct one.
    Oviwan, NLS_NCHAR_CHARACTERSET is AL16UTF16: I tried with an NCHAR field but I can't even store accented vowels correctly.
    Thank you very much everyone,
    Andrea
