Mapping of Decimal field

In my mapping scenario, I have 2 fields in the IDoc as source which are currency amounts with 2 decimals. However, in the target file that is written I don't get the decimal value.
e.g.: if the value in the IDoc record is 12.00
then in the file it comes out as only 12.
Please help,
Thanks
Tarun

Hi Tarun,
    Test your mapping once with the same payload and check whether the decimal places are populated for these fields. If not, use formatNumber with the pattern 0.00; it will populate the value with the two decimal places.
Check the same and let me know how you proceed.
Cheers
Veera
>>>Reward points if it is helpful.
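For reference, the formatNumber function appears to follow the same pattern syntax as java.text.DecimalFormat, so the padding effect of a 0.00 pattern can be sketched standalone. This is illustrative only, not XI internals; the fixed US locale just pins the decimal separator to a point:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class PadDecimals {
    // Format a numeric string with exactly two decimal places,
    // mirroring what a 0.00 pattern does in a mapping step.
    public static String pad(String value) {
        DecimalFormat df = new DecimalFormat("0.00",
                DecimalFormatSymbols.getInstance(Locale.US)); // '.' as separator
        return df.format(Double.parseDouble(value));
    }

    public static void main(String[] args) {
        System.out.println(pad("12"));    // 12.00
        System.out.println(pad("12.5"));  // 12.50
    }
}
```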

Similar Messages

  • Inbound Invoice IDoc segment fields mapping to SAP fields

    Hi,
    Could you please help me to find the tables and fields in SAP for the inbound invoice IDoc type INVOIC01. I need to map the IDoc segment fields to the SAP fields and their respective tables.
    Thanks in Advance
    Satish

    Hi Satish,
    That's quite a big mapping to ask for; why would you really need it? You can process the IDoc with the standard process codes and you will then see the standard mapping happen.
    You can also go through the logic of the processing modules:
    Function Module Name          Short text for function module
    IDOC_INPUT_INVOIC_FI           EDI: Invoice Receipt (INVOICE)
    IDOC_INPUT_INVOIC_MM           EDI: Invoice Receipt (INVOICE)
    IDOC_INPUT_INVOIC_MRM
    For sure you will get RBKP for the header, RSEG for items, RBTX for taxes and some more, depending on what you get in the invoice and what your booking and matching processes are.
    Best Regards,
    Tomek

  • DatabaseMetaData.getTypeInfo has no mapping for DECIMAL

    I've written a generic utility for migrating tables from one backend database to another. It utilizes the JDBC DatabaseMetaData.getTypeInfo() method to return both the universe of java.sql.Types that the backend supports, and the mapping from java.sql.Types to the specific database data type. In theory, if I run this against both the source and the destination database JDBC driver, I should be able to write create table statements under program control from any backend to any backend - presuming that they both support java.sql.Types for all the columns. And, in fact, this is what I've done. In the event that the source database supports a java.sql.Types type that the destination database does not - I error out with a message describing the difficulty.
    For its maiden voyage, I had it read a SQL Server database as a source, and an Oracle database as a destination. Imagine my surprise when my program reported that it couldn't migrate the table because Oracle doesn't support java.sql.Types = 3 (DECIMAL)! At first I thought this must be my bug. But, I've run a tester program against two different Oracle type 4 drivers using DatabaseMetaData.getTypeInfo() and neither of them return ResultSet output that lists a DATA_TYPE of 3 as one of the supported java.sql.Types.
    What goes on here? I would think that DECIMAL would map to Oracle NUMBER type quite well.
    Here's the business portion of the simplified tester program:
    void tester(Connection c) {
        try {
            DatabaseMetaData dbmd = c.getMetaData();
            ResultSet r = dbmd.getTypeInfo();
            while (r.next()) {
                System.out.println(r.getString("TYPE_NAME") + " | " +
                                   r.getInt("DATA_TYPE"));
            }
        } catch (SQLException se) {}
    }
    ...and here's its output:
    INTERVALDS | -104
    INTERVALYM | -103
    TIMESTAMP WITH LOCAL TIME ZONE | -102
    TIMESTAMP WITH TIME ZONE | -101
    NUMBER | -7
    NUMBER | -6
    NUMBER | -5
    LONG RAW | -4
    RAW | -3
    LONG | -1
    CHAR | 1
    NUMBER | 2
    NUMBER | 4
    NUMBER | 5
    FLOAT | 6
    REAL | 7
    VARCHAR2 | 12
    DATE | 91
    DATE | 92
    TIMESTAMP | 93
    STRUCT | 2002
    ARRAY | 2003
    BLOB | 2004
    CLOB | 2005
    REF | 2006
    If I map to java.sql.Types I get this:
    INTERVALDS | null
    INTERVALYM | null
    TIMESTAMP WITH LOCAL TIME ZONE | null
    TIMESTAMP WITH TIME ZONE | null
    NUMBER | BIT
    NUMBER | TINYINT
    NUMBER | BIGINT
    LONG RAW | LONGVARBINARY
    RAW | VARBINARY
    LONG | LONGVARCHAR
    CHAR | CHAR
    NUMBER | NUMERIC
    NUMBER | INTEGER
    NUMBER | SMALLINT
    FLOAT | FLOAT
    REAL | REAL
    VARCHAR2 | VARCHAR
    DATE | DATE
    DATE | TIME
    TIMESTAMP | TIMESTAMP
    STRUCT | STRUCT
    ARRAY | ARRAY
    BLOB | BLOB
    CLOB | CLOB
    REF | REF
    Notice that NUMERIC is listed - but not DECIMAL.
    So, what is my question? First off, have I missed something obvious? If not, second question: is there a workaround short of checking whether getDatabaseProductName() returns 'Oracle' and building my mapping tables from hard code instead of from the Oracle driver?
    Thanks in advance!

    J,
    The following is merely for your information (using JDK 1.5):
    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import oracle.jdbc.pool.OracleDataSource;
    public class DbmdTest {
      public static void main(String[] args) {
        Connection c = null;
        OracleDataSource ods = null;
        ResultSet rs = null;
        try {
          ods = new OracleDataSource();
          ods.setURL("jdbc:oracle:thin:scott/tiger@//host:1521/orcl");
          c = ods.getConnection();
          DatabaseMetaData dbmd = c.getMetaData();
          System.out.println("\nDatabase: " + dbmd.getDatabaseProductVersion());
          System.out.println("\nJDBC Driver: " + dbmd.getDriverName() + " " +
                                                 dbmd.getDriverVersion());
          System.out.printf("\n%-30s %-4s %-10s %s\n",
                            "JDBC Type",
                            "Code",
                            "Size",
                            "Oracle Type");
          System.out.println("------------------------------ ---- ---------- " +
                             "-----------");
          rs = dbmd.getTypeInfo();
          while (rs.next()) {
            System.out.printf("%-30s", rs.getString(1));
            System.out.print(" ");
            System.out.printf("%4d", rs.getInt(2));
            System.out.print(" ");
            System.out.printf("%10d", rs.getInt(3));
            System.out.print(" ");
            System.out.println(rs.getString(13));
          }
        } catch (Exception x) {
          x.printStackTrace();
        } finally {
          if (c != null) {
            try {
              c.close();
            } catch (SQLException xSql) {
              System.err.println("Failed to close database connection.");
              xSql.printStackTrace();
            }
          }
          if (ods != null) {
            try {
              ods.close();
            } catch (SQLException xSql) {
              System.err.println("Failed to close datasource.");
              xSql.printStackTrace();
            }
          }
        }
      }
    }
    And here is the output of the above:
    $ java -cp '.;ojdbc14.jar' DbmdTest
    Database: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
    JDBC Driver: Oracle JDBC driver 10.2.0.1.0
    JDBC Type                      Code Size       Oracle Type
    INTERVALDS                     -104          4 INTERVALDS
    INTERVALYM                     -103          5 INTERVALYM
    TIMESTAMP WITH LOCAL TIME ZONE -102         11 TIMESTAMP WITH LOCAL TIME ZONE
    TIMESTAMP WITH TIME ZONE       -101         13 TIMESTAMP WITH TIME ZONE
    NUMBER                           -7          1 NUMBER
    NUMBER                           -6          3 NUMBER
    NUMBER                           -5         38 NUMBER
    LONG RAW                         -4 2147483647 LONG RAW
    RAW                              -3       2000 RAW
    LONG                             -1 2147483647 LONG
    CHAR                              1       2000 CHAR
    NUMBER                            2         38 NUMBER
    NUMBER                            4         10 NUMBER
    NUMBER                            5          5 NUMBER
    FLOAT                             6         63 FLOAT
    REAL                              7         63 REAL
    VARCHAR2                         12       4000 VARCHAR2
    DATE                             91          7 DATE
    DATE                             92          7 DATE
    TIMESTAMP                        93         11 TIMESTAMP
    STRUCT                         2002          0 STRUCT
    ARRAY                          2003          0 ARRAY
    BLOB                           2004         -1 BLOB
    CLOB                           2005         -1 CLOB
    REF                            2006          0 REF
    Good Luck,
    Avi.
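    One generic workaround for the missing DECIMAL entry, sketched here as an assumption rather than documented driver behaviour, is to fall back between DECIMAL and NUMERIC when building the type map from getTypeInfo(), since JDBC treats the two as near-equivalent exact numeric types:

```java
import java.sql.Types;
import java.util.HashMap;
import java.util.Map;

public class TypeMapper {
    // Resolve a java.sql.Types code to a native type name, falling back
    // between DECIMAL and NUMERIC when the driver's getTypeInfo()
    // result lists only one of the two.
    public static String resolve(Map<Integer, String> typeInfo, int jdbcType) {
        String name = typeInfo.get(jdbcType);
        if (name == null && jdbcType == Types.DECIMAL) {
            name = typeInfo.get(Types.NUMERIC);
        } else if (name == null && jdbcType == Types.NUMERIC) {
            name = typeInfo.get(Types.DECIMAL);
        }
        return name;
    }

    public static void main(String[] args) {
        // Simulated getTypeInfo() result from a driver that omits DECIMAL
        Map<Integer, String> oracleLike = new HashMap<>();
        oracleLike.put(Types.NUMERIC, "NUMBER");
        oracleLike.put(Types.VARCHAR, "VARCHAR2");
        System.out.println(resolve(oracleLike, Types.DECIMAL)); // NUMBER
    }
}
```

    This keeps the migration tool database-independent: only the type map built from each driver changes, not the DDL generation logic.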

  • Why is it such a pain to use Java in a country that uses a comma as the decimal separator?

    Why is it such a pain to use Java in a country that uses a comma as the decimal separator?
    A few weeks back I asked here about the keypad decimal key. For some reason, Java doesn't map the decimal key to a comma on the Portuguese (Portugal) keyboard layout. I got no answer and I ended up using a custom PlainDocument on the JTextFields to replace all points with commas.
    Now, I've just spent the whole morning trying to store and use decimal numbers properly. For some reason, the Double/Float .valueOf method (and the corresponding parse method) simply ignores the locale in use and applies US defaults when parsing the string. I can't parse anything with commas in those methods, and I should be able to, as the comma is the decimal separator for the system and the default locale being used by Java.
    First of all, I shouldn't be expected to perform replacements on every single operation that comes with a comma, and I obviously can't be expected to program my own locale checking to decide what decimal separator to use in each final system. Second, is there any way to work with numbers seamlessly, without having to know the locale of the end user?
    I'm sorry if this is all my fault for doing something completely wrong; I'm new to Java and I did search around to no avail. I'm really frustrated with what seems to be a complete lack of support in Java for locales other than the US one.

    Good old Cobol has the "DECIMAL-POINT IS COMMA" clause... And isn't it great? :)
    > Second, is there any way to work with numbers seamlessly, without having to know the locale of the end user?
    > Consider "123.456". In some locales, this number is one hundred twenty-three thousand, four hundred fifty-six. In other locales it is one hundred twenty-three and four hundred fifty-six thousandths. How will you be able to determine which, without a locale?
    That's not what I meant. Java should know the locale and behave accordingly. I don't have to know the locale of the end user, since it might vary greatly. My point is that if strings are flying around with commas, and the comma is the decimal separator on the end user's machine, then any method aimed at parsing a numeric value out of a string should regard commas as such. I'm constantly replacing dots with commas and vice versa, which could cause trouble if a different locale is used.
    And I mean that as a rant. Given my inexperience with Java, there might be good reasons for such behaviour, as baftos argued. What I'm really interested in is finding the proper way to deal with this issue.
    Have you tried NumberFormat.parse? I will now.
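    As a minimal sketch of the NumberFormat route mentioned above (pt-PT is used here only as an example of a comma-separator locale):

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class CommaParse {
    // Parse a decimal string according to a given locale's separator rules.
    public static double parse(String s, Locale locale) throws ParseException {
        NumberFormat nf = NumberFormat.getInstance(locale);
        return nf.parse(s).doubleValue();
    }

    public static void main(String[] args) throws ParseException {
        // Portuguese (Portugal) uses ',' as the decimal separator
        System.out.println(parse("123,45", new Locale("pt", "PT")));
        // US English uses '.'
        System.out.println(parse("123.45", Locale.US));
    }
}
```

    With NumberFormat.getInstance() (no explicit locale) the parser follows the JVM's default locale, which is usually what the end user's system is set to, so no dot-to-comma replacements are needed.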

  • Correct mapping of NUMC

    Hi,
    After upgrading to SAP_GWFND 740 SP7 we are getting some errors due to the mapping of NUMC to Edm.String. Can anyone tell me the correct and recommended way of mapping NUMC to an EDM data type?
    From the best practices here, OData Best Practices - SAP NetWeaver Gateway Foundation (SAP_GWFND) - SAP Library, I can tell that mapping to decimal and int is not recommended. In SAP note 1977590 it is described that mapping to string will give an error, whilst mapping to decimal might only give a warning. Also in this post, NUMC to Edm.Decimal or Edm.IntXX per best practices from SP8 - But how?, it is described that a warning is the outcome. But I can't find anywhere a description of the best way to map NUMC.
    Can anyone guide me?
    Best regards
    Jan

    Hi Ashwin Dutt R, Chandrashekhar Mahajan and others,
    The problem is that when we try to use Edm.String with max length 8, as shown below, for mapping a NUMC of length 8, we get the compile error below. Do you think it is a product error -> OSS?

  • Mapping logic required

    Hi All,
    A segment E1BLINE in the source IDoc can repeat multiple times (0..999), and at the target a record needs to be generated that many times.
    In the target record structure, a field, say ABC, is mapped to a field XYZ in segment E2ES (0..999). But the test IDoc doesn't have data for these segments. Suppose segment E1BLINE repeats 2 times: I am getting 2 records in the target structure, but the field ABC should have a blank value in each record, as if there is no data in the source. I am unable to work out the logic.
    Thanks in advance

    Hi Priyanka,
    my actual requirement is as follows.
    For example:
    There are 2 segments: the LINE segment and the E2ES segment. I need to generate the target file records based on the LINE segment. In my input data I am getting 2 LINE segments, so I am generating 2 records in the output file. But some of the fields in the target come from the E2ES segment.
    The LINE segment occurs 2 times, so the number of record sets = 2 (which I am able to generate).
    The E2ES segment has 4 fields: QUALI1, FIELD1, QUALI2, FIELD2.
    If QUALI1 = 100,  FIELD1 --> ABC
    If QUALI2 = 200,  FIELD2 --> HBY
    If QUALI2 = 200,  FIELD2 --> HBY
    If QUALI2 is used I can easily get 2 HBY fields at the target. But how do I get two ABC fields?
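    The qualifier-based selection being described can be sketched outside the mapping editor as plain Java. This is a hypothetical helper, not an XI API; the qualifier values and field names are taken from the post:

```java
import java.util.ArrayList;
import java.util.List;

public class QualifierMap {
    // For each E2ES occurrence, emit the field value when its qualifier
    // matches, and a blank when it does not, so every target record
    // still gets a value (possibly empty).
    public static List<String> pick(String[] qualis, String[] fields, String wanted) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < qualis.length; i++) {
            out.add(wanted.equals(qualis[i]) ? fields[i] : "");
        }
        return out;
    }

    public static void main(String[] args) {
        String[] qualis = {"100", "200"};
        String[] fields = {"ABC", "HBY"};
        System.out.println(pick(qualis, fields, "100")); // [ABC, ]
        System.out.println(pick(qualis, fields, "200")); // [, HBY]
    }
}
```

    Emitting a blank instead of skipping the non-matching occurrence is the key point: it keeps the output list the same length as the number of records, which is what the blank-value requirement above asks for.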

  • Transformation Rule: Error while loading from PSA to ODS using DTP

    Hi Experts,
    I am trying to load data from PSA to ODS using DTP. For about 101 records I get the following error:
    "Runtime error while executing rule -> see long text     RSTRAN     301"
    On further looking at the long text:
    Diagnosis
        An error occurred while executing a transformation rule:
        The exact error message is:
        Overflow converting from ''
        The error was triggered at the following point in the program:
        GP4808B5A4QZRB6KTPVU57SZ98Z 3542
    System Response
        Processing the data record has been terminated.
    Procedure
          The following additional information is included in the higher-level
         node of the monitor:
         o   Transformation ID
         o   Data record number of the source record
         o   Number and name of the rule which produced the error
    Procedure for System Administration
    When looking at the detail:
    Error Location: Object Type    TRFN
    Error Location: Object Name    06BOK6W69BGQJR41BXXPE8EMPP00G6HF
    Error Location: Operation Type DIRECT
    Error Location: Operation Name
    Error Location: Operation ID   00177 0000
    Error Severity                 100
    Original Record: Segment       0001
    Original Record: Number        2
    Please can anyone help in deducing this error and pointing to the exact spot in the transformation rule.
    Thanks & Regards,
    Raj

    Jerome,
    The same issue.
    Here are some fields which differ in length when mapped in the transformation rules:
    ODS                        | Data Source
    PROD_CATEG   CHAR 32       | Category_GUID   RAW 16
    CRM_QTYEXP   INT4          | EXPONENT        INT2
    CRM_EXCRAT   FLTP 16       | EXCHG_RATE      DEC 9
    CRM_GWEIGH   QUAN 17,3     | Gross_Weight    QUAN 15
    NWEIGH       QUAN 17,3     | Net_Weight      QUAN 15
    CRMLREQDAT   DATS 8        | REQ_DLV_DATE    DEC 15
    The difference is either that some DATS fields are mapped to decimal, or the CHAR 32 field is mapped to RAW 16, or CALWEEK/CALMONTH is mapped to CALDAY.
    In most cases the ODS field size is greater than that of the input source field.
    Thanks
    Raj

  • Issue while retrieving data from BAPI

    Problem:
    The initial exception that caused the request to fail, was:
    com.sap.dictionary.runtime.DdException: Wrong amount type (not decimal): Unit service cannot be instantiated
    at com.sap.dictionary.runtime.DdBroker.getUnitService(DdBroker.java:215)
    at com.sap.dictionary.runtime.DdBroker.getUnitService(DdBroker.java:233)
    at com.sap.tc.webdynpro.services.datatypes.core.DataTypeBroker.getUnitService(DataTypeBroker.java:337)
    at com.sap.tc.webdynpro.progmodel.context.DataAttributeInfo.initReferenceAttribute(DataAttributeInfo.java:342)
    at com.sap.tc.webdynpro.progmodel.context.NodeInfo.initStructureType(NodeInfo.java:708)
    ... 30 more
    I have a BAPI, ZPM_CREATE_ORDER. I am trying to write a record to that BAPI. I did the model import from R/3 [including the Commit].
    The structures of the BAPI are like this:
    Zpm_create_Order_Input
            Order         -  Zpm_Hdr
            Operations    -  [ Line Items structure ]
    I have created the mapping between the model and the custom controller with Order and Operations. I did the mapping between the custom controller and the view controller, and bound them to the UI elements from the view controller with an input form.
    But there are ABAP decimal fields under Operations, Price and Amount, which are a currency field and a quantity. So when I just try to run the application, I get the amount-mismatch error. When I say run: I couldn't even see the GUI. I get the 500 error with the above problem.
    I read in the forums that I need to declare a simple type, recast the fields to the right type and bind that structure back. However, I couldn't get a complete understanding of it.
    For a retrieval BAPI with the exact same structure, I get the same issue if I map the Operations. However, if I skip the Operations and just map the Order, i.e. the header, then I can retrieve the records.
    Interestingly, even if I just map a String (char) field under the Operations structure I get the same error.
    Any help is appreciated. Thanks in advance...

    Satyajit,
    You are right. One of the fields needed is Waers (currency), which I have mapped from the model to the custom controller. I think that is one of the causes, but there are other fields too (for instance, of decimal type).
    The surprising thing to me is that even if I map only the character fields in that structure, and do not map the decimal or currency fields, I still get the same error.
    Thanks for the reply...
    --Vivek

  • Idoc to file problem

    I am doing an IDoc-to-file scenario.
    1. I have mapped just 6 fields from the IDoc, but while executing the IDoc I get 20 fields from R/3. Will that create a problem, given that I have not mapped those fields of the receiver?
    2. While executing the mapping program in the IR it executes successfully, but in SXMB_MONI it throws an error saying mapping exception.

    Hi,
    On the receiver side you have head, tail and body segments.
    You need to map these to the header nodes of the IDoc corresponding to the fields.
    For example:
    If in the IDoc you have segment1 with three fields field1, field2, field3, etc.,
    and you need to map field1 to the target-side field under the header segment --> field1,
    then the target-side header segment should be mapped to segment1 of the IDoc.
    If the fields from segment1 are used to map to both the head and the tail segments of the target, then you can multi-map both head and tail with segment1.
    Thanks
    Swarup

  • Problem loading data from the PSA to the InfoCube

    Hello experts.
    I'm having a problem loading data from the PSA to the InfoCube.
    I'm using a DTP for this process, but the following error occurs:
    "Diagnosis
          An error occurred while executing the transformation rule:
          The exact error message is:
          Overflow converting from ''
          The error was triggered at the following point in the program:
          GP4KMDU7EAUOSBIZVE233WNLPIG 718
      System Response
          Processing of the data record has been terminated.
    Procedure
          The following additional information is included in the higher-level
         node of the monitor:
         Transformation ID
         Data record number of the source record
         Number and name of the rule which produced the error
    Procedure for System Administration"
    I have already created new DTPs and deactivated and reactivated the InfoCube and the transformation, but that solves nothing.
    Does anyone have any idea what to do?
    Thank you.

    Hi,
    Is it a flat file load, or loading from a data source?
    Try to execute the program GP4KMDU7EAUOSBIZVE233WNLPIG in SE38 and check that it is active and has no syntax errors.
    Check the mapping of the fields in the transformations, whether
    some data fields are mapped to decimal, or a CHAR 32 field is mapped to RAW 16,
    or CALWEEK/CALMONTH is mapped to CALDAY, etc.
    Check in ST22 if there are any short dumps.
    Regards
    KP

  • GetTypeInfo incomplete?

    I'm using JDBC's metadata getTypeInfo method to obtain a mapping between database type names and JDBC types, in order to be able to create my database tables (and use other DDL commands) in a universal way, using the obtained mapping to finally map to database types.
    However, the Oracle (8.1.6) JDBC driver seems to give me incomplete results: amongst others, it does not list a mapping for DECIMAL, BINARY, DATE, DOUBLE or TIME. The Oracle documentation contains more complete lists which include the types mentioned above.
    Is there any other way to obtain a more complete list in a generic (database-independent) way?
    -Otto Perdeck


  • Getting less records in TABLE

    Hi All,
    I am executing a BAPI function module in Web Dynpro.
    My RFC is returning a table with 2 records of data.
    I also mapped all the fields to my TABLE in the Web Dynpro view.
    But I am getting only one record.
    The code checking the number of records is:
    int s = wdContext.nodeTb_Po_Details().size();
    Please suggest why I am getting only one row of records.
    Thanks,
    Ravi

    Hi,
    try to build your application once again.
    Suppose you want to display a material list, with BAPI Z_Matnr.
    After importing it you get Z_Matnr_Input.
    You can implement this code in your view, in the view's init method:
    Z_Matnr_Input input = new Z_Matnr_Input();
    wdContext.node<Z_Matnr_Input>.bind(input);
    If you have any input parameters, set them here:
    input.set<param>(1);
    input.set<param1>(2);
    Otherwise, if you have no parameters, just execute:
    wdContext.current<Z_Matnr_Input>.modelObject().execute();
    Check that the output node is invalidated after the execute; I think this might be the problem:
    wdContext.node<OutPutNode>.invalidate();
    This will help.
    Thanks
    Lohi.

  • Properties file missing

    Hi,
    I am trying to use MySQL with J2EE SDK 1.4. I have created a simple CMP bean. When I try to deploy I get the following error:
    JDO7001: Cannot find resource com/sun/jdo/spi/persistence/generator/database/MYSQL.properties.
    I wonder if this is related to
    mysql.properties=dbType\=mysql
    in my database.properties file. I am not sure what has to be given, but the default PointBase has
    pointbase.properties=dbType\=pointbase
    Thanks for all your help.

    Hi,
    I was having the exact same problem, and I am using MySQL as well.
    The error was:
    JDO7001: Cannot find resource com/sun/jdo/spi/persistence/generator/database/MYSQL.properties.
    I think I have solved my problem. It turned out that I had not saved my CMP mappings (from entity bean to database).
    Open your EAR (or JAR) in the deploy tool.
    Click on a JAR that contains entity beans and switch to the "General" tab for that bean.
    In the main window, open the "META-INF" link... you should see a file called
    "sun-cmp-mappings.xml"
    This file must exist for every JAR containing entity beans; it contains the CMP-to-database mappings.
    If it's not there for a given JAR (one that contains entity beans, that is), then look at the "General" tab for each entity bean and select "Sun-specific Settings".
    In the window that opens, switch the view pull-down to CMP-DATABASE. This is where you map your CMP fields to database fields.
    Note that you must first map the database schema for your databases (see chapter 22 of the J2EE tutorial) before you can map the fields, then add the database schema to your JAR that contains the entity beans.
    Be sure to SAVE after you map!

  • How to use XSLT for mapping field names one by one to array elements

    I have an XSLT case: map the names (not the values) of all elements which have no children to the target, which is an array loop.
    I give a sample below.
    source:
    <Items xmlns="http://www.example.org/sample">
    <SourceSystem>SourceSystem2573</SourceSystem>
    <TimeStamp>2010-01-17T20:54:08.234</TimeStamp>
    <Item>
    <ID>2574</ID>
    <Type>2575</Type>
    <Name>2576</Name>
    </Item>
    </Items>
    source XSD like:
         <element name="Items" type="tns:ItemsType"></element>
         <complexType name="ItemsType">
              <sequence>
                   <element name="SourceSystem" type="string" maxOccurs="1"
                        minOccurs="1">
                   </element>
                   <element name="TimeStamp" type="dateTime" maxOccurs="1"
                        minOccurs="1">
                   </element>
                   <element name="Item" type="tns:ItemType"
                        maxOccurs="unbounded" minOccurs="1">
                   </element>
    </sequence>
         </complexType>
    <complexType name="ItemType">
              <sequence>
                   <element name="ID" type="string" maxOccurs="1"
                        minOccurs="1">
                   </element>
                   <element name="Type" type="string" maxOccurs="1"
                        minOccurs="1">
                   </element>
    <element name="Name" type="string" maxOccurs="1"
                        minOccurs="1">
                   </element>
    </sequence>
         </complexType>
    target need to be like:
    <ns1:AttributesCollection>
    <ns1:Attributes>
    <ns1:fieldname>SourceSystem</ns1:fieldname>
    </ns1:Attributes>
    <ns1:Attributes>
    <ns1:fieldname>TimeStamp</ns1:fieldname>
    </ns1:Attributes>
    <ns1:Attributes>
    <ns1:fieldname>ID</ns1:fieldname>
    </ns1:Attributes>
    <ns1:Attributes>
    <ns1:fieldname>Type</ns1:fieldname>
    </ns1:Attributes>
    <ns1:Attributes>
    <ns1:fieldname>Name</ns1:fieldname>
    </ns1:Attributes>
    </ns1:AttributesCollection>
    target XSD:
    <xs:element name="AttributesCollection" type="AttributesCollection"/>
    <xs:complexType name="AttributesCollection">
    <xs:sequence>
    <xs:element name="Attributes" type="Attributes" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="Attributes">
    <xs:sequence>
    <xs:element name="fieldname" minOccurs="0">
    <xs:simpleType>
    <xs:restriction base="xs:string">
    <xs:maxLength value="100"/>
    </xs:restriction>
    </xs:simpleType>
    </xs:element>
    </xs:sequence>
    </xs:complexType>
    I know we can use local-name() to get the tag/field name,
    but I have no idea how to get these leaf field names one by one and then map them to the array elements.
    I tried the whole day but without success.
    Does anyone have some idea?
    Thanks very much!
    Keith

    Can you paste the source XSD and the correct XML output? The current one isn't really valid:
    <ID>2574</TotalNumOfItems>
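    One way to sketch the requested mapping: the XPath //*[not(*)] selects every leaf element (an element with no child elements), and local-name() emits its name. The target namespace URI and the ns1 prefix below are placeholders, since the real ones aren't shown in the post; the stylesheet is embedded in a small Java driver so it can be run standalone:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class LeafNames {
    // XSLT that emits one Attributes entry per leaf element.
    static final String XSLT =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'"
      + " xmlns:ns1='http://example.org/target'>"  // placeholder target namespace
      + "<xsl:output method='xml' indent='yes'/>"
      + "<xsl:template match='/'>"
      + "<ns1:AttributesCollection>"
      + "<xsl:for-each select='//*[not(*)]'>"
      + "<ns1:Attributes><ns1:fieldname>"
      + "<xsl:value-of select='local-name()'/>"
      + "</ns1:fieldname></ns1:Attributes>"
      + "</xsl:for-each>"
      + "</ns1:AttributesCollection>"
      + "</xsl:template></xsl:stylesheet>";

    public static String transform(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(XSLT)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String src = "<Items xmlns='http://www.example.org/sample'>"
                   + "<SourceSystem>S1</SourceSystem><TimeStamp>2010-01-17</TimeStamp>"
                   + "<Item><ID>2574</ID><Type>2575</Type><Name>2576</Name></Item></Items>";
        System.out.println(transform(src));
    }
}
```

    Because //*[not(*)] matches in any namespace and local-name() drops the prefix, this picks up SourceSystem, TimeStamp, ID, Type and Name without hard-coding the source element names.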

  • Zeroes after decimal getting trimmed after 1:1 mapping of EDIFACT

    hi all
    I have created a 1:1 mapping for XML -> EDI.
    I took the output from XI and gave it as input. However, in the output of the mapping, the zeroes after the decimal are getting deleted.
    For example, if the input contains 2500.000, the output of the mapping contains 2500.
    I want 2500.000 in the output.
    Any suggestions on how to overcome the problem?

    hi Kai
    sorry, I think I didn't convey my problem properly.
    This is what I want.
    My scenario is IDOC -> EDIFACT using XI.
    I did an XI graphical mapping from IDOC to EDIFACT_XML. Here, for a particular field, I got the output as 2500.000.
    Now this EDIFACT_XML is converted into EDIFACT using the 1:1 mapping of BIC MD.
    When I give 2500.000 as input to this 1:1 mapping of BIC MD, the output contains only 2500.
    I'm not having the problem in XI but in BIC MD.
    Kindly suggest how I can get decimal output from BIC MD.
