Reg: Unicode-EBCDIC Conversion

Can anyone suggest a way of converting a Unicode character set to EBCDIC, and vice versa?
The conversion should happen within Java.

I am working on this myself; here is what I have come up with, but it is still in testing. I am mainly unsure what dangers there are in the cast conversions (int to char / char to int).
public class EBCDICtoASCIIConverter {

     /** ASCII => EBCDIC conversion table (index = ASCII code point, value = EBCDIC code point). */
     static final int[] a2e = {
       0,  1,  2,  3, 55, 45, 46, 47, 22,  5, 37, 11, 12, 13, 14, 15,
      16, 17, 18, 19, 60, 61, 50, 38, 24, 25, 63, 39, 28, 29, 30, 31,
      64, 79,127,123, 91,108, 80,125, 77, 93, 92, 78,107, 96, 75, 97,
     240,241,242,243,244,245,246,247,248,249,122, 94, 76,126,110,111,
     124,193,194,195,196,197,198,199,200,201,209,210,211,212,213,214,
     215,216,217,226,227,228,229,230,231,232,233, 74,224, 90, 95,109,
     121,129,130,131,132,133,134,135,136,137,145,146,147,148,149,150,
     151,152,153,162,163,164,165,166,167,168,169,192,106,208,161,  7,
      32, 33, 34, 35, 36, 21,  6, 23, 40, 41, 42, 43, 44,  9, 10, 27,
      48, 49, 26, 51, 52, 53, 54,  8, 56, 57, 58, 59,  4, 20, 62,225,
      65, 66, 67, 68, 69, 70, 71, 72, 73, 81, 82, 83, 84, 85, 86, 87,
      88, 89, 98, 99,100,101,102,103,104,105,112,113,114,115,116,117,
     118,119,120,128,138,139,140,141,142,143,144,154,155,156,157,158,
     159,160,170,171,172,173,174,175,176,177,178,179,180,181,182,183,
     184,185,186,187,188,189,190,191,202,203,204,205,206,207,218,219,
     220,221,222,223,234,235,236,237,238,239,250,251,252,253,254,255
     };

     /** EBCDIC => ASCII conversion table (index = EBCDIC code point, value = ASCII code point). */
     static final int[] e2a = {
       0,  1,  2,  3,156,  9,134,127,151,141,142, 11, 12, 13, 14, 15,
      16, 17, 18, 19,157,133,  8,135, 24, 25,146,143, 28, 29, 30, 31,
     128,129,130,131,132, 10, 23, 27,136,137,138,139,140,  5,  6,  7,
     144,145, 22,147,148,149,150,  4,152,153,154,155, 20, 21,158, 26,
      32,160,161,162,163,164,165,166,167,168, 91, 46, 60, 40, 43, 33,
      38,169,170,171,172,173,174,175,176,177, 93, 36, 42, 41, 59, 94,
      45, 47,178,179,180,181,182,183,184,185,124, 44, 37, 95, 62, 63,
     186,187,188,189,190,191,192,193,194, 96, 58, 35, 64, 39, 61, 34,
     195, 97, 98, 99,100,101,102,103,104,105,196,197,198,199,200,201,
     202,106,107,108,109,110,111,112,113,114,203,204,205,206,207,208,
     209,126,115,116,117,118,119,120,121,122,210,211,212,213,214,215,
     216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,
     123, 65, 66, 67, 68, 69, 70, 71, 72, 73,232,233,234,235,236,237,
     125, 74, 75, 76, 77, 78, 79, 80, 81, 82,238,239,240,241,242,243,
      92,159, 83, 84, 85, 86, 87, 88, 89, 90,244,245,246,247,248,249,
      48, 49, 50, 51, 52, 53, 54, 55, 56, 57,250,251,252,253,254,255
     };

     static char ASCIItoEBCDIC(char c) {
          // The char/int casts themselves are lossless, but the lookup is only
          // valid for code points 0-255; a char above 255 would index past the table.
          int n = (int) c;
          System.out.println("ASCII character " + c + " is in position " + n);
          System.out.println("EBCDIC character is in position " + a2e[n]);
          return (char) a2e[n];
     }

     static char EBCDICtoASCII(char c) {
          int n = (int) c;
          System.out.println("EBCDIC character " + c + " is in position " + n);
          System.out.println("ASCII character is in position " + e2a[n]);
          return (char) e2a[n];
     }

     static String ASCIItoEBCDIC(String s) {
          StringBuffer sb = new StringBuffer();
          for (int i = 0; i < s.length(); i++) {
               sb.append(ASCIItoEBCDIC(s.charAt(i)));
          }
          return sb.toString();
     }

     static String EBCDICtoASCII(String s) {
          StringBuffer sb = new StringBuffer();
          for (int i = 0; i < s.length(); i++) {
               sb.append(EBCDICtoASCII(s.charAt(i)));
          }
          return sb.toString();
     }

     public static void main(String[] args) {
          String etext = ASCIItoEBCDIC("Test String");
          System.out.println("Test String = " + etext);
          String atext = EBCDICtoASCII(etext);
          System.out.println("Test String = " + atext);
     }
}
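On the cast question: the (char)/(int) casts are lossless, but indexing a 256-entry table with a char only works for code points 0-255; any real Unicode character above that would throw ArrayIndexOutOfBoundsException. A table-free alternative is to lean on the JDK's built-in EBCDIC charsets. A minimal sketch, assuming the IBM code page 500 charset ("Cp500") is present in your JRE (most full JDKs ship it; your mainframe may instead use Cp037 or IBM1047, so check which variant applies):

```java
import java.nio.charset.Charset;

public class CharsetRoundTrip {
    // Assumption: code page 500 (international EBCDIC); substitute the
    // code page your mainframe actually uses (Cp037, IBM1047, ...).
    static final Charset EBCDIC = Charset.forName("Cp500");

    // Unicode string -> raw EBCDIC bytes
    static byte[] toEbcdic(String s) {
        return s.getBytes(EBCDIC);
    }

    // raw EBCDIC bytes -> Unicode string
    static String fromEbcdic(byte[] b) {
        return new String(b, EBCDIC);
    }

    public static void main(String[] args) {
        byte[] raw = toEbcdic("Test String");
        System.out.println(fromEbcdic(raw)); // round-trips unchanged
    }
}
```

Unlike the hand-rolled table, this also degrades gracefully on characters the target code page cannot represent (they become a substitution byte rather than crashing).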

Similar Messages

  • EBCDIC conversion

    hi
my requirement is to create a function module for EBCDIC conversion of negative quantities. This is for Accounts Payable invoice records.
For example, a Quantity Received field, S9(8), of -1150 will be passed as 0000115}
For example, an A/P Invoice Amount field, S9(10)v9(2), of -592.52 will be passed as 00000005925K
Negative sign digits:
     0 = }
     1 = J
     2 = K
     3 = L
     4 = M
     5 = N
     6 = O
     7 = P
     8 = Q
     9 = R
    with regards,
    srinath.

    Hi,
    Have you considered using the ABAP command TRANSLATE?
    You can disassemble the value (WRITE into a character structure with 8 + 1 character fields) and then TRANSLATE the second field using the dictionary you wrote in your post.
    TRANSLATE ls_value-last_digit USING '0}1J2K3L4M5N6O7P8Q9R'.
    <<text removed>>
    Regards,
    Ogeday
    Edited by: Matt on Feb 17, 2009 5:56 AM - Please do not ask for points
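For anyone who has to read these fields outside ABAP: the trailing character is a standard zoned-decimal sign overpunch, and decoding the negative case is mechanical. A hypothetical Java sketch using exactly the } / J-R mapping listed in the post (positive values, which use plain digits or { / A-I, are left out):

```java
public class ZonedDecimal {
    // Decode a zoned-decimal field whose trailing character carries a
    // negative sign: '}' means last digit 0, 'J'..'R' mean last digit 1..9.
    static long decodeNegative(String field) {
        char last = field.charAt(field.length() - 1);
        int lastDigit;
        if (last == '}') {
            lastDigit = 0;
        } else if (last >= 'J' && last <= 'R') {
            lastDigit = last - 'J' + 1;
        } else {
            throw new IllegalArgumentException("not a negative overpunch: " + last);
        }
        long leading = Long.parseLong(field.substring(0, field.length() - 1));
        return -(leading * 10 + lastDigit);
    }

    public static void main(String[] args) {
        System.out.println(decodeNegative("0000115}"));     // -1150
        System.out.println(decodeNegative("00000005925K")); // -59252, i.e. -592.52 with two implied decimals
    }
}
```

Both sample values from the post round-trip: 0000115} decodes to -1150, and 00000005925K to -59252 (the S9(10)v9(2) picture implies two decimal places).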

  • ASCII to EBCDIC conversion

    Hi there!
I am working on an Oracle data extract project. My output file will be sent to a DB2 database and a mainframe application. In my file, some fields are of type COMP-3. I use the Oracle built-in function convert() to convert to EBCDIC. This works fine in SQL*Plus, but when I use it in my PL/SQL program I get "ORA-06502: PL/SQL: numeric or value error: character to number conversion error". Here is my program.
    FUNCTION get_vd_pro_norm_mth_amt(p_account_rec IN ACCOUNT_ROW) return varchar2
    as
    v_scaled_amount               varchar2(15);
    BEGIN
         select convert(a.scaled_amount, 'WE8EBCDIC500','US7ASCII')
         into v_scaled_amount
         from pin61_02.rate_bal_impacts_t a, pin61_02.rate_plan_t b
         where b.poid_id0 = a.obj_id0 and
              b.account_obj_db = p_account_rec.poid_db and
              b.account_obj_id0 = p_account_rec.poid_id0 and
              b.account_obj_type = p_account_rec.poid_type and
              b.account_obj_rev = p_account_rec.poid_type;
         return v_scaled_amount;
    EXCEPTION
    WHEN OTHERS THEN
    RETURN NULL;
    END get_vd_pro_norm_mth_amt;
I guess the wrong data type of my variable v_scaled_amount generated the problem. I do not know which data type I should use to store EBCDIC data.
    Thanks a lot!
    Max

    try with nvarchar2.
    NVARCHAR2
    You use the NVARCHAR2 datatype to store variable-length Unicode character data. How the data is represented internally depends on the national character set specified when the database was created, which might use a variable-width encoding (UTF8) or a fixed-width encoding (AL16UTF16). Because this type can always accommodate multibyte characters, you can use it to hold any Unicode character data.
    The NVARCHAR2 datatype takes a required parameter that specifies a maximum size in characters. The syntax follows:
    NVARCHAR2(maximum_size)
    Because the physical limit is 32767 bytes, the maximum value you can specify for the length is 32767/2 in the AL16UTF16 encoding, and 32767/3 in the UTF8 encoding.
    You cannot use a symbolic constant or variable to specify the maximum size; you must use an integer literal.
    The maximum size always represents the number of characters, unlike VARCHAR2 which can be specified in either characters or bytes.
    my_string NVARCHAR2(200); -- maximum size is 200 characters
    The maximum width of a NVARCHAR2 database column is 4000 bytes. Therefore, you cannot insert NVARCHAR2 values longer than 4000 bytes into a NVARCHAR2 column.
    You can interchange VARCHAR2 and NVARCHAR2 values in statements and expressions. It is always safe to turn a VARCHAR2 value into an NVARCHAR2 value, but turning an NVARCHAR2 value into a VARCHAR2 value might cause data loss if the character set for the VARCHAR2 value cannot represent all the characters in the NVARCHAR2 value. Such data loss can result in characters that usually look like question marks (?).
Joel Pérez

  • Unicode MDMP conversion issue with document management table.

    Hi,
We are in the process of doing a Unicode conversion for our ECC 6.0 MDMP non-Unicode system. We have completed the scans and found close to 14 million words not assigned to code pages.
We then checked at the table level which table has the highest number of words. There is one custom table, ZQMDOCS, which is used to store MS Word documents; these are test procedures for our labs.
On the non-Unicode system we are able to get to the MS Word documents. We did a dummy assignment and completed the import, and on our Unicode system, when we try to open a document, it opens in a readable format.
So the issue is that the data stored in the documents is in English, but the formatting is read as special characters in a Unicode system, and SAP stores it as raw data.
Please suggest ways to resolve this issue or any possible workarounds. This is a very critical table (43,000 documents and close to 14 million words not assigned to the code page).
    Thanks
    Junaid.

    Hi Venkat,
    Thanks a lot for your immediate response.
The InfoObject 0DOC_TYPE had no conversion exit by default, but when data comes from R/3 it is converted before being sent to BW, so I am planning to use the conversion exit "AUART" in the InfoObject.
I checked the data in R/3 using RSA3 and it shows the sales document type as "OR", while for the same transaction data the PSA shows "TA".
    Could you please let me know if there any other options.
    Thanks in advance,
    Dara.

  • Unicode character conversion

    Hello,
From an external system we receive XML messages in UTF-8. The data is transferred from XI to SAP WAS by the RFC adapter. The communication language is set to 'CS' (Czech). The data is saved into the database with no conversion to the configured code page (1401). The receiving system is not Unicode compatible.
Unicode entities (e.g. &#x159;) are written into the database instead of the single characters of the Czech alphabet.
Is there any way to force XI to perform a character conversion?
    Thanks for any feedback.
    Marian Morzol

    Pl identify your database characterset
SQL> select * from NLS_DATABASE_PARAMETERS;
If the NLS_CHARACTERSET is WE8ISO8859P1, it is not capable of storing the Euro symbol (please do a Google search to find various references).
    To store the Euro symbol, you will most likely need to change the database characterset to UTF8 - pl see the MOS Docs mentioned in this thread for details - Adding Greek & German language to R12
    HTH
    Srini

  • Unicode Type Conversion Error

Hi friends,
I am working on a Unicode project and need some help.
I have an error: I am using a structure (Z0028) and passing values to an internal table, and at that point I get an error.
The error is due to a type-conversion problem: the structure has one packed field, so when I select the Unicode check it shows an error.
I am including the example program, and the error appears below.
Please suggest a solution.
REPORT YPRG1.
    TABLES: Z0028.
    DATA:I_Z0028 TYPE Z0028 OCCURS 0 WITH HEADER LINE .
    SELECT * FROM Z0028 INTO TABLE I_Z0028 .
    IF SY-SUBRC <> 0 .
      WRITE:/ ' NO DATA'.
    ENDIF.
      LOOP AT I_Z0028.
        WRITE:/ I_Z0028.
      ENDLOOP.
    Regards,
    Kalidas.T

    Hi,
    Display fields
    do like this..
    REPORT YPRG1 .
    TABLES: Z0028.
    DATA:I_Z0028 TYPE Z0028 OCCURS 0 WITH HEADER LINE .
    SELECT * FROM Z0028 INTO TABLE I_Z0028 .
IF SY-SUBRC <> 0 .
    WRITE:/ ' NO DATA'.
    ENDIF.
    LOOP AT I_Z0028.
WRITE:/ I_Z0028-field1,
        I_Z0028-field2,
        I_Z0028-field3.
    ENDLOOP.
    Regards,
    Prashant

  • Unicode test conversion - skip some steps?

    Hi all,
    We will be upgrading our system from R/3 4.7x200 to ERP2005, and we have recently successfully upgraded our sandbox R/3 4.7x200 system to ERP2005, all pretty much without too much of a hassle. Now we are planning to do a unicode conversion on the same sandbox system, to get some idea about the runtime of such a conversion. The problem is that we are short on time, and I wonder what the effects will be if I skip some of the steps that should be done before we begin with the actual conversion phase (R3load). I have done some of the pre-conversion steps, but not all, and I wonder if there are some that I absolutely MUST do in order to get through the actual conversion?
    In the end, we are not looking for a functioning system after this test conversion, we are only interested in the runtime and downtime findings.
    Regards,
    Thomas

    Hi,
You cannot skip the conversion steps.
A good idea is to use the method released by SAP to do the upgrade and conversion at the same time; this will save you time.
    check OSS note
    Note 928729 - Combined Upgrade & Unicode Conversion FAQ
    Good Luck

  • REG: File content conversion in Receiver file adapter

    HI Gurus,
I have a proxy-to-file scenario with content conversion.
My message type for the source is like this:
MT_SOURCE
|-- ROW
    |-- LINE
The target structure is the same:
MT_TARGET
|-- ROW
    |-- LINE
    i am using the following FCC parameters in receiver file adapter
    LINE.fieldSeparator = '                       '
    LINE.fixedLineWidth = 90
    LINE.lineSeparator  = 'nl'
    LINE.fieldFixedLengths = 200
    LINE.endSeparator        = 'nl'
The output file shows the data differently in Notepad compared to Word.
Word output:
dgepvs                       023456987
kgdd0016155710                SS Smw Ne 01
kgdd0016155710                SS Smw Ne 01
kgdd0016155710                SS Smw Ne 01
kgdd0016155710                SS Smw Ne 01
kgdd0016155710                SS Smw Ne 01
kgdd0016155710                SS Smw Ne 01
kgdd0016155710                SS Smw Ne 01
Notepad output:
dgepvs           023456987   kgdd0016155710                SS Smw Ne 01       kgdd0016155710                SS Smw Ne 01          kgdd0016155710                SS Smw Ne 01            kgdd0016155710                SS Smw Ne 01    kgdd0016155710                SS Smw Ne 01
The Word output is what I expect, but I want the same output in Notepad as well. Can anyone help me resolve this?
    Thanks in advance

There is nothing wrong with your content conversion parameters. This is purely the editor's interpretation of the newline character.
I would fix this in a Java program using '\r\n'. You might want to try that and see how it helps with the Notepad editor.
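The suggestion above, normalizing line endings to CRLF so Notepad renders them, could be sketched as a small Java post-processing step (a hypothetical helper, not part of the adapter configuration):

```java
public class CrlfFixer {
    // Convert bare LF line endings to CRLF without doubling CRs that
    // are already there (the regex matches an existing CRLF first).
    static String toCrlf(String text) {
        return text.replaceAll("\r\n|\n", "\r\n");
    }

    public static void main(String[] args) {
        String unix = "line1\nline2";
        System.out.println(toCrlf(unix).contains("\r\n")); // true
    }
}
```

Running the file content through such a step before writing it out makes Notepad (which historically only honored CRLF) show the same line breaks as Word.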

  • Reg: File Content Conversion Grand Child

    hi experts
I am designing a FILE-to-FILE scenario. I have to do file content conversion for a structure with a grandchild. How do I specify the recordset structure?
    with regards
    suman.

    Hi suman,
Could you please give some more clarity on the exact issue you are facing?
From my understanding, you have an input file containing a grandchild (3 levels).
As far as I know, FCC is not possible if the structure contains more than 2 levels.
    or,
    if you want the target structure to contain a grand child ,then you can refer to the below blog.
    /people/riyaz.sayyad/blog/2008/05/20/xipi-convert-flat-file-to-deeply-nested-xml-structures-using-only-graphical-mapping
    Regards,
    Swetha.

  • Reg:File content conversion for Sender File Adaptor

    Hi all,
I would like to know how the file content conversion is written for the below-mentioned XML code. The flat file will have only the table name and the field name.
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:Mt_File xmlns:ns0="http:/file_to_idoc">
       <query_table>ztable1</query_table>
       <row_count/>
       <Fields>
          <item>
             <fieldname>zempno</fieldname>
          </item>
       </Fields>
    </ns0:Mt_File>
    regards
    priya

    First you create the conversion rules, then you create the XML structure accordingly, not the other way.
    online help will help you

  • Reg : date format conversion from dd.mm.yyyy to mmddyyyy

    hi ALL,
    is there any function module which can convert date format
    from <b>dd.mm.yyyy to mmddyyyy</b>.
    Thanks in advance

    Hi,
    Please check the following
    CONVERSION_EXIT_PDATE_INPUT Conversion Exit for Domain GBDAT: DD/MM/YYYY -> YYYYMMDD
    CONVERSION_EXIT_PDATE_OUTPUT Conversion Exit for Domain GBDAT: YYYYMMDD -> DD/MM/YYYY
    SCA1 Date: Conversion
    CONVERSION_EXIT_IDATE_INPUT External date INPUT conversion exit (e.g. 01JAN1994)
    CONVERSION_EXIT_IDATE_OUTPUT External date OUTPUT conversion exit (e.g. 01JAN1994)
    CONVERSION_EXIT_LDATE_OUTPUT Internal date OUTPUT conversion exit (e.g. YYYYMMDD)
    CONVERSION_EXIT_SDATE_INPUT External date (e.g. 01.JAN.1994) INPUT conversion exit
    CONVERSION_EXIT_SDATE_OUTPUT Internal date OUTPUT conversion exit (e.g. YYYYMMDD)
    TB01_ADDON
    CONVERSION_EXIT_DATEX_INPUT
    CONVERSION_EXIT_DATEX_OUTPUT
    Hope this would surely help you out.
    Thanks and regards,
    Varun.
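For comparison, outside ABAP the same dd.mm.yyyy to mmddyyyy reshuffle is straightforward with java.time (just a sketch; the conversion exits above remain the ABAP-side answer):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateReformat {
    static final DateTimeFormatter IN  = DateTimeFormatter.ofPattern("dd.MM.yyyy");
    static final DateTimeFormatter OUT = DateTimeFormatter.ofPattern("MMddyyyy");

    // Parse a dd.MM.yyyy date string and re-emit it as MMddyyyy.
    static String convert(String ddmmyyyy) {
        return LocalDate.parse(ddmmyyyy, IN).format(OUT);
    }

    public static void main(String[] args) {
        System.out.println(convert("17.02.2009")); // prints 02172009
    }
}
```

Parsing through LocalDate (rather than slicing the string) also validates that the input really is a date.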

  • Reg: FILE CONTENT CONVERSION

    Hi,
    Is any one explain me about File content conversion in SAP XI and how to understand the requirements.Please tell me the procedure how to do it.
    With Regards,
    Kiran.
    Edited by: ravikiran123 on Sep 6, 2010 8:37 AM

    Hi,
    Refer the below links:
http://saptechnical.com/Tutorials/XI/Contentconversion/page1.htm
    http://help.sap.com/saphelp_srm40/helpdata/en/bc/bb79d6061007419a081e58cbeaaf28/content.htm
    http://help.sap.com/saphelp_srm40/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/frameset.htm
Search SDN before posting.

  • ASCII-EBCDIC conversion

    Hello,
I am looking for a tool which can convert from ASCII to EBCDIC (primarily) and
from EBCDIC to ASCII.
(I need to store some columns in EBCDIC because a COBOL program has to read them.)
    Regards,
    Laszlo

    Thank you !
But how can I use other character sets instead of the common ones?
Can I load them into the database?
    For example:
create table OIT015 (
  ENTSTAMP TIMESTAMP(6) not null,
  ORDNR NUMBER(11) not null,
  ORDERDATEN VARCHAR2(3800) not null
)
tablespace PLINK_HBCI
pctfree 10
pctused 40
initrans 1
maxtrans 255
storage (
  initial 64K
  minextents 1
  maxextents unlimited
);
insert into OIT015 (ENTSTAMP, ORDNR, ORDERDATEN)
values (to_timestamp('11-11-0011 11:11:11.000011', 'dd-mm-yyyy hh24:mi:ss.ff'), 2, 'X');
update oit015 set orderdaten = convert('00000396375160', 'WE8EBCDIC500') where ordnr = 2;
commit;
My colleagues wrote the same data into a test table in EBCDIC with a COBOL program.
Here is the result:
    select dump(t.orderdaten,16), t.* from oit015 t
    0,0,3,96,37,51,60,c     11.11.11 11:11:11.000011     1111111111     
    f0,f0,f0,f0,f0,f3,f9,f6,f3,f7,f5,f1,f6,f0     11.11.11 11:11:11.000011     2     ?????óuöó÷onö?
My colleagues said that 'WE8EBCDIC500' is a special display format on the mainframe.
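The f0,f0,... bytes in that dump are exactly what you would expect: EBCDIC digits '0'-'9' live at 0xF0-0xF9. A quick Java cross-check, assuming the JRE ships the Cp500 charset (Java's name for code page 500, which WE8EBCDIC500 corresponds to):

```java
import java.nio.charset.Charset;

public class EbcdicDigits {
    // Hex-dump a string's bytes in EBCDIC code page 500.
    // Assumption: Cp500 is available in this JRE (full JDKs include it).
    static String hexDump(String s) {
        StringBuilder out = new StringBuilder();
        for (byte b : s.getBytes(Charset.forName("Cp500"))) {
            if (out.length() > 0) out.append(',');
            out.append(String.format("%x", b & 0xFF));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(hexDump("00000396375160"));
        // f0,f0,f0,f0,f0,f3,f9,f6,f3,f7,f5,f1,f6,f0 -- same bytes as the dump above
    }
}
```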

  • Reg : Exchange rate conversion

    Hi all ,
I am using the function module 'READ_EXCHANGE_RATE' to find the exchange rate.
I am using the parameters below:
date = sy-datum,
FCURR = 'CAD',
TCURR = 'USD'.
I am getting the exchange rate as '1.496', coming from table TCURR.
But for this CAD-to-USD pair I have 5 entries in the table with different exchange rates, based on the exchange rate type. In this case, what needs to be done to get the correct exchange rate?
    Thanks,
    Krishna.

The rate type distinguishes the entries: M = average rate, G = bank buying rate, B = bank selling rate.
For conversion we use type 'M'.
    Regards,
    Amarjit

  • Reg: Unicode- Coverage Analyzer- (SCOV)

    Hi
Can anyone tell me how we can use the Coverage Analyzer (transaction SCOV) as a tool for Unicode enabling? What sequence of steps should be followed?
Any information, documentation, links, or examples that elucidate this are highly appreciated.
    Thank you in anticipation.
    Regards,
    Sravan.

    Hi,
    I haven't used SCOV much.
In SCOV, the 'global view' gives you a set of figures and graphical representations of system performance data. One of these graphs shows the percentage of all programs that are Unicode-enabled (one axis is time in months and the other the percentage of Unicode-enabled programs). This is just to see how many programs have been Unicode-enabled in the system. As long as you are not at 100%, there is work to be done.
    To force unicode-enabling (there may be programs which are unicode compliant but have not been set to unicode), you can check the profile parameter abap/unicode_check to enforce unicode syntax checks. (This has nothing to do with SCOV per se).
    Now, how best to use SCOV when an object is being unicode-enabled and you need to make sure it is working fine?
    After unicode-enabling the program you need to test it to make sure it works. But after running one or more tests successfully, how do you know whether all lines of code in the program have been covered in the tests?
SCOV can give you information on whether all blocks of the code have been tested or not. If all blocks of a program/object have been tested after Unicode-enabling (as reported in SCOV), then you can assume your testing to be complete. If, on the other hand, some blocks (let us say certain subroutines in the program) have not been executed, you can see this in the detailed view in SCOV and accordingly run other tests to ensure that this code is tested.
    I am sorry I can not give you a step by step for this, if you post your mail-id, I can probably mail you a document which talks about this in a little (not much) detail.
    cheers,
