Cyrillic characters getting converted to special characters.

I am facing a problem where a Cyrillic character sent from a JSP to a servlet gets converted into some other special character. I am using request.setCharacterEncoding("UTF-8") in the servlet, and on the JSP I am using <%@page pageEncoding="UTF-8" contentType="text/html; charset=UTF-8"%>. I am facing this problem on both JBoss and WebSphere application servers.
Please help me in solving this problem.

How do you send the characters - using POST?
I have tried both, i.e. GET and POST.
Can you post the code in your servlet that you use to retrieve and display the Cyrillic characters in question?
Just using request.getParameter() to get the value. It is showing question marks, i.e. ??????
And finally, can you post an example of a Cyrillic string and the corresponding incorrect string you see in the servlet?
Can post Cyrillic characters here... :-) But I am giving normal characters only...
It started working in WebSphere when I changed the JVM settings, i.e. I gave the encoding scheme to the JVM as well. But it is still not working on JBoss 4.0.2.
Any help will be highly appreciated.
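For reference, here is a sketch of the usual fix: force the request encoding before any parameter is read, for example with a servlet filter. The class name is illustrative, not from the original post, and the filter would still need a mapping in web.xml (not shown).

// Hedged sketch: set UTF-8 on every request before any parameter is decoded.
// The class name and the corresponding web.xml filter mapping are illustrative.
import java.io.IOException;
import javax.servlet.*;

public class CharacterEncodingFilter implements Filter {
    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Must run before the first call to request.getParameter(); once the
        // container has parsed the POST body, setCharacterEncoding has no effect.
        request.setCharacterEncoding("UTF-8");
        response.setContentType("text/html; charset=UTF-8");
        chain.doFilter(request, response);
    }

    public void destroy() { }
}

Note that this only covers POST bodies. GET parameters come from the query string, which is decoded by the connector, so on JBoss 4.x (embedded Tomcat) the HTTP connector in server.xml usually also needs URIEncoding="UTF-8".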

Similar Messages

  • Problem with special characters getting converted into '#'  in portal

    Hi All,
    Special characters like " - ", " ' " and " ` " are getting converted into '#' when the smartform is displayed in the portal.
    I have a smartform in which the text contains the above special characters, and they are displayed as ' # ' when the form is invoked in the portal. We are not able to trace the problem.
    I need your input on this ASAP.
    Thanks in Advance,
    Sowmya

    This could be caused by a conversion routine on the field.
    In SAPscript you can use &FIELD-NAME(K)& to ignore the conversion routine.
    To see this and other options, in the text editor of the script select the menu option Insert > Symbols > New, then type the field name and select the desired checkbox options; the field will be placed in the script with the relevant formatting characters.
    Andrew

  • Characters like ƒ get converted to ¿ on insert to table

    Characters like ƒ get converted to ¿. What should be done in order to support special characters?
    I am using Oracle 10g

    You need to check the database character set. If it is US7ASCII, for example, then only a very restricted character set is supported.
    You can run a query like select * from nls_database_parameters
    to see all NLS (National Language Support) parameters. Have a look at NLS_CHARACTERSET.
    You should also read the Globalization Guide for your Oracle release to see what characters are supported and what options you have.
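    A minimal JDBC sketch of that check, assuming the Oracle JDBC driver is on the classpath; the connection URL and credentials below are placeholders, not taken from the original post:

    // Hedged sketch: read the database character set via JDBC.
    // Host, service name, user and password are placeholders.
    import java.sql.*;

    public class NlsCharsetCheck {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT parameter, value FROM nls_database_parameters "
                     + "WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET')")) {
                while (rs.next()) {
                    // A US7ASCII database cannot store characters like ƒ;
                    // AL32UTF8 covers the full Unicode repertoire.
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }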

  • Convert HTML special characters to String

    Hi,
    I'm looking for an easy way to convert HTML special characters (like "&#246;") to a String.
    Any Ideas how implement this? Thanks

    Well, I'm assuming that when you work with Java you get a ? instead of the character you want, and that you are reading from and writing to this HTML file. If not, maybe this can still help:
    BufferedReader inFile = new BufferedReader(new InputStreamReader(new FileInputStream(fileName), "ISO8859-1"));
    PrintWriter outFile = new PrintWriter(new OutputStreamWriter(new FileOutputStream(fileName2), "ISO8859-1"));
    Basically you need to make sure that you define ISO8859-1 as the encoding.
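    As a sketch of the entity-to-text conversion itself, assuming only numeric character references such as &#246; need handling (named entities like &amp; would need a separate lookup table); the class and method names are illustrative:

    // Hedged sketch: decode numeric HTML character references, e.g. "&#246;" -> "ö".
    // Named entities such as &amp; or &ouml; are not handled in this minimal version.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class NumericEntityDecoder {
        private static final Pattern NUMERIC_REF = Pattern.compile("&#(\\d+);");

        public static String decode(String input) {
            Matcher m = NUMERIC_REF.matcher(input);
            StringBuffer out = new StringBuffer();
            while (m.find()) {
                int codePoint = Integer.parseInt(m.group(1));
                m.appendReplacement(out,
                        Matcher.quoteReplacement(new String(Character.toChars(codePoint))));
            }
            m.appendTail(out);
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(decode("Gr&#246;&#223;e"));  // prints "Größe"
        }
    }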

  • Converting some special characters from non-ASCII to ASCII

    HI Gurus,
    My users copied some text from another file and pasted it into SAP in infotype HRP1000, in a text field. On the front end of the SAP screen the text looks fine as English text.
    My program retrieves that text and sends it to AL11. If I open the file in AL11, the text looks like 'E'#$%', something like that, with cap symbols on top of each character (we cannot type those symbols from my PC).
    I developed a program to clean up my file from non-ASCII to ASCII using methods of class CL_ABAP_CONV_IN_CE (create), but it is failing to convert into ASCII. If it is not possible to convert those special symbols, at least I have to remove them from the file.
    Please help me with a solution.
    Thanks,
    Regards,
    Akila

    Hi,
    Found this code... try this:
    DATA: BEGIN OF w_char,
            ascii TYPE x,
          END OF w_char.

    MOVE 'A' TO w_char.
    WRITE: / w_char-ascii.
    MOVE 66 TO w_char-ascii.
    WRITE: / w_char.
    Regards
    GK.

  • Prevent ASCII codes like • from getting converted to special chars in Java

    Hi
    How can I prevent some ASCII codes like '&#149;' from being converted to special characters like '?' in Java?
    This happens when I try to retrieve a text message containing such a code from my previous page and display it in the browser.
    Can anyone tell me how to prevent this?
    Thanks,
    Ravi

    Hi Gil,
    Thanks for your suggestion.
    Is there something different I need to do to read the characters correctly?
    My input HTML-format text file has '&#149;' codes in it, which should actually display as a bullet in the browser.
    My Java program takes this input message as a string and sets it in a Java bean. I am trying to display the same bean information on the next page, where I see a '?'
    instead of a bullet. Strangely, when I view the source in the browser, I see it has
    converted '&#149;' to '.' (a bullet character) on my 2nd page. I think this is the reason it appears as a '?' character.
    My input HTML-format text file has the following header info:
    '<meta http-equiv="Content-Type" content="text/html; charset=utf-8">'
    Let me know if I need to do some encoding or escaping of these codes to prevent them from getting converted to '.' in the source or '?' in the browser.
    Thanks, Ravi
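    A hedged sketch of one way out, assuming the underlying issue is that &#149; is a Windows-1252 code rather than a valid Unicode character reference: mapping it to the Unicode bullet reference &#8226; before the bean value is rendered lets a UTF-8 page show the bullet. The class and method names are illustrative.

    // Hedged sketch: replace the Windows-1252-style reference &#149; with the
    // Unicode bullet reference &#8226; (U+2022) so a UTF-8 page renders it.
    public class BulletReferenceFixer {

        public static String fixBullets(String html) {
            return html.replace("&#149;", "&#8226;");
        }

        public static void main(String[] args) {
            System.out.println(fixBullets("Point one &#149; point two"));
        }
    }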

  • LSMW: Conversion of special characters into #

    Hi all,
    I am uploading a file in the step 'READ DATA' in LSMW, and when I check the result in the next step, 'DISPLAY READ DATA', I see that the upload (I do not know if LSMW uses WS_UPLOAD or GUI_UPLOAD or another function) converts a special character that we use in Portugal
    in the file:           Fabricação
    in LSMW after Upload:  Fabrica##o
    The file was an Excel which I saved as txt (tab delimited)
    Who can help??????
    Thanks
    Picasso

    Hi Picasso,
    LSMW uses the 'GUI_UPLOAD' function module to upload data.
    The GUI_UPLOAD FM has an import parameter CODEPAGE, whose purpose is to take care of such special characters.
    In the include that is called for the READ DATA step, you may well find that the codepage is not suitable for your Portuguese language.
    Do one thing: write a BDC, call the GUI_UPLOAD FM and pass the CODEPAGE number.
    You will get the codepage number from table TCP00.
    Here is an example that takes care of special Japanese characters:
    CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename                = l_ws_path
          filetype                = 'ASC'
          has_field_separator     = '#'
          codepage                = '6300'
        TABLES
          data_tab                = i_aenr
        EXCEPTIONS
          file_open_error         = 1
          file_read_error         = 2
          no_batch                = 3
          gui_refuse_filetransfer = 4
          invalid_type            = 5
          no_authority            = 6
          unknown_error           = 7
          bad_data_format         = 8
          header_not_allowed      = 9
          separator_not_allowed   = 10
          header_too_long         = 11
          unknown_dp_error        = 12
          access_denied           = 13
          dp_out_of_memory        = 14
          disk_full               = 15
          dp_timeout              = 16
          OTHERS                  = 17.

  • Files get converted to temporary files

    hi !!
    I am having an odd problem: whenever I save my Captivate work
    and close it, the file automatically gets converted to a temporary
    file. Even if I rename it back to .cp, save the file all over
    again and close Captivate, the file again gets converted to .tmp. I
    am using Captivate 3 on Windows XP Service Pack 2. Please help.


  • Excise invoice not getting converted into INR from USD for an IMPORT PO

    Hi All
    I have posted one import PO and did MIRO for customs and CVD. The MIRO doc was in USD, which gets converted into INR as per the exchange rate maintained in OB08.
    Then I did MIGO with reference to the commercial invoice of the MIRO. The MIGO doc is posted in INR, as per the exchange rate maintained in OB08, for customs duty, freight and base value.
    For the excise invoice, CVD is captured, but in the excise tab the base amount is shown in INR and the currency also shows INR, which is correct.
    Total Basic duty (BED), Cess and HECess amounts are coming in USD but the currency shows INR. Even after posting the document they are not converted into INR; the same USD amounts got posted to the Excise GLs with the currency still showing INR.
    For example:--
    PO value shows
    Base Price    - 2040.20 USD
    Customs       -  204.20 USD
    CVD           -  179.54 USD
    Cess on CVD   -    3.59 USD
    H Cess on CVD -   22.44 USD
    Now in excise invoice in Excise item tab
    Base value - 99960 INR (converted at the rate of 49, exchange rate type M)
    BED    - 179.54 INR
    CESS   -   3.59 INR
    HECESS -  22.44 INR
    Here only the currency has changed to INR, but the amounts are still the USD figures. After posting the document the same result appears, hence the registers are getting updated with wrong values.
    Can any one suggest what to do to correct this.
    Regds
    Mukta
    Customization for CVD is maintained in the Excise config (company code settings); the exchange rate type for CVD is M.
    OB08 is maintained for USD to INR for the current date.

    HI Tej
    All the Import cycle has created three documents.
    1. MIRO doc for customs duties and CVD: MIRO posted in USD. I can see the doc in both currencies, INR and USD, via the accounting doc display currency. I do not have any issue here.
    2. MIGO doc for customs duty and freight: MIGO posted in INR. I can see the accounting doc in both USD and INR via the accounting document display currency, and also in the MIGO purchase order history tab.
    3. Excise invoice for CVD: the excise invoice is posted differently.
    In this document, please note that the amounts for CVD are as per the USD figures, but the currency shown is INR, which is a mismatch.
    In the excise customization for the company code settings, exchange rate type M is maintained. OB08 is also maintained for USD/INR.
    Condition type JCV1 is also marked for currency conversion and accruals.
    Is there any patch or note I need to apply to get this converted, or is this a process issue?
    Regds
    Mukta

  • PR to PO conversion error - Partial line items not getting converted

    Hi All,
    Automatic PR to PO conversion ( ME59N ) is not happening for few line items.
    For the same Article, site and vendor combination, it gets converted on some days and not on others.
    No changes were made to any of them.
    1. Source List is created for the Article & Vendor combination. Validity is fine. Fixed vendor option is checked.
    2. Info record is maintained.
    3. Auto PO option is checked in Vendor Master
    4. No release Strategy
    Will the Purchase Group have any impact in the conversion?
    Kindly share your views.
    Thanks in advance.
    Regards,
    Bhaskar

    It's going to be a data selection problem. If it was a problem with conversion you would see something recorded in the log (assuming you have the log parameter set to show all messages).
    Make sure the Auto PO flags are set in the correct purchasing org and site. Does your requisition have a purchasing org?
    What selection criteria do you have set in ME59N? You mention that the problem only occurs on certain days, so it could be related to the release date.
    The only other thing I could suggest is authorisation checks. Are you approved for the correct activity types (the activity for ME59N is not 'create'), plants, purchasing groups, etc.?
    Also keep in mind everything I have said may not be relevant to the IS-Retail solution you are using.  I can't be certain if there are any differences in this area when you are running that solution.  It might be worth asking the question in that forum http://scn.sap.com/community/retail

  • How did my PDF files get converted from 'open with Adobe Reader' to 'open with Adobe Acrobat'?  And if I have a 'free' Acrobat account, why does it not open?  When I click on the account it asks me to pay $89.99.  I never wanted Acrobat.  How can I get -

    How did my stored files get converted from 'open with Adobe READER' to 'open with Adobe ACROBAT'? How can I get them re-set to open with 'Adobe Reader'?
    Please reply to my e-mail:   [email protected]

    It sounds as if you downloaded Adobe Acrobat Pro. If you did, uninstall it. Then repair Adobe Reader.
    The free Acrobat account has no connection to any of this.

  • Inline Text Boxes getting converted to inline tags on text reflow in parent

    Hi,
    We extract text from a text box in an InDesign CS3 document. The text box can have inline text boxes, and the text of the inline boxes is extracted as well.
    The text extracted from all the text frames is placed into an XML and processed. After processing, it is reflowed into the boxes.
    When we try to import the text into the parent text frame that has inline boxes, the inline text boxes are converted to inline tags.
    Can we keep the inline text boxes as they are, rather than having them converted to inline tags?
    Thanks.

    Hello Harbs,
    Thanks for the reply. By recreating the boxes, do you mean that we draw the boxes again and place them in the main text frame as inline boxes?
    But the problem again is that if we attach the XML element of the parent element to the text frame, it will flow all the contents including the inline boxes, which will again convert them to inline tags.
    Or is it that we flow the parent text and then look for all the inline tags and draw boxes in place of these tags? But the question here is how do we differentiate between a genuine inline tag and one which was converted from an inline box.
    Can you help by describing the solution you have in mind?
    Thanks

  • Mangal Fonts and Tunga Fonts in (MS Home and Student 2010) Word 2010 not getting converted into PDF

    Why do I get a totally blank PDF file whenever I try to convert any Word document (Microsoft Office Home and Student 2010, Version 14.0.6029.1000) containing either Mangal (Hindi) or Tunga (Kannada) fonts into a PDF file (using Adobe Acrobat X Standard - English, Version 10.1.4, program file size 1790 MB, complete/full version), even after formatting and reinstalling all the necessary software (Microsoft Windows XP Home Edition, Version 2002, Service Pack 3; Windows XP Service Pack 3, Version 20080414-031525, program file size 9.08 MB)?
    I'm using
    01) Microsoft Windows XP, Home Edition, Version 2002, Service Pack 3
    02) Computer: Intel(R) Pentium(R) 4 CPU 3.00 GHz 3.00 GHz, 504 MB of RAM
    03) Windows XP Service Pack 3, Version: 20080414-031525, Program File Size: 9.08 MB
    04) Microsoft Office Home and Student 2010, Version: 14.0.6029.1000, Product ID: 82503-388-0792296-26607, Program File Size: 490 MB
    05) Adobe Acrobat X Standard - English, Version: 10.1.4, Program File Size: 1790.00 MB, Complete/Full/All Version/Features has/have been installed
    06) Windows Internet Explorer 8, Version: 20090308.140743, Program File Size: 4.20 MB
    Any Word 2010 document file that has been converted into a PDF file unfortunately does not show either the Mangal (Hindi) fonts or the Tunga (Kannada) fonts.
    The PDF file unfortunately remains completely blank.
    When I convert an Excel workbook containing Tunga (Kannada) fonts into a PDF file, some of the Tunga fonts in the PDF are legible and some are unfortunately illegible (superimposed / overwritten on each other).
    Perhaps I may get the same result if I try to convert an Excel workbook containing Mangal (Hindi) fonts into a PDF file, with some Mangal fonts legible and the rest illegible (superimposed / overwritten on each other).
    Tanveer
    Syed Tanveeruddin, Karnataka, India

    My chronological sequence of steps to create a Word 2010 Document or an Excel 2010 Workbook:
    I first create a Word document file or an Excel workbook file and save it with an appropriate file name.
    Then I right-click on the closed Word document file or Excel workbook file in the folder and convert it into a PDF, retaining the original file name.
    The font list of Microsoft Word 2010 states that both Mangal and Tunga are TrueType fonts and that these fonts will be used on both printer and screen.
    If there is any English text content at the beginning, it gets converted into PDF only until a Mangal (Hindi) or Tunga (Kannada) font starts.
    The remaining English text content does not get converted into PDF once a Mangal (Hindi) or Tunga (Kannada) font starts. If the entire matter is purely (100 per cent) English text, it gets fully converted into a PDF file without any problem.

  • File not getting converted to the .EXT type

    Hello Experts,
    Please help me with an issue ...
    Actually I am writing a mailing program in which newly generated invoices are sent to the user.
    The mailing is working fine.
    The file which the user gets in his mailbox should be a .EXT type file, from which he will download it and upload it to the Airtel portal, from where it is sent by SMS to the different numbers present in that file.
    But the problem is that the file is not getting converted to the .EXT type, and when we try to upload it to the Airtel portal it says the file is not in UTF-8 format.
    To check whether the conversion to a file type works at all: when I try to convert the file to .TXT type it gets converted to text format, but it does not work with .EXT type when passing 'EXT' in the doc_type of the FM.
    I checked the SCOT transaction for any relevant setting but didn't find anything useful.
    Please help me to resolve the issue.

    Hi joel,
    From this thread
    Shared Files - problems after upgrading APEX 3.0 to 3.1
    I have taken your words:
    In my development of the browser cache support for static files, I did notice some unexplainable behavior with IE and Firefox. When uploading static files and then testing them, it almost seemed like the browser would request the resource only some of the time (and then get the HTTP 304). If an Expires tag is not computed, I know Firefox will compute one itself.
    As per my understanding, is it that cache_expire should be calculated?
    I could not get that line. Could you please explain it to me?
    Thanks in advance.
    bye
    Srikavi.

  • Blank field getting converted into wrong date

    Dear All,
    I have one date field in SAP table in which there are some entries and some blank entries.
    When the proxy sends this data from ECC, the blank entries are converted into 00000000, and through XI this 00000000 date entry gets converted into a wrong date like 30.11.0002 in the file.
    I have taken the data type as string; do I need to take the data type as date?
    My doubt is: if there is no entry in the table, why is it converted to 00000000, and after passing through XI why is it converted into 30.11.0002?
    Regards

    Hi ,
    My date format is like this:
    source: yyyyMMdd
    target: dd.MM.yyyy
    So if the source is 00000000, the target should come out as 00.00.0000, but it is converted into 30.11.0002.
    Everywhere in the file, values of 00000000 come out as 30.11.0002.
    Regards
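    For reference, a hedged sketch of why this can happen: a lenient java.text.SimpleDateFormat (lenient is the default) normalizes year 0, month 0, day 0 backwards to 30 November of the year 2 BC, which prints as 30.11.0002. Guarding against the all-zero value before converting, for example in a user-defined function, avoids it; the method below is illustrative, not the poster's actual mapping.

    // Hedged sketch: reproduce the 30.11.0002 effect and guard against it.
    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    public class ZeroDateDemo {

        public static String convert(String source) throws ParseException {
            // Treat the all-zero "no date" marker specially instead of parsing it.
            if (source == null || source.trim().isEmpty() || "00000000".equals(source)) {
                return "";  // or "00.00.0000", depending on what the target file expects
            }
            SimpleDateFormat in = new SimpleDateFormat("yyyyMMdd");
            SimpleDateFormat out = new SimpleDateFormat("dd.MM.yyyy");
            in.setLenient(false);  // reject impossible dates instead of rolling them over
            return out.format(in.parse(source));
        }

        public static void main(String[] args) throws ParseException {
            // Lenient parsing of all zeros rolls back to 30 November of 2 BC:
            SimpleDateFormat lenient = new SimpleDateFormat("yyyyMMdd");
            System.out.println(new SimpleDateFormat("dd.MM.yyyy")
                    .format(lenient.parse("00000000")));  // prints 30.11.0002
            System.out.println(convert("20240131"));       // prints 31.01.2024
            System.out.println(convert("00000000"));       // prints an empty string
        }
    }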
