UTF-8 Problem

I am using UTL_FILE to write data into an Excel file. The problem is that it shows up as garbage ("Ã¤") because the data contains UTF-8 characters. I have tried utl_file.fopen_nchar and the other nchar functions, but it is still not displayed correctly.
Database version is 9.2.0.5.0
NLS_CHARACTERSET is UTF8
NLS_NCHAR_CHARACTERSET is UTF8
Thanks in Advance!
Rk

It appears that there was an extra ? at the end of the URL. Try
What's the best way to export UTF8 data into Excel?
instead.
Justin

Similar Messages

  • UTF problem in a legacy library

    Hello,
    I'm new to the Java world, but I have been programming for 10 years already.
    I have a Java library on which we build our servlets; it receives MMS messages from the mobile operator.
    The MMS messages are sent as SOAP multipart messages.
    Our library was created once upon a time by a third party, and we have no access to either the source or the company, as they went out of business.
    The library function receives the SOAP message, parses it and gives us the fields in a more OO manner.
    The catch is, it works perfectly as long as the MMS message subject line contains only ASCII characters.
    If there is anything else in it, the whole string gets mangled.
    I've checked the network dump and saw that the subjects were perfectly valid UTF-8 strings, but since we have no access to the code, we can't set the encoding of the reader it uses.
    So, how can I recover the original string from the garbage created by the library?
    Any ideas?
    Best regards,
    Salih Goncu

    Can you post the bytes before and the characters after? Preferably as 8-bit hex bytes and 16-bit hex characters?
    A good sign is if they match up numerically...
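    As a minimal, self-contained sketch of that comparison (the sample string and class name are illustrative), printing the same data both as 8-bit hex bytes and as 16-bit hex characters:

        import java.nio.charset.StandardCharsets;

        public class HexDump {
            public static void main(String[] args) {
                String suspect = "Ã¤"; // whatever the library handed back
                // assumption: the string was built by decoding the wire bytes as ISO-8859-1,
                // so re-encoding with ISO-8859-1 recovers those original bytes
                byte[] raw = suspect.getBytes(StandardCharsets.ISO_8859_1);
                StringBuilder bytes = new StringBuilder();
                for (byte b : raw) {
                    bytes.append(String.format("%02X ", b));
                }
                StringBuilder chars = new StringBuilder();
                for (char c : suspect.toCharArray()) {
                    chars.append(String.format("%04X ", (int) c));
                }
                System.out.println("bytes: " + bytes); // e.g. C3 A4
                System.out.println("chars: " + chars); // e.g. 00C3 00A4
            }
        }

    If the two listings line up numerically like that, the bytes are valid UTF-8 that was decoded with a single-byte charset, and new String(raw, StandardCharsets.UTF_8) should give the original text back.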

  • Cfexchange and character encoding

    Hi!
    I'm developing an application using the new cfexchange-tags.
    The app outputs a list of calendar events from an MS Exchange 2003
    server. It works nicely, as long as the event subjects don't
    contain any non-english characters. As soon as the subject contains
    for example "umlaut"-characters like ä or ö, the output
    is bogus.
    Should I, for example, make an appointment in Outlook with
    the subject "Kimi Raikkonen" (without umlauts), the subject, when
    retrieved with cfexchangecalendar, displays OK:
    {ts '2008-01-19 10:00:00'} Kimi Raikkonen
    Then, if I use the umlaut-characters in the name "Kimi
    Räikkönen", the resulting page reads:
    {ts '2008-01-19 12:00:00'}
    =?iso-8859-1?Q?Kimi_R=E4ikk=F6nen?=
    Ok, I thought, this is the good old ISO/UTF-problem, so I
    changed my Outlook-settings to utf-8 and tried again (with ä
    and ö), the result was:
    {ts '2008-01-19 14:00:00'}
    =?UTF-8?B?S2ltaSBSw6Rpa2vDtm5lbg==?=
    - I run the ColdFusion8-server on linux, with the Cumulative
    Hot Fix 2 installed
    - I've tried the ExchangeServerLanguage-attribute in the
    cfexchangeconnection tag to no avail
    (This is nothing new. Every distribution since the first MX has had some problems with character sets other than lower ASCII.)
    I would be grateful for any help in this!

    Hi,
    if I use the cfexchangecalendar tag to retrieve calendar data with German umlauts, I receive text phrases such as "=?iso-8859-1?B?dGVzdPbk?=".
    If I use the cfexchangemail tag with the same Exchange connection, everything is perfect and the German umlauts are displayed correctly!
    (CF 9.0.1, Exchange 2007, SBS 2008, IIS7)
    Thanks
    Olaf
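    For what it's worth, strings of the form =?iso-8859-1?Q?...?= and =?UTF-8?B?...?= are MIME "encoded-words" (RFC 2047), i.e. the raw subject header that never got decoded. If nothing on the ColdFusion side decodes them, a small Java helper using the JavaMail library (typically already present on a ColdFusion server for cfmail) can; a sketch:

        import javax.mail.internet.MimeUtility;

        public class DecodeSubject {
            public static void main(String[] args) throws Exception {
                // the raw subject exactly as cfexchangecalendar returned it
                String raw = "=?iso-8859-1?Q?Kimi_R=E4ikk=F6nen?=";
                // decodeText() understands RFC 2047 encoded-words (both Q and B forms)
                String decoded = MimeUtility.decodeText(raw);
                System.out.println(decoded); // Kimi Räikkönen
            }
        }

    From CFML this could presumably be reached via createObject("java", "javax.mail.internet.MimeUtility").decodeText(subject), assuming the class is on the classpath.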

  • Problem with UTF-8 Japanese on Win XP

    Hi,
    I have a client/server application with a server coded in C++ running on an HP workstation, and a Java GUI running in a web context via the Java Web Start mechanism on a PC with Windows XP SP2.
    When I log into the HP workstation with the Japanese SHIFT_JIS locale and download the application, Japanese characters are displayed correctly on my PC.
    However, when I log into the HP workstation with the Japanese UTF-8 locale and launch the application, the characters aren't displayed correctly; only rectangles are displayed. When the GUI is launched on the HP workstation itself, it displays correctly.
    Can anyone help me resolve this problem?
    Regards,

    Both SHIFT_JIS and UTF-8 are character encodings. SHIFT_JIS encodes a superset of JISX-0208 to double-byte codepoints and JISX-0201 to single byte codepoints. UTF-8 encodes all of Unicode.
    The problem here is that you are relying on "default" encoding for a client-server application. This only works when the default encoding is the same on both ends.
    Go through your code, looking for every place where you convert to and from Strings, including reading from and writing to I/O streams, and replace the methods you are using with equivalent methods that let you specify an encoding.
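    As a minimal sketch of what that means in practice, assuming both ends agree on UTF-8 (the stream objects stand in for whatever sockets or files the application really uses):

        import java.io.*;
        import java.nio.charset.StandardCharsets;

        public class ExplicitEncoding {
            // write text with an explicit charset instead of the platform default
            static void send(OutputStream out, String message) throws IOException {
                Writer writer = new OutputStreamWriter(out, StandardCharsets.UTF_8);
                writer.write(message);
                writer.flush();
            }

            // read text with the same explicit charset as the sender
            static String receive(InputStream in) throws IOException {
                BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
                return reader.readLine();
            }
        }

    The same applies to String.getBytes() and new String(byte[]): always use the overloads that take a charset.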

  • Problem viewing/updating MySQL-database with utf-8 charset

    System specs:
    Tomcat 5.5.4
    JDK1.5.0
    MySQL 4.1
    Connector/J 3.0.16
    JSTL 1.1
    I am trying to build a guestbook web app and want to be able to store Swedish characters (available in latin1), but I decided to give utf-8 a go since it would support other languages as well. I use a <Resource> in my context XML to get the connection.
    This simply will not work; when I use the <%@ page contentType="text/html; charset=utf-8"%> declaration, the Swedish characters in the raw HTML get replaced by question marks. The output of select queries looks OK though, as long as it doesn't contain the character '�'.
    When I remove the page declaration, the queries show only question marks where Swedish characters should be.
    The same goes for updating my database. Swedish characters get garbled.
    I have tried the following without success:
    *adding &useUnicode=true&characterEncoding=UTF-8 to the Resource-url -- causes my web-app to fail loading.
    *adding a <filter> to the web-apps web.xml as suggested at  http://www.javaworld.com/javaworld/jw-09-2004/jw-0906-unicode_p.html -- caused another web-app error, resulting in it not being loaded.
    *setting the form attribute accept-charset -- resulted in no improvement.
    Right now I'm half desperate and half fed-up. Is this a common problem with MySQL? Should I use another database? Or perhaps it is the Connector/J Driver that's messing things up.
    I'll appreciate any help I get greatly.

    Hello. Maybe it is not so interesting to ask after a year, but did you ever get a final solution to this problem from the command-line perspective? Or does somebody else know a solution?
    Anyway, I have a similar problem. I use MySQL 5 with the latin1 charset, and I can insert the letters ��� into my database normally via the mysql command line and they show perfectly. But the problem is my web application.
    I can insert all characters into the database and retrieve them normally via the web app, but if I need to use the mysql command line for queries it fails, because those special letters appear as something like ��. I still need to make queries on the mysql command line as admin. I have also tried different drivers, like
    Class.forName("org.gjt.mm.mysql.Driver"); or
    com.mysql.jdbc.Driver, etc. The connector is 3.0.17.
    I have also tried to change to utf-8 and then changed the MySQL default encoding to match... I have used the request.set... charset methods... I'm out of ideas... help?
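    For what it's worth, a minimal sketch of the kind of character-encoding filter the JavaWorld article linked above describes (class name illustrative, Servlet API only); it forces incoming request parameters to be decoded as UTF-8 before anything else reads them:

        import java.io.IOException;
        import javax.servlet.*;

        public class CharacterEncodingFilter implements Filter {
            public void init(FilterConfig config) {}

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                // must run before any call to request.getParameter(...)
                request.setCharacterEncoding("UTF-8");
                chain.doFilter(request, response);
            }

            public void destroy() {}
        }

    Mapped to /* in web.xml, it runs before the JSPs. Separately, a common reason that adding &useUnicode=true&characterEncoding=UTF-8 to a <Resource> url makes the web app fail to load is that ampersands inside an XML attribute have to be escaped as &amp; - worth ruling out before blaming the driver.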

  • UTF-8 encoding problem in HTTP adapter

    Hi Guys,
    I am facing a problem with UTF-8 multi-byte character conversion.
    Problem:
    I am posting data from SAP CRM to a third-party system using XI as middleware. I am using the HTTP adapter for communication from XI to the third-party system.
    In the HTTP configuration I have given the XML code as UTF-8 in the XI payload manipulation block.
    I am trying to post Chinese characters from SAP CRM to the third-party system, but junk characters arrive there. My assumption is that it is double encoding.
    I have checked the XML messages in message monitoring in XI, and I can see the Chinese characters in the XML files. But in the third-party system they show up as junk characters.
    Can anyone please help me with this issue?
    Please let me know if you need more info.
    Regards,
    Srini

    Srinivas
    Can you please go through SAP Note 856597, question no. 3, which may resolve your issue? Also, have you checked SAP Notes 761608, 639882, 666574, 913116 and 779981, which might help?
    ---Satish

  • UTF-8 encoding problem on Solaris

    Hello all.
    I am using WebLogic 9.2 and I am facing a very weird problem regarding encoding. I fetch data from the DB (Informix, btw) and forward it as UTF-8 to JSPs. I have set everything up successfully in my web.xml and weblogic.xml, and all JSPs include the page directive for UTF-8. When I deploy my application on a Windows 2000 machine everything goes smoothly, but when the deployment happens on a Solaris machine my JSPs show "?" instead of letters. Has anyone faced this problem before? Could you please point me towards a solution? This thing has taken me days and days and I still haven't managed to find one.
    Thanx in advance
    axel

    Hi,
    Start the app and hook an Eclipse debug session to it. Check whether the encoding problem occurs while retrieving from the DB or while generating the response. If the issue is on the DB side, you may need to define the encoding on the connection (I am not sure what driver you are using, but you should be able to check this). If the issue is while generating the response, just XML-escape every character.
    Regards,
    LG
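    As a sketch of the "XML escape every character" suggestion (plain Java, BMP characters only): emit anything outside ASCII as a numeric character reference, so the byte encoding of the generated page no longer matters:

        public class XmlEscaper {
            static String escapeNonAscii(String s) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < s.length(); i++) {
                    char c = s.charAt(i);
                    if (c < 128) {
                        sb.append(c);                                // plain ASCII passes through
                    } else {
                        sb.append("&#").append((int) c).append(';'); // e.g. é becomes &#233;
                    }
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                System.out.println(escapeNonAscii("déjà vu")); // d&#233;j&#224; vu
            }
        }

    Numeric references are understood by both XML and HTML, so the JSP output renders the same regardless of which charset the Solaris JVM defaults to.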

  • OC4J 9.0.2.0.1 UTF-8 problems

    We were using OC4J 1.0.2.2.1 with default-charset="UTF-8" in global-application.xml, and UTF-8 was working fine.
    We recently upgraded to 9.0.2.0.1 and the same setting is not working. We also tried setting the default-charset in application-deployments/**/wev/orion-web.xml. It did not work.
    Please respond if anyone is experiencing the same problem and knows a workaround for it.
    Thanks

    Srinivas,
    Are you seeing this with servlets or JSPs or both?
    We recently got a similar problem reported with JSPs.
    thanks,
    -Prasad

  • Problem setting Unicode (utf-8) in http header using tomcat

    Hi:
    I am trying to set a UTF-8 file name in an HTTP header using the following code:
    response.setContentType("text/html; charset=utf-8");
    response.setHeader("Content-disposition", "attachment; filename=解決.zip");
    // I actually have the UTF-8 file name here to set in the header, and I know the name is correct;
    // I also looked into the response object's MimeHeaders and saw the header is set correctly.
    Then I write the content of the zip file using the ServletOutputStream.
    The problem is that the file name is not displayed correctly in the save/open prompt that pops up next. Using Fiddler I found out that the response header is wrong:
    Content-disposition: attachment; filename=&#65533;zn&#65533;�.zip
    I am using Tomcat 5.0.28. Any idea how to get this working?
    Thanks in advance!

    You are setting the charset for the content to be UTF-8. (That is why the method is called setContentType.) But HTTP headers are not part of the content and so that has no effect on the header.
    The original specification for HTTP only allowed US-ASCII characters in headers. It is possible that more recent versions have features that allow for non-ASCII header data, but I don't know if that is the case or how you would use those features if they exist.
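    For completeness: later HTTP specifications (RFC 5987 / RFC 6266) define an extended filename* parameter that carries the charset inside an ASCII-only value, which works around the restriction described above. A sketch, with an illustrative helper name; browser support varies and was limited in browsers of that era:

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;
        import javax.servlet.http.HttpServletResponse;

        public class Attachments {
            // produces e.g. Content-Disposition: attachment; filename*=UTF-8''%E8%A7%A3%E6%B1%BA.zip
            static void setDownloadName(HttpServletResponse response, String fileName)
                    throws UnsupportedEncodingException {
                // URLEncoder uses '+' for spaces, but the header value needs %20
                String encoded = URLEncoder.encode(fileName, "UTF-8").replace("+", "%20");
                response.setHeader("Content-Disposition", "attachment; filename*=UTF-8''" + encoded);
            }
        }

    Either way the header itself stays pure ASCII, which is consistent with the point made above.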

  • Robohelp HTML, JBoss: UTF-8 Problem

    Hello
    We use RoboHelp 8 to create a web help for our JBoss-based project. Unfortunately, all files that RoboHelp creates use UTF-8, but the default of our JBoss is ISO-8859-1.
    So, if I open the help using Firefox in the web container I receive only "" and the encoding in the browser is set to ISO. IE8 can display the help correctly; I suppose this browser recognizes the different character encoding.
    The only thing I can do to make it work in FF is to change the RoboHelp output HTML files and add these lines to each(!) htm file of the help:
    <?xml version="1.0"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    To show the German special characters like ä, ö, ü I also have to remove the charset in the <meta> tag of the output files...
    But every time I re-generate the output I have to redo this. Is there some way to change the character encoding in my RoboHelp source? Unfortunately it seems impossible to change these things in master files or RoboHelp pages in the UI.
    Does somebody have a workaround?
    Best regards
    Stefan

    Hi, Stefan. Yes, others have seen this problem - specifically with your ISO charset.
    There isn't a fix in RoboHelp 8. The solution appears to be to enable UTF-8 on your application server.
    There are several posts about the problem on this forum:
    http://forums.adobe.com/message/2095270#2095270
    http://forums.adobe.com/message/961768#961768
    These two seem most useful, but if you're really technical, you can search for ISO-8859-1 on the Adobe Forums. If you want just RoboHelp posts, enter  "ISO-8859-1 +RoboHelp".
    HTH,
    Elisa

  • More Cf + MySQL 5 + Unicode/UTF-8 Problems

    Here is the problem:
    I am using a MySQL database that stores data in Unicode/UTF-8 (the website/database are in Lao).
    Settings:
    CF 7.0.2
    MySQL 5.0.26
    MySQL Defaults: latin1_swedish_ci collation, latin1 encoding
    Database Defaults: utf8_general_ci collation, utf8 encoding
    These are the same on my local computer and on the host (HostNexus).
    The only difference is that my CF uses the mysql-connector-java-3.1.10 driver while the host uses a MySQL 3.x driver (the org.gjt.mm.mysql.Driver class).
    On my local computer everything works just fine, even without any extra CF DSN settings (i.e. the connection string and/or JDBC URL do not need the useUnicode=true&characterEncoding=UTF-8 strings added to show Lao text properly).
    On the host, even with useUnicode=true&characterEncoding=UTF-8 added (I have even tried adding &connectionCollation=utf8_unicode_ci&characterSetResults=utf8 to the line as well), I only get ??????? instead of Lao text from the db.
    The cfm pages have <cfprocessingdirective> and <cfcontent> tags set to utf-8 and also have the html <meta> set to utf-8. All static Lao text in the cfm pages shows just fine.
    Is this a problem with the MySQL driver used by the host? Has anyone encountered this before? Is there some other setting I have to employ with the org.gjt.mm.mysql.Driver class on the host?
    Please help!

    Thanks for your reply/comments, Paul!
    I also think it must be the db driver used on the host... I just don't understand why the DSN connection string (useUnicode=true&characterEncoding=UTF-8 [btw, it doesn't really matter whether it is utf8 or UTF-8 - it works with both; I think the proper way actually is UTF-8, since that is the encoding's name used in Java...]) wouldn't work with it??? I have the hosting tech support totally puzzled over this.
    Don't know if you can help any more, but I have added answers to your questions in the quoted text below.
    quote:
    Sabaidee wrote:
    > Here is the problem:
    > I am using a MySQL database that stores data in Unicode/UTF-8 (the website/database are in Lao).
    well that's certainly different.
    I mean, they are in the Lao language, not that they are hosted in Laos.
    > Database Defaults: utf8_general_ci collation, utf8 encoding
    how was the data entered? how was it uploaded to the host? could the data have been corrupted loading or uploading to the host?
    The data was entered locally, then dumped into a .sql file using the utf8 charset, and the dump was then imported into the db on the host, once again with the utf8 charset. I know the data in the database is correct: when I browse the db tables with phpMyAdmin, all Lao text in the db is displayed in proper Lao...
    > The only difference is that my CF uses the mysql-connector-java-3.1.10 driver while the host uses a MySQL 3.x driver (the org.gjt.mm.mysql.Driver class).
    and does that driver support mysql 5 and/or unicode?
    I am sure it does support MySQL 5, as I have other MySQL 5 databases hosted there and they work fine. I am not sure if it supports Unicode, though... I am actually more and more sure it does not... The strange thing is, I am not able to find the Java class that driver is stored in to try and test it on my local server... I have asked the host to email me the .jar file they are using, but have not heard back from them yet...
    > On my local computer everything works just fine, even without any extra CF DSN settings (i.e. the connection string and/or JDBC URL do not need the useUnicode=true&characterEncoding=UTF-8 strings added to show Lao text properly).
    and what happens if you do use those? what locale for the local server?
    If I use just that line, nothing changes (apart from the 2 mysql variables, which then default to utf8 instead of latin1) - everything works just fine locally.
    The only difference I have noticed between the MySQL setup on my local comp and on the host is that on my comp the character_set_results variable is not set (shows [empty string]), but on the host it is set to latin1. When I set it to latin1 on my local comp using &characterSetResults=ISO8859_1 in the JDBC URL string, I get exactly the same problem as on the host: all ??????? instead of Lao text from the db. If it is not set, or set to utf8, everything works just fine.
    For some reason, we are unable to make it work on the host: whatever you add to the JDBC URL string or enter in the Connection String box in CF Admin is simply ignored...
    Do you know if this is a particular problem of the driver used on the host?
    > The cfm pages have <cfprocessingdirective> and <cfcontent> tags set to utf-8 and also have html <meta> set to utf-8. All static Lao text in cfm pages shows just fine.
    db driver then.
    I think so too...
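    For reference, a sketch of the complete Connector/J URL being discussed (host, database and credentials are made up); useUnicode, characterEncoding and characterSetResults are documented Connector/J connection properties, and whether the older org.gjt.mm.mysql.Driver on the host honours them is exactly the open question here:

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class LaoDbConnect {
            public static void main(String[] args) throws Exception {
                Class.forName("com.mysql.jdbc.Driver"); // the host would load org.gjt.mm.mysql.Driver instead
                String url = "jdbc:mysql://localhost:3306/laodb"
                        + "?useUnicode=true"
                        + "&characterEncoding=UTF-8"
                        + "&characterSetResults=utf8";
                Connection conn = DriverManager.getConnection(url, "cfuser", "secret");
                System.out.println("character_set_results should now be utf8 for this session");
                conn.close();
            }
        }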

  • UTF-16 parsing problem in XI

    Hi! We are having a problem with XML encoding. An external system A sends us an XML message over HTTP. An example of the XML is:
    <?xml version="1.0" encoding="UTF-16"?>
    <Envelope version="01.00">
    </Envelope>
    System A uses UTF-16 for the encoding. We wrote a Java mapping to manipulate the data; it writes the data out in UTF-8 and uses UTF-8 in the header. Then we do a message mapping. The trace shows that the Java mapping executes successfully; however, the message mapping throws a ParserException. This is a copy of the trace log:
    <Trace level="1" type="T">RuntimeException during appliction Java mapping com/sap/xi/tf/_Can_HRXML_to_RFC_Req_</Trace>
      <Trace level="1" type="T">com.sap.aii.utilxi.misc.api.BaseRuntimeException: Fatal Error: com.sap.engine.lib.xml.parser.ParserException: Invalid char #0x0(:main:, row:3, col:0) at com.sap.aii.mappingtool.tf3.Transformer.checkParserException(Transformer.java:41) at com.sap.aii.mappingtool.tf3.Transformer.start(Transformer.java:79) at com.sap.aii.mappingtool.tf3.AMappingProgram.execute(AMappingProgram.java:232) at com.sap.aii.ibrun.server.mapping.JavaMapping.executeStep(JavaMapping.java:63) at com.sap.aii.ibrun.server.mapping.Mapping.execute(Mapping.java:91) at com.sap.aii.ibrun.server.mapping.SequenceMapping.executeStep(SequenceMapping.java:55) at com.sap.aii.ibrun.server.mapping.Mapping.execute(Mapping.java:91) at com.sap.aii.ibrun.server.mapping.MappingHandler.run(MappingHandler.java:78) at com.sap.aii.ibrun.sbeans.mapping.MappingRequestHandler.handleMappingRequest(MappingRequestHandler.java:88) at com.sap.aii.ibrun.sbeans.mapping.MappingRequestHandler.handleRequest(MappingRequestHandler.java:63) at com.sap.aii.ibrun.sbeans.mapping.MappingServiceImpl.processFunction(MappingServiceImpl.java:79) at com.sap.aii.ibrun.sbeans.mapping.MappingServiceObjectImpl0.processFunction(MappingServiceObjectImpl0.java:131) at sun.reflect.GeneratedMethodAccessor294.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:324) at com.sap.engine.services.ejb.session.stateless_sp5.ObjectStubProxyImpl.invoke(ObjectStubProxyImpl.java:187) at $Proxy42.processFunction(Unknown Source) at sun.reflect.GeneratedMethodAccessor568.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:324) at com.sap.engine.services.rfcengine.RFCDefaultRequestHandler.handleRequest(RFCDefaultRequestHandler.java:95) at com.sap.engine.services.rfcengine.RFCJCOServer.handleRequestInternal(RFCJCOServer.java:113) at com.sap.engine.services.rfcengine.RFCJCOServer$ApplicationRunnable.run(RFCJCOServer.java:171) at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37) at java.security.AccessController.doPrivileged(Native Method) at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:95) at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:159)</Trace>
    When I run SXMB_MONI to show the processed XML message, the message from system A isn't shown correctly. This is what it says:
    The XML page cannot be displayed
    Switch from current encoding to specified encoding not supported. Error processing resource 'file:///C:/Documents and Setti...
    <?xml version="1.0" encoding="UTF-16"?>
    I tried using an HTTP post tool to post the XML message; if I changed UTF-16 to UTF-8, the message could be processed successfully.
    How do we resolve this encoding problem? The external system A could change the encoding scheme to UTF-8, but the header is hard-coded and will remain UTF-16 whatever scheme it actually uses. Any input will be appreciated.
    Julie

    Hi,
    I have a similar problem.
    Input is xml with UTF8 encoding, output is xml with UTF8 encoding.
    In spite of that simple situation, Transformer returns bad output:
    TransformerFactory transformerFactory = TransformerFactory.newInstance();
    Transformer transformer = transformerFactory.newTransformer();
    DOMSource source = new DOMSource(document);
    StreamResult result = new StreamResult(outputStream);
    /* Properties properties = transformer.getOutputProperties();
    properties.setProperty(OutputKeys.ENCODING, "UTF-8");
    transformer.setOutputProperties(properties); */ // this didn't help
    transformer.transform(source, result);
    //source --> correct in UTF8
    //result --> after transformation is incorrect in UTF8
    And the result is:
    An invalid character was found in text content. Error processing resource 'file:///C:/Documents and Settings/rlatta/Local S...
    <?xml version="1.0" encoding="utf-8" ?><SDS_XSD_ZPPM_POB><pob>070</pob><skratkaPobocky>SE</...
    Does anybody know some answer?
    P.S. We recently installed support packages SAP_BASIS 13 and SAP_ABA 13. Before that it worked, as far as I know.
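    If it helps, a minimal variant of the snippet above with the output encoding pinned explicitly; setOutputProperty is the single-property form of the commented-out code, and writing to an OutputStream rather than a Writer lets the transformer actually apply it. This is a sketch, not a guaranteed fix for the parser error shown:

        import java.io.OutputStream;
        import javax.xml.transform.OutputKeys;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;

        public class DomToUtf8 {
            static void write(Document document, OutputStream outputStream) throws Exception {
                Transformer transformer = TransformerFactory.newInstance().newTransformer();
                // declare UTF-8 in the prolog and use it for the bytes that are written
                transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
                transformer.transform(new DOMSource(document), new StreamResult(outputStream));
            }
        }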

  • Message Mapping Problem with UTF-16LE Encoded XML

    Hello,
    we have the following scenario:
    IDoc > BPM > HTTP Sync Call > BPM > IDoc
    The response message of the HTTP call is an XML file with a UTF-16LE processing instruction. This response should then be mapped to a SYSTAT IDoc. However, the message mapping fails with "...XML Parser: No data allowed here ...".
    So obviously the XML is not considered well-formed.
    When taking a look at SXMB_MONI, the following message appears: "Switch from current encoding to specific encoding not supported.....".
    The strange thing, however, is that if I save the response as an XML file and use the same file in the message mapping test tab, it executes successfully.
    I also tried using a Java mapping to switch encodings before executing the message mapping, but the error remains.
    Could the problem be that the codepage UTF-16LE is not installed on the PI system? Any ideas on that?
    Thank you!

    Hi,
    thank you for your answer.
    That is what I have tried to achieve. I apply the Java conversion mapping when receiving the response message - I tried converting the response to UTF-16 and to UTF-8, but neither has helped solve the problem.
    I guess that using adapter modules is not an option either, as they would modify the request message but not the response, right?
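    As a sketch of what such a Java conversion step could look like (assuming the response payload really is UTF-16LE bytes); the XML declaration has to be rewritten as well, otherwise the parser is still told to expect UTF-16:

        import java.nio.charset.StandardCharsets;

        public class Utf16leToUtf8 {
            static byte[] convert(byte[] utf16lePayload) {
                // decode with the encoding the sender actually used
                String xml = new String(utf16lePayload, StandardCharsets.UTF_16LE);
                // strip a byte-order mark if one is present
                if (!xml.isEmpty() && xml.charAt(0) == '\uFEFF') {
                    xml = xml.substring(1);
                }
                // fix the prolog so it no longer claims UTF-16
                xml = xml.replaceFirst("(?i)encoding=([\"'])UTF-16(LE)?\\1", "encoding=\"UTF-8\"");
                // re-encode for the downstream message mapping
                return xml.getBytes(StandardCharsets.UTF_8);
            }
        }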

  • UTF-8, Unicode, XML and windows problems

    Hi there,
    I'm developing an application which uses a lot of Russian text.
    This Russian text is stored in XML and can be sent to a server remotely.
    I use the standard javax.xml libraries to parse the XML files and DbUnit's XMLWriter to generate XML strings.
    The XML returned by the DbUnit stuff is UTF-8, but it's inside a UTF-16 string.
    So when I generate XML and print it out, I get something that looks like the following:
    ��������� ������������ ���������?����
    That's ok, because I can stick that straight into a file when writing files and it works.
    But the problem comes when sending the XML to the server.
    The sending implementation I use must be able to send both Java-generated UTF-16 and XML read as UTF-8.
    So I convert the XML from utf-8 to utf-16 using the following:
    byte[] stringBytesISO = isoString.getBytes("ISO-8859-1");
    utf8String = new String(stringBytesISO, "UTF-8");
    And that works perfectly on my Linux system.
    However, when I run it on Windows, it only seems to convert some of the characters:
    &#1055;&#1088;&#1080;&#1074;&#1099;&#1095;&#1085;&#1099;&#1084; &#65533;?&#1085;&#1086;&#1084; &#1079;&#1072;&#65533;?&#1085;&#1091;&#1090; &#1076;&#1086;&#1088;&#1086;&#1075;&#1080; &#1076;&#1086; &#1074;&#1077;&#65533;?&#1085;&#1099;,
    Does anyone know what's going wrong here?

    jammers1987 wrote:
    > I use the standard javax.xml libaries to parse the xml files and DBunits XMLWriter to write generate XML strings.
    DbUnit is a testing tool; are you saying you're using it in a production system? Ideally, you should use the same library to write the XML as you do to read it, but you definitely shouldn't be using DbUnit in this context.
    > The XML returned by the DBunit stuff is UTF-8, but its inside a UTF-16 string.
    That should never happen. XML is just text, and text can either be in the form of a Java string, or it can be stored externally using a text encoding like UTF-8. Never mind that Java strings use the UTF-16 encoding; you don't need to know or mention that. Encodings only come into play when you're communicating with something outside your program, like a file system or a database.
    When you generate the XML, you specify that the encoding is UTF-8. When you read the XML, you specify that the encoding is UTF-8. That's all.
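    A minimal illustration of that advice, assuming the XML ends up in a file (the same idea applies to any stream): the Java string itself carries no encoding, only the bytes written out and read back do:

        import java.io.*;
        import java.nio.charset.StandardCharsets;

        public class XmlFileRoundTrip {
            // encode explicitly as UTF-8 on the way out
            static void write(String xml, File file) throws IOException {
                try (Writer out = new OutputStreamWriter(new FileOutputStream(file), StandardCharsets.UTF_8)) {
                    out.write(xml);
                }
            }

            // decode explicitly as UTF-8 on the way back in
            static String read(File file) throws IOException {
                StringBuilder sb = new StringBuilder();
                try (Reader in = new InputStreamReader(new FileInputStream(file), StandardCharsets.UTF_8)) {
                    char[] buf = new char[4096];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        sb.append(buf, 0, n);
                    }
                }
                return sb.toString();
            }
        }

    No ISO-8859-1 round trip is needed anywhere, and nothing depends on the platform default charset, which is presumably what differs between the Linux and Windows machines.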

  • UTF-8 encoded JSPs compilation problem

    Hi,
              I'm using Weblogic 9.0 Beta. I have an XML-format UTF-8 encoded JSP (with the proper encoding declarations). I can see that this is compiled into a UTF-8 Java servlet by WebLogic.
              At the compilation to a class file though, the encoding is corrupted. I guess that the Java compiler is assuming a system-encoded (which would be ISO-8859-1) Java file instead of the actual UTF-8 encoding.
              This problem did not occur with WebLogic 8.1.
              I have tried to explicitly tell the Java compiler to treat the source files as UTF-8 in weblogic.xml, i.e.
              <jsp-param>
              <param-name>compileFlags</param-name>
              <param-value>-encoding UTF8</param-value>
              </jsp-param>
              but that had no effect.
              Anyone else noticed this?
              I assume that correct behaviour is for WebLogic to preserve encoding from JSP to servlet to class file, rather than for me to set encoding in weblogic.xml. Is that correct?
              Is there a workaround?
              Thanks for any help you can offer!

    Solved
    It is about Tomcat's character encoding, not about the code.
    For more info:
    [http://wiki.apache.org/tomcat/Tomcat/UTF-8]

  • HTTP Test Tool Umlaut (Special Character) Problem iso-8859-1 utf-8

    Hi folks,
    I have a problem in an HTTP to IDoc scenario. The configuration works, but when I test it using the Test Message tool from the Runtime Workbench I get the following problem:
    I post an IDoc XML with charset iso-8859-1; when it arrives as an IDoc in the business system, the German umlauts are displayed very cryptically:
    ä = ä
    ü = ü
    and so on ....
    When I post the XML with the UTF-8 charset it works. What can I do to handle this?
    Thank you
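    For reference, that pattern (each umlaut turning into two characters) is what you get when UTF-8 bytes are decoded as ISO-8859-1 somewhere along the path; a minimal Java demonstration, purely to illustrate the mechanism:

        import java.nio.charset.StandardCharsets;

        public class MojibakeDemo {
            public static void main(String[] args) {
                String original = "ä";
                // UTF-8 encodes 'ä' as the two bytes C3 A4 ...
                byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
                // ... and reading those bytes back as ISO-8859-1 yields two characters
                String misread = new String(utf8Bytes, StandardCharsets.ISO_8859_1);
                System.out.println(original + " = " + misread); // prints: ä = Ã¤
            }
        }

    So the charset declared on the HTTP post needs to match the bytes actually sent, which is consistent with the observation that posting as UTF-8 works.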

    Hi,
    maybe this document is helpful:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42
    and also this thread:
    Character translation error in Mapping Lookup API (RFC)
    Regards
    Patrick
