Character Encoding Menu unresponsive

I updated the app to the newest version today. When I tried changing the character encoding on a website that's not displaying properly, I found that selecting a different encoding did nothing except check the circle next to it.
The only thing I could do from there was select Cancel or press Return on my phone. It's getting a bit annoying to browse a few websites now, since I can't fix the encoding.

Thanks for the report. It looks like this regression slipped in with Firefox 29, unfortunately. I will file a bug.

Similar Messages

  • Detecting character encoding?

    I'm running into a problem with Safari (I've got Safari 4 beta installed, but I think this was a problem with 3 as well) where it won't automatically detect character encodings. These are sites in Japanese which load and display correctly in Firefox 2, but in Safari they come up as gibberish. When I select Shift JIS they show correctly in Safari. Other pages come up as expected. I must emphasise this: the pages show correctly in Firefox 2 without having to do anything, but in Safari 4 they come up incorrectly and I must change the encoding manually.
    The pages in question are here: http://genki.japantimes.co.jp/self/kanji.en.html
    Is there some way to force Safari to display these pages correctly without having to manually select the correct encoding from the Character Encoding menu? Have I missed an option somewhere?

    The pages in question are here: http://genki.japantimes.co.jp/self/kanji.en.html
    The authors of these pages have failed to include the HTML code required by international standards to tell browsers the text is in Japanese. You should ask them to do this; leaving it out does not make them look very smart.
    While Firefox has a system that can sometimes detect encodings automatically, Safari does not. When a page lacks the required HTML code, Safari uses the default encoding set in Preferences > Appearance. So if you set that to Shift JIS, these pages should come up correctly. This could make other pages that lack the correct HTML display incorrectly, but such pages should be very rare these days.
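    For reference, the missing declaration is a meta tag in the page's <head>, like the one below (standard HTML; Shift_JIS is the charset these particular pages would need). With that in place, browsers would not have to guess:
    <meta http-equiv="Content-Type" content="text/html; charset=Shift_JIS"/>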

  • How to change character encoding in firefox 4 without enabling menu bar

    Occasionally I need to try various character encodings so the characters display correctly. However, with the new Firefox 4 interface, the only place I can find to do that is to re-enable the old-style Menu Bar and then select View > Character Encoding.

    You can find Character Encoding in the Firefox > Web Developer submenu.

  • Where is the Character Encoding option in 29.0.1?

    After the new layout change, the developer menu doesn't have the Character Encoding options anymore. Where the hell is it? It's driving me mad...

    You can also access the Character Encoding menu via the View menu (Alt+V, then C)
    See also:
    *Options > Content > Fonts & Colors > Advanced > Character Encoding for Legacy Content

  • Wrong character encoding from flash to mysql

    Hi, I'm experiencing problems with character encoding not working correctly when sending from Flash to MySQL. What I am doing is building a contact form in Flash, which sends the values to a PHP file that takes them and inserts them into a table. As I'm using Icelandic characters, I need the character encoding to be either latin1 or utf8 in MySQL, or at least I think so. But it seems that Flash or the PHP document isn't sending in the same format as I have selected in MySQL, because all the special Icelandic characters come out scrambled in the MySQL table. Firefox tells me, though, that the HTML document containing the Flash movie is using UTF-8.

    I don't know anything about Icelandic characters, but Flash generally really likes UTF-8, so it should be sending that if that is what it is starting with.
    You aren't using any kind of useCodePage? That will mess it up.
    Are you sure that the input method is Icelandic?
    In the testing environment, can you list the variables (from the debug menu) and see if they look right? If they do, then Flash is reading them correctly and the problem must be coming in farther downstream.
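    One more thing worth checking (an assumption about this setup, since the PHP side isn't shown): the MySQL connection itself has a character set, separate from the table's. If the PHP script never declares it, MySQL may reinterpret the UTF-8 bytes arriving from Flash, which produces exactly this kind of scrambling. A minimal sketch of the statement to issue once per connection, before the INSERT:
    SET NAMES 'utf8';  -- tell MySQL the client sends and expects utf8
    The value has to match what the Flash/HTML side actually uses – UTF-8, according to Firefox.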

  • Character encoding again

    Hi, I haven't gotten any answer, so I'll try asking again...
    I have created a page from Data Controls.
    I have created a parameter form and a table. The detail is shown at the bottom of the page (the current row is shown through #bindings...).
    Everything works fine: when I fill something into the parameter form, the table is filtered by that criterion, and when the current row is changed, the detail also changes. But when I enter a Czech character into the parameter form, it works pretty badly.
    The table is correctly filtered, but when I perform any other action after filtering, the table shows no rows.
    I found what causes this problem. It is the Property in bindings that holds the value for this parameter. When I first call bindings.findXXX.execute, the Property's value is "č", for example. That is correct, and the table shows the filtered rows. After I perform another action (I think it does not depend on what the action is; changing the current row, for example), the value in that Property has changed to "?" instead of "č", and because of this the filter is applied again and the table shows no rows. I have checked everything and the character encoding is set to UTF-8. Is this the problem? Am I missing some settings?
    1) menu: Tools > Preferences > Environment
    2) project properties: Compiler > Character Encoding
    3) in jspx
    <?xml version='1.0' encoding='UTF-8'?>
    <jsp:directive.page contentType="text/html;charset=UTF-8"
    pageEncoding="UTF-8"/>
    <afh:head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
    </afh:head>
    Are there others? I don't know; I set these settings long ago...
    If someone knows, please give me a hint. Thanks for the help.
    Jdeveloper 10.1.3.0.4(SU5)

    Check the regional language settings on the machine where your application server is running. I faced this problem but was able to resolve it by modifying
    NLS_LANG = AMERICAN_AMERICA.WE8ISO8859P1
    NLS_LANG is under HKEY_LOCAL_MACHINE ==> ORACLE
    WE8ISO8859P1 is the standard encoding for my application, which is developed in a local Indian language, and it works fine for me.
    Check this out; it may help.
    Amit

  • Character encoding in Web Dynpro source code

    Hi all,
    I want to use special characters like "äüß" in Web Dynpro Java code and can't find a way to specify the character encoding (ISO-8859-1) for the Developer Studio Web Dynpro source editor.
    After rebuilding or compiling, the code looks like "äüß".
    In the Java perspective source editor of Eclipse/Developer Studio there is a menu entry Edit > Encoding > ..., but there is none in the Web Dynpro perspective.
    Does anybody know how to set the encoding?
    Michael

    Hi Michael,
    Perhaps you can try converting your Unicode characters to Java String literals with this web site (http://www.snible.org/java2/uni2java.html).
    The characters you supplied convert to "\u00E4\u00FC\u00DF"; perhaps inserting that as the String will help.
    Best regards,
    Patrick.

  • Character encoding in Netweaver Developer Studio

    Hi all!
    I've migrated an EP5E project to P6 and it worked fine. But now I am using another workstation, and while trying to open a Java file of the migrated project I got
    "Error Encoding Problem: this file is unreadable using the UTF-8 character encoding".
    The java-file contains german characters like "ä".
    I'm using SAP NetWeaver Developer Studio Version: 2.0.5
    Build id: 200404200353
    Does anyone know how to set the character encoding in NetWeaver Developer Studio?
    Thank You

    I've found the solution:
    Changing the encoding used to show the source
    To change the encoding used by the Java editor to display source files:
    With the Java editor open, select Edit > Encoding from the menu bar.
    Select an encoding from the menu, or select Others and, in the dialog that appears, type in the encoding's name.
    Note: this setting affects only the way the source is presented.
    To change the encoding that the Java editor uses when saving files, specify a text file encoding preference under Window > Preferences > Workbench > Editors.

  • Character Encoding is changing random

    Hello,
    For a short while I have been having the following problem:
    When I am using Firefox, after a while characters on pages are shown as 'boxes'. Setting View -> Character Encoding back to ISO-8859-1 makes the characters display correctly again.
    I do not understand why the character encoding randomly changes to UTF-8, when I have selected ISO-8859-1 and set automatic recognition to false.
    In the options menu I have also set 'ISO-8859-1' as the default character encoding.
    I am hoping someone can tell me why it changes at random, and why the same page can be shown correctly 10 times but the 11th time the character set has changed. And of course, how can I solve this problem?

    I do know a website can determine the character encoding, but the strange part of the problem I am having is that the same page can be shown correctly 10 times, but on the 11th time characters like é and ë are shown as 'blocks'/'question marks'.
    Is there an explanation for that behaviour?

  • FF character encoding issue in Mageia 2 ?

    Hi everyone,
    I'm running Mozilla Firefox 17.0.8 in a KDE distro of Linux called Mageia 2. I'm having character encoding problems with certain web pages, meaning that certain icons, like the ones next to menu entries (Login, Search box, etc.) and in section headlines, don't appear properly. Instead they appear either as some Arabic character or as little grey boxes with numbers and letters written in them.
    I've tried experimenting with different encodings: Western (ISO 8859-1), (ISO 8859-15), (Windows 1252), Unicode (UTF-8), Central European (ISO 8859-2), but none of them does the job. Currently the character encoding is set to UTF-8. The same web page in Chrome (UTF-8) gives no such problem.
    Can you help me, please?

    Thank you!
    I solved my problem. However, I find the fonts are too small on certain web pages when compared to Chrome (see attached pictures of nytimes.com).
    Chrome's font size is set to "Medium".

  • Why, after all these years, can't Thunderbird auto-detect character encoding

    Judging by all the existing messages and complaints about this, not to mention erroneous posts that say the problem is solved when it isn't, I have to conclude Mozilla either doesn't believe this is a problem or doesn't care to fix it. The bottom line is that there is no way to tell Thunderbird to automatically display emails in the character encoding they were written in. I could understand cases where the headers are not properly filled in, but I see tons of emails in which the encoding is plainly there in the headers within the message source. You can force it, but if you do so via the menu View -> Character Encoding -> UTF-8 (for example), it won't "stick" if you view another message. But who would want it to "stick" permanently anyway? What the average user really wants is to be able to toggle View -> Character Encoding -> Auto Detect from its default "off" to simply "on", and not have to bother with it anymore.
    This is a problem that seems to have gone on forever, and it NEVER happens with other email clients. If there is some backdoor way to actually make autodetect work, I'd appreciate knowing about it. But more important, I think ALL users would appreciate it if it were not some secret "backdoor" setting, but a simple global menu choice for all accounts. Can Mozilla please fix this problem once and for all?

    You said...
    ''Thunderbird is supposed to be using the encoding in the mail.''
    I figured it "should"; I'm just reporting that it doesn't.
    You said...
    ''Setting auto detect to on disables that.''
    Please explain. I've looked at every setting I can find and there is no way to set auto detect to "ON". I DID try setting it to "universal" in an attempt to solve the problem, but I have since restored it to "off", because the universal setting doesn't help.
    you said...
    ''"Based on your earlier response I assume you need to press the F10 key to see the tools menu you were refered to." ''
    No... I never said that anywhere. I DID refer to Menu -> View -> Character Encoding, and I did refer to right-clicking on individual folders to get to the properties dialog and the general information tab. But F10 doesn't do anything.
    You said...
    ''I have examines dozens of mails in my inbox and each honours the character encoding set in the HTML''
    Well, mine NEVER did. A short example from an email I got today is pretty much representative of all the mail I get from Gmail...
    --089e013a0572a067a404fc73ceda
    Content-Type: text/plain; charset=UTF-8
    Ok, very good. Thank you. Phoenix sent you a friend request on Facebook by
    the way. Talk to you soon.
    --089e013a0572a067a404fc73ceda
    Content-Type: text/html; charset=UTF-8
    Content-Transfer-Encoding: quoted-printable
    <p dir=3D"ltr">Ok, very good. Thank you. Phoenix sent you a friend request=
    =C2=A0 on Facebook by the way.=C2=A0 Talk to you soon.</p>
    --089e013a0572a067a404fc73ceda--
    See those occurrences of "=C2=A0"? Each one displays as a strange character, a capital A with a curved line over it. If I manually set my default encoding to UTF-8, the weird characters go away. If I leave it as Western, there is nothing I can do to tell Thunderbird to "auto detect".
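    For what it's worth, those bytes confirm the diagnosis: =C2=A0 is the quoted-printable form of the two-byte UTF-8 sequence for a no-break space (U+00A0), and read as Western (windows-1252) the 0xC2 byte is exactly 'Â', the capital A with the curved line. A small sketch demonstrating both readings (Java, purely illustrative):
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class NbspDemo {
        public static void main(String[] args) {
            byte[] bytes = {(byte) 0xC2, (byte) 0xA0}; // the =C2=A0 from the message source
            // Decoded as UTF-8: a single no-break space, as the sender intended
            System.out.println((int) new String(bytes, StandardCharsets.UTF_8).charAt(0)); // prints 160
            // Decoded as Western (windows-1252): 'Â' followed by a no-break space
            System.out.println(new String(bytes, Charset.forName("windows-1252")));
        }
    }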
    Anyway, I suppose at this point that no one responsible for the product's code is seriously looking at my issue, which is why it's never been solved. If anyone does intend to help track it down and solve it, I'll be happy to provide all the examples and screenshots they ask for. Otherwise...

  • Does iDVD work with Mavericks? This is the first time I tried to burn a DVD using iDVD after upgrading to Mavericks. It seems to get stuck at the encoding menu stage and does nothing.

    I tried to burn a DVD using iDVD after upgrading to Mavericks, the first time I've tried since upgrading a few weeks ago. It seems to get stuck at the encoding menu stage and does nothing. Does anyone else have the same problem?

    I did not have a DVD handy to burn, but I have just created a small 2-minute video and created a disk image and also a VIDEO_TS folder; the encoding worked correctly, so it works with some items.
    You may get a better response in the iDVD forum:
    https://discussions.apple.com/community/ilife/idvd
    Regards

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A–Z without any other characters, accents, etc., we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128–255 was used for what were called DBCS (double-byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and is not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
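    For concreteness, those declarations are the same ones quoted verbatim elsewhere in this thread – an XML declaration or an HTML meta tag, with the charset naming whatever encoding the file is actually written in:
    <?xml version="1.0" encoding="UTF-8"?>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>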
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to open a double-byte sequence, giving us more characters. But wait, there's more. For the less common ones there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every Unicode character. And, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character that their text editor, using the codepage for their region, inserts as a single byte – a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
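    A tiny sketch of that mismatch (Java here, purely for illustration):
    import java.nio.charset.StandardCharsets;

    public class SharpS {
        public static void main(String[] args) {
            byte[] latin1 = "ß".getBytes(StandardCharsets.ISO_8859_1); // one byte: 0xDF
            byte[] utf8 = "ß".getBytes(StandardCharsets.UTF_8);        // two bytes: 0xC3 0x9F
            // A UTF-8 reader hitting the lone 0xDF sees an illegal lead byte – an error.
            // A Latin-1 reader of the UTF-8 bytes sees two characters instead of one:
            System.out.println(new String(utf8, StandardCharsets.ISO_8859_1)); // Ã plus a control char
        }
    }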
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write it out in your own format, but about files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
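    A minimal sketch of what that looks like in Java (the file name is illustrative; .NET has the same pattern with its Encoding classes):
    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class ExplicitEncoding {
        public static void main(String[] args) throws IOException {
            // Writing: the OutputStreamWriter pins the encoding instead of the platform default
            try (Writer w = new OutputStreamWriter(new FileOutputStream("out.txt"), StandardCharsets.UTF_8)) {
                w.write("encoding demo: üöäß\n");
            }
            // Reading: the InputStreamReader must name the same encoding
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(new FileInputStream("out.txt"), StandardCharsets.UTF_8))) {
                System.out.println(r.readLine());
            }
        }
    }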
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble to the file.)
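    For instance, a sketch using the JAXP serializer, one such complete encoder in the Java SDK (the element name is made up):
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.*;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;

    public class XmlEncoderDemo {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
            doc.appendChild(doc.createElement("greeting")).setTextContent("äüß");
            Transformer t = TransformerFactory.newInstance().newTransformer();
            // The serializer writes the XML declaration and encodes the bytes to match it
            t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
            t.transform(new DOMSource(doc), new StreamResult(System.out));
        }
    }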
    OK, you're reading and writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what those encoders built into the Java and .NET runtimes are designed to do. You read in and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account for text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A–Z without any other characters, accents, etc., we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    Pretty sure most Americans do not use character sets that only have a range of 0–127. I don't think I have ever used a desktop OS that did. I might have used some big-iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the UTF-8 character set anyway.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128–255 was used for what were called DBCS (double-byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small-volume data sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years, then a column with a size of 8 bytes is significantly different from one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and is not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to open a double-byte sequence, giving us more characters. But wait, there's more. For the less common ones there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every Unicode character. And, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode – all of Unicode – are based on ASCII. The representational format of UTF-8 is required to implement Unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character that their text editor, using the codepage for their region, inserts as a single byte – a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write it out in your own format, but about files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know Java files have a default encoding – the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble to the file.)
    OK, you're reading and writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what those encoders built into the Java and .NET runtimes are designed to do. You read in and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in Java with escaped Unicode characters which will fail to compile.
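    A minimal sketch of that trap (hypothetical snippet; \u0022 is the escape for the double-quote character, and it is substituted before the literal is tokenized):
    public class EscapeDemo {
        public static void main(String[] args) {
            // The next line, if uncommented, fails to compile: the escape becomes '"'
            // before lexing, leaving an unterminated string literal.
            // String s = "\u0022";
            String ok = "\u00E4\u00FC\u00DF"; // fine: prints äüß
            System.out.println(ok);
        }
    }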
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. Thus there is absolutely no point for someone who is creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    Another example: in high-volume systems, moving and storing bytes is relevant. As such, one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases reduces the total load on the system; in such systems incremental savings impact operating costs, and the resulting speed is a marketing advantage.

  • XML Character Encoding Using UTL_DBWS

    Hi,
    I have a database with WINDOWS-1252 character encoding. I'm using UTL_DBWS to call a web service method which echoes a given string. For this purpose, I do the following:
    DECLARE
        v_wsdl CONSTANT VARCHAR2(500) := 'http://myhost/myservice?wsdl';
        v_namespace CONSTANT VARCHAR2(500) := 'my.namespace';
        v_service_name CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'MyService');
        v_service_port CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'MySoapServicePort');
        v_ping CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'ping');
        v_wsdl_uri CONSTANT URITYPE := URIFACTORY.getURI(v_wsdl);
        v_str_request CONSTANT VARCHAR2(4000) :=
    '<?xml version="1.0" encoding="UTF-8" ?>
    <ping>
        <pingRequest>
            <echoData>Dev Team üöäß</echoData>
        </pingRequest>
    </ping>';
        v_service UTL_DBWS.SERVICE;
        v_call UTL_DBWS.CALL;
        v_request XMLTYPE := XMLTYPE (v_str_request);
        v_response SYS.XMLTYPE;
    BEGIN
        DBMS_JAVA.set_output(20000);
        UTL_DBWS.set_logger_level('FINE');
        v_service := UTL_DBWS.create_service(v_wsdl_uri, v_service_name);
        v_call := UTL_DBWS.create_call(v_service, v_service_port, v_ping);
        UTL_DBWS.set_property(v_call, 'oracle.webservices.charsetEncoding', 'UTF-8');
        v_response := UTL_DBWS.invoke(v_call, v_request);
        DBMS_OUTPUT.put_line(v_response.getStringVal());
        UTL_DBWS.release_call(v_call);
        UTL_DBWS.release_all_services;
    END;
    /
    Here is the SERVER OUTPUT:
    ServiceFacotory: oracle.j2ee.ws.client.ServiceFactoryImpl@a9deba8d
    WSDL: http://myhost/myservice?wsdl
    Service: oracle.j2ee.ws.client.dii.ConfiguredService@c881d39e
    *** Created service: -2121202561 - oracle.jpub.runtime.dbws.DbwsProxy$ServiceProxy@afb58220 ***
    ServiceProxy.get(-2121202561) = oracle.jpub.runtime.dbws.DbwsProxy$ServiceProxy@afb58220
    Collection Call info: port={my.namespace}MySoapServicePort, operation={my.namespace}ping, returnType={my.namespace}PingResponse, params count=1
    setProperty(oracle.webservices.charsetEncoding, UTF-8)
    dbwsproxy.add.map: ns, my.namespace
    Attribute 0: my.namespace: xmlns:ns, my.namespace
    dbwsproxy.lookup.map: ns, my.namespace
    createElement(ns:ping,null,my.namespace)
    dbwsproxy.add.soap.element.namespace: ns, my.namespace
    Attribute 0: my.namespace: xmlns:ns, my.namespace
    dbwsproxy.element.node.child.3: 1, null
    createElement(echoData,null,null)
    dbwsproxy.text.node.child.0: 3, Dev Team üöäß
    request:
    <ns:ping xmlns:ns="my.namespace">
       <pingRequest>
          <echoData>Dev Team üöäß</echoData>
       </pingRequest>
    </ns:ping>
    Jul 8, 2008 6:58:49 PM oracle.j2ee.ws.client.StreamingSender _sendImpl
    FINE: StreamingSender.response:<?xml version = '1.0' encoding = 'UTF-8'?>
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Header/><env:Body><ns0:pingResponse xmlns:ns0="my.namespace"><pingResponse><responseTimeMillis>0</responseTimeMillis><resultCode>0</resultCode><echoData>Dev Team üöäß</echoData></pingResponse></ns0:pingResponse></env:Body></env:Envelope>
    response:
    <ns0:pingResponse xmlns:ns0="my.namespace">
       <pingResponse>
          <responseTimeMillis>0</responseTimeMillis>
          <resultCode>0</resultCode>
          <echoData>Dev Team üöäß</echoData>
       </pingResponse>
    </ns0:pingResponse>
    As you can see, the character encoding is broken in both the request and the response, i.e. the SOAP encoder does not take the UTF-8 encoding into consideration.
    I tracked the problem down to the method oracle.jpub.runtime.dbws.DbwsProxy.dom2SOAP(org.w3c.dom.Node, java.util.Hashtable), and more specifically to the calls of oracle.j2ee.ws.saaj.soap.soap11.SOAPFactory11.
    My question is: is there a way to make the SOAP encoder use the correct character encoding?
    Thanks a lot in advance!
    Greetings,
    Dimitar

    I found a workaround for the problem:
    v_response := XMLType(v_response.getBlobVal(NLS_CHARSET_ID('CHAR_CS')), NLS_CHARSET_ID('AL32UTF8'));
    Ugly, but I'm tired of decompiling and debugging Java classes ;)
    Greetings,
    Dimitar

  • XML parser not detecting character encoding

    Hi,
    I am using JDeveloper 9.0.5 preview, and the same problem is happening in our production AS 9.0.2 release.
    The character encoding of an XML document is not being correctly detected by the Oracle v2 parser, even though the XML declaration correctly contains
    <?xml version="1.0" encoding="ISO-8859-1" ?>
    Instead, it treats the document as UTF-8 encoded, which is fine until a document comes along with an extended character, which then causes a
    java.io.UTFDataFormatException: Invalid UTF8 encoding.
    at oracle.xml.parser.v2.XMLUTF8Reader.checkUTF8Byte(XMLUTF8Reader.java:160)
    at oracle.xml.parser.v2.XMLUTF8Reader.readUTF8Char(XMLUTF8Reader.java:187)
    at oracle.xml.parser.v2.XMLUTF8Reader.fillBuffer(XMLUTF8Reader.java:120)
    at oracle.xml.parser.v2.XMLByteReader.saveBuffer(XMLByteReader.java:448)
    at oracle.xml.parser.v2.XMLReader.fillBuffer(XMLReader.java:2023)
    at oracle.xml.parser.v2.XMLReader.tryRead(XMLReader.java:972)
    at oracle.xml.parser.v2.XMLReader.scanXMLDecl(XMLReader.java:2589)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:485)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:192)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:144)
    As you can see, it is explicitly casting to XMLUTF8Reader to perform the read.
    I can get around this by hard-coding the XML input stream to be processed by a reader:
    XMLSource = new StreamSource(new InputStreamReader(XMLInStream, "ISO-8859-1"));
    However, the manual documents that the character encoding is automatically picked up from the XML file and that casting into a reader is not necessary, so I should be able to write
    XMLSource = new StreamSource(XMLInStream);
    Does anyone else experience this same problem?
    Having to hardcode the encoding makes my software lose flexibility.
    Jarrod Sharp.

    An XML document must actually be created in 'ISO-8859-1' encoding – the bytes on disk, not just the declaration – for it to be parsed as 'ISO-8859-1'.
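    A minimal sketch of producing such a file in Java, so that the declared and actual encodings match (the file name is illustrative):
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class WriteLatin1Xml {
        public static void main(String[] args) throws IOException {
            // Write the bytes in the same encoding the XML declaration names
            try (Writer w = new OutputStreamWriter(new FileOutputStream("doc.xml"), "ISO-8859-1")) {
                w.write("<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n");
                w.write("<msg>café</msg>\n"); // é is stored as the single byte 0xE9, matching the declaration
            }
        }
    }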
