Character Encoding is changing randomly

Hello,
For a short while I have had the following problem:
When I am using Firefox, after a while characters on pages are shown as 'boxes'. Setting View -> Character Encoding back to ISO-8859-1 makes the characters display correctly again.
I do not understand why the character encoding randomly changes to UTF-8 when I have selected ISO-8859-1 and turned automatic detection off.
In the options menu I have also set 'ISO-8859-1' as the default character encoding.
I am hoping someone can tell me why it changes at random, and why the same page can be shown correctly 10 times but has a changed character set the 11th time. And of course, how can I solve this problem?

I do know a website can specify its character encoding, but the strange part of the problem I am seeing is that the same page can be shown correctly 10 times, but on an 11th viewing characters like é and ë are shown as 'blocks'/'question marks'.
Is there an explanation for that behaviour?

Similar Messages

  • How to prevent Terminal's character encoding from changing

    I have a command-line script that runs continuously, and occasionally echoes to STDOUT some binary data that, apparently, changes the way Terminal displays certain characters. Is there any way of preventing this from happening? Say, by locking down Terminal's character encoding?
    ...Rene

    Rene,
    I am not sure if you can prevent this from happening, but you can set things back to normal by running the `reset` command.
    Another possible solution is to redirect STDOUT to a file so that it is not displayed on screen.
    Mihalis.
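
    If the script can be launched from a wrapper, the redirect idea can be automated. A minimal Java sketch, assuming a hypothetical script myscript.sh whose raw output should stay out of the terminal:

    import java.io.File;
    import java.io.IOException;

    public class QuietRunner {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Run the (hypothetical) script, sending its stdout -- including
            // any binary junk that would otherwise reprogram the terminal's
            // character display -- to a log file instead of the screen.
            Process p = new ProcessBuilder("/bin/sh", "myscript.sh")
                    .redirectOutput(new File("script.log"))
                    .redirectErrorStream(true) // fold stderr into the same log
                    .start();
            p.waitFor();
        }
    }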

  • Issue with Google Analytics Character Encoding (Contribute changes the code)

    I am wondering if there are any admin settings to work around this issue. This is an issue with Contribute CS3/CS4.
    Editing a page in Contribute turns encoded characters into unencoded characters, for example turning "%3C" into "<". The encoded version causes no errors.
    Un-encoded characters in the Google Analytics code cause scripting errors on each page you open in Contribute, make loading slower, and make it difficult just to browse the site using Contribute.
    Specifically changing this:
    <script type="text/javascript">
    var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
    document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
    </script>
    to this:
    <script type="text/javascript">
    var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
    document.write(unescape("<script src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'></script>"));
    </script>
    (I believe the culprit is the literal </script> that the un-escaping produces: the browser treats it as the end of the enclosing script block, which is exactly why the original snippet writes it as %3C/script%3E.)

    Set the link encoding option to "Insert Links As Is" in the Administer Website dialog.

  • When I load certain websites the writing is all squashed up. I correct this by changing the character encoding setting. I am using the latest Apple Mac machine. Thanks in advance


    Thanks for that information!
    I'm sure I will be calling AppleCare, but the problem is, they charge for the phone calls, don't they? I don't have money to spend on being on the phone with a support service.
    In other news, it seemed like the only time my MacBook was working was when I had Snow Leopard without the 10.6.8 update that was supposed to prepare it for OS X Lion.
    When I look at the information for my HD it says that I have 10.6.8, but that was the install that claimed to have failed and caused me to restart, resulting in all of the repeated problems.
    Also, because my computer is currently down and I've lost all my files, how would that affect the use of my iPhone? If it doesn't get fixed by the time iOS 5 is released, how would I be able to upgrade?!

  • Change character encoding?

    Edit > Preferences > New Document > Character encoding is set to UTF-8.
    However, when I edit documents with non-standard extensions (I'm working on PHP files with a .ctp extension), the document still seems to save in ISO-8859-1 format. This problem doesn't seem to occur on files with .php/.html extensions.
    Does anyone know of a solution to this problem?
    Thanks,
    Emil

    I'm not sure where you are getting %xx encoded UTF-8... Is it because you have it in a GET method form and that's what you are seeing in the browser's location bar?
    Let's assume you have a form on a page, the page's charset is set to UTF-8, and you want to generate a URL-encoded string (%xx format, although URLEncoder will not encode ASCII chars that way...).
    In the page processing the form, you need to do this:
    request.setCharacterEncoding("UTF-8"); // makes bytes read as UTF-8 strings (assumes the form page was properly set to the UTF-8 charset)
    String fieldValue = request.getParameter("fieldName"); // get value
    // the value is now a Unicode String in Java, generated from reading the bytes submitted from the form as UTF-8 encoded text...
    String utf8EncString = URLEncoder.encode(fieldValue, "UTF-8");
    // now utf8EncString is a URL encoded (%xx) string of UTF-8 values
    String euckrEncString = URLEncoder.encode(fieldValue, "EUC-KR");
    // now euckrEncString is a URL encoded (%xx) string of EUC-KR values
    What is probably screwing things up for you mostly is this:
    euckrValue = new String(utf8Value.getBytes(), "EUC-KR");
    What this does is take the bytes of the string utf8Value (which is not really UTF-8... see below) in the local encoding (possibly Cp1252 on Windows or ISO-8859-1 on Linux, or EUC-KR if it's Korean Windows), and then read them as if they were EUC-KR... which they aren't.
    The key here is that Strings in Java are not of any encoding. They are pure Unicode values. Encodings only matter when converting to or from bytes. The strings stored in a file or sent over the net have to be converted to bytes, since that's what is stored/sent: just bytes. The encoding defines how the characters are encoded into 1 or more bytes, and thus reconstructed.
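
    To make the byte/string distinction concrete, here is a small self-contained sketch (the class name and sample text are mine, not from the thread) showing the correct conversion next to the broken new String(s.getBytes(), ...) round-trip:

    import java.net.URLEncoder;
    import java.nio.charset.Charset;

    public class EncodingDemo {
        public static void main(String[] args) throws Exception {
            // A Java String holds pure Unicode; no encoding is attached to it.
            String value = "caf\u00e9"; // "café"

            // Correct: choose the charset at the moment characters become bytes.
            System.out.println(URLEncoder.encode(value, "UTF-8")); // caf%C3%A9

            // Broken: getBytes() with no argument uses the platform default
            // charset, and those bytes are then re-read as if they were EUC-KR.
            // Unless the default happens to be EUC-KR, the text is garbled.
            String mangled = new String(value.getBytes(), "EUC-KR");
            System.out.println(mangled);
            System.out.println("platform default: " + Charset.defaultCharset());
        }
    }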

  • How to change character encoding in firefox 4 without enabling menu bar

    Occasionally I need to try various character encodings so that characters display correctly. However, with the new Firefox 4 interface, the only place I can find to do that is to re-enable the old-style menu bar and then select View -> Character Encoding.

    You can find Character Encoding in the Firefox > Web Developer submenu.

  • Why differing Character Encoding and how to fix it?

    I have PRS-950 and PRS-350 readers, both since 2011.  
    In the last year, I've been getting books with Character Encoding that is not easy to read.  In playing around with my browsers and View -> Encoding menus, I have figured out that it has something to do with the character encoding within the epub files.
    I buy books from several ebook stores and I borrow from the library.
    The problem may affect the entire book, but it is usually restricted to a few chapters, with rare occasions where the encoding changes within a chapter. Usually it is a whole chapter, not part of one, and it can be seen in chapters that are not consecutive.
    It occurs whether the book is downloaded directly to my 950 reader or loaded onto either reader from my computers, which all run Mac OS X, versions from 10.4 to Mountain Lion. Since it happens when the book is downloaded directly, I figure the operating system of my computer is not relevant.
    There are several publishers involved, though Baen (no DRM ebooks) has not so far been one of them.
    If I look at the books with viewers on the computer, the encoding is the same.  I've read them in Calibre, in the Sony Reader App, and in Adobe Digital Editions 2.0.  It's always the same.
    I believe the encoding is inherent to the files.  I would like to fix this if I can to make the books I've purchased, many of them in paper and electronically, more enjoyable to read on my readers.
    Example: I’ve is printed instead of I've.
    ’ for apostrophe
    “ the opening of a quotation,
    â€?  for closing the quotation,
    and I think — is for a hyphen.
    When a sentence had “’m  for " 'm at the beginning of a speech (when the character was slurring his words) it took me a while to figure out how it was supposed to read.
    “’Sides, â€™tis only for a moon.  That ain’t long.â€?
    was in one recent book.
    Translation: " 'Sides, 'tis only for a moon. That ain't long."
    See what I mean? 
    Any ideas?

    Hi
    I wonder if it is possible to download a free ebook with this issue, in order to run some "tests".
    Perhaps it is possible, on free ebooks (without DRM), to add fonts by using software like Sigil.
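
    For what it's worth, the garbled strings quoted above are the classic signature of UTF-8 bytes being decoded as Windows-1252: the right single quote turns into ’, the left double quote into “, and the em-dash into —. A minimal Java sketch (the names are mine) that reproduces the damage and, when every byte survived, reverses it:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class MojibakeDemo {
        public static void main(String[] args) {
            Charset cp1252 = Charset.forName("windows-1252");
            String original = "I\u2019ve"; // I've with a curly apostrophe, U+2019

            // The damage: UTF-8 bytes decoded with the wrong charset.
            String mojibake = new String(original.getBytes(StandardCharsets.UTF_8), cp1252);
            System.out.println(mojibake); // prints I’ve

            // The repair, which only works when every byte survived; some bytes
            // (0x9D, for instance) have no windows-1252 mapping and are lost,
            // which is why some of the quotes above ended in "?".
            String repaired = new String(mojibake.getBytes(cp1252), StandardCharsets.UTF_8);
            System.out.println(repaired); // prints I've with its curly apostrophe back
        }
    }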

  • How can I tell what character encoding is sent from the browser?

    Hi,
    I am developing a servlet which is supposed to be used to send and receive messages in multiple character sets. However, I read in previous postings that each WebLogic Server can only support one input character encoding. Is that true? And do you have any suggestions on how I can do what I want? For example, I have an HTML form for people to post any comments (they may post in any character set, like Shift_JIS, Big5, GB, etc.). I need to know what character encoding they are using before I can read it correctly in the servlet and save it in the database.

    From what I understand (I haven't used it yet), 6.1 supports the 2.3 servlet spec. That should have a method to set the encoding. Otherwise, I don't think you can support multiple encodings in one instance of WebLogic.
    From what I know, browsers don't give any indication at all about what encoding they're using. I've read some chatter about the HTTP spec being changed so it's always UTF-8, but that's a Some Day(TM) kind of thing, so you're stuck with all the stuff out there now, which doesn't do everything in UTF-8.
    Sorry for the bad news, but if it makes you feel any better, I've felt your pain. Oh, and trying to process multipart/form-data (file upload) forms is even worse; from what I've seen, the API people talk about on these newsgroups assumes everything is ISO-8859-1.
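
    The servlet 2.3 call referred to above is ServletRequest.setCharacterEncoding. A minimal sketch, assuming the form page itself is served as UTF-8 so that the browser submits UTF-8 bytes (browsers generally submit form data in the charset of the page containing the form):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CommentServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Must run before the first getParameter() call; it tells the
            // container which charset to use when decoding the request body.
            req.setCharacterEncoding("UTF-8");
            String comment = req.getParameter("comment");

            // Declare the response encoding too, or the reply falls back to
            // the container default (usually ISO-8859-1).
            resp.setContentType("text/html; charset=UTF-8");
            resp.getWriter().println("Received: " + comment);
        }
    }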

  • Where is the Character Encoding option in 29.0.1?

    After the new layout change, the developer menu doesn't have the Character Encoding options anymore. Where the hell is it? It's driving me mad...

    You can also access the Character Encoding via the View menu (Alt+V C).
    See also:
    *Options > Content > Fonts & Colors > Advanced > Character Encoding for Legacy Content

  • How to set the system default file character encoding to UTF-8?

    Hi all. This is driving me nuts, on both my Windows box and Snow Leopard; I figure there is a much better chance of finding the answer for OS X.
    My language and locale are set to Australian English. $LANG=en_AU.UTF-8
    However, as I believe is expected, OS X (and Windows for that matter) will create files by default with character encoding of Cp1252 (Latin-1). That is, the FILE encoding in the file metadata - the Byte Order Mark I believe. The file itself, not the characters written to it.
    This, in a word, bites. I don't want to be restricted to only ASCII by default, and it is causing me problems with certain software (a Firefox plugin) that creates text files, passing in UTF-8 encoded content, which is then mangled because the file encoding itself is still Cp1252. (I know, I've tested this by changing the file encoding manually and having it overwritten again by the plugin: works correctly.)
    As a simple example, just `touch somefile` from terminal creates a file in Cp1252 -- I'm obtaining that info by opening in jEdit by the way (anyone know of something better?).
    In other locales that are not English-based, I believe the default file encoding is UTF-8. But surely this can be controlled independently? There must be a system configuration value somewhere that specifies file encoding default. Can someone please tell me what it is?
    Thanks!

    However, as I believe is expected, OS X (and Windows for that matter) will create files by default with character encoding of Cp1252 (Latin-1). That is, the FILE encoding in the file metadata - the Byte Order Mark I believe. The file itself, not the characters written to it.
    Apps like TextEdit and Mail have settings that let you determine the encoding of text produced. The default would normally depend on the character content of the file, ranging from ASCII for basic English to Windows Latin-1 (Win 1252) or ISO Latin-1 (ISO 8859-1) to UTF-8 for other content.
    Win 1252 is not ASCII, but it has twice the number of characters of the latter.
    The Byte Order Mark is something totally different: it's a particular character used to signal certain encodings.
    http://en.wikipedia.org/wiki/Byte_order_mark
    As a simple example, just `touch somefile` from terminal creates a file in Cp1252 -- I'm obtaining that info by opening in jEdit by the way (anyone know of something better?).
    For what Terminal does and how to change it, it might be best to post in the Unix forum:
    http://discussions.apple.com/forum.jspa?forumID=735
    For problems with a Firefox plugin, it might be good to ask on their forums as well.
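
    On the underlying point: apart from an optional BOM, a plain text file carries no encoding metadata at all; the encoding is simply whatever the writing program used when it turned characters into bytes. In Java, for example, the charset is chosen per writer, independent of the platform default. A minimal sketch (the file name is mine):

    import java.io.IOException;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class Utf8WriteDemo {
        public static void main(String[] args) throws IOException {
            // The charset is picked here, at write time; the file stores only
            // the resulting bytes, with no encoding label attached to them.
            try (Writer w = Files.newBufferedWriter(
                    Paths.get("somefile.txt"), StandardCharsets.UTF_8)) {
                w.write("G'day \u2014 caf\u00e9\n"); // em-dash and é become UTF-8 bytes
            }
        }
    }

    Editors like jEdit therefore have to guess an encoding when they open a file, which is why an all-ASCII file (such as one created by `touch`) can be reported as almost anything.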

  • Character Encoding question

    I'm helping another group out on this, so I'm pretty new to this stuff; please go easy on me if I ask anything obvious.
    We have a J2EE web application that is sitting on a Red Hat Linux box and is being served up by OAS 10.1.3. The application reads an xml file which contains the actual content of the page and then pulls in the navigation and metadata from other sources.
    Everything works as it should, but there is one issue that has been ongoing for a while and we would like to close it off. In my content source XML file, I have encoded special characters such as é as & amp;#233; - when I view the web page all is well (I see the literal value of é), but when I do a view source, I see & #233;
    If I put & #233; into the source XML file, the page still displays é, but when I do a view source on the web page, I see the literal é value in the source, which is not what we want. What is decoding the character reference? While inserting & amp;#233; into the XML source file works, we do not want to have to encode everything that way; we would prefer to have & #233;. Is it a setting of the OS, the Application Server or the Application itself?
    When I previewed this post, I noticed that by typing & amp;#233; as one solid word, it gets decoded and is seen as é, so I had to put a space between the & and the amp to properly explain myself.
    Any help would be appreciated!
    Thanks,
    /HH

    There are a lot of notes on MetaLink about character encoding. I wrote Note 337945.1 a while ago, which explains this in more detail. I will quote some parts relevant to your situation:
    For the core components, there are three places to set NLS_LANG:
    - in the system environment (this is obvious)
    - in the file opmn.xml
    - in the file apachectl
    A. Changing opmn.xml
    - go to $ORACLE_HOME/opmn/conf and edit the file opmn.xml
    - Search for the OC4J container your application runs in.
    - Within the <process-type.... > </process-type> section, add an entry similar to:
    (1) OracleAS 10g (10.1.2, 10.1.3):
    <environment>
         <variable id="NLS_LANG" value="ENGLISH_UNITED KINGDOM.AL32UTF8"/>
    </environment>
    B. Changing apachectl (Unix only)
    - Go to $ORACLE_HOME/Apache/Apache/bin
    - Open the file 'apachectl'
    - search for NLS_LANG
    e.g.
    NLS_LANG=${NLS_LANG=""}; export NLS_LANG
    Verify if the variable is getting the correct value; this may depend on your environment and on the version of OracleAS. If necessary, change this line. In this example, the value from the environment is taken automatically.
    There is more on this topic in the mod_plsql area but since you do not mention pulling data from the database, this may be less relevant. Otherwise you need to ensure the same NLS_LANG and character set is used in the database to avoid conversions.

  • Character encoding again

    Hi, I haven't got any answer so I'll try to ask again...
    I have created a page from Data Controls.
    I have created a parameter form, and a table. The detail is shown at the bottom of the page (the current row is shown through #bindings. . .
    Everything works fine: when I fill something into the parameter form, the table is filtered by that criteria, and when the current row is changed, the detail also changes. But when I enter a Czech character into the parameter form, it works pretty badly.
    The table is correctly filtered, but when I take any other action after filtering, the table shows no rows.
    I found what causes this problem. It is the Property in the bindings that holds the value for this parameter. When I first call bindings.findXXX.execute, the Property's value is "č", for example. That is correct, and the table shows filtered rows. After I perform another action (I think it does not depend on what the action is; changing the current row, for example), the value in that Property has changed to "?" instead of "č", and because of this the filter is applied again and the table shows no rows. I have checked all the encodings, and the character encoding is set to UTF-8. Is this the problem? Am I missing some settings?
    1) menu tools-preferences-Environment
    2) project properties- compiler-character encoding
    3) in jspx
    <?xml version='1.0' encoding='UTF-8'?>
    <jsp:directive.page contentType="text/html;charset=UTF-8"
    pageEncoding="UTF-8"/>
    <afh:head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
    </afh:head>
    Are there other places? I don't know; I set these settings long ago...
    Please, if someone knows, give me a hint. Thanks for the help.
    Jdeveloper 10.1.3.0.4(SU5)

    Check the regional language settings on the machine where your application server is running. I faced this problem but was able to resolve it by modifying:
    NLS_LANG = AMERICAN_AMERICA.WE8ISO8859P1
    NLS_LANG is under HKEY_LOCAL_MACHINE ==> ORACLE in the registry.
    WE8ISO8859P1 is the standard encoding for my application, developed in a local Indian language, and it works fine for me.
    This can help; check it out.
    Amit
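
    The registry fix above is Oracle-specific. Another common cause of "č" turning into "?" on a later request is the request character encoding, so here is a hedged sketch, not from the thread, of a servlet filter that forces UTF-8 decoding before any parameter is read (map it to /* in web.xml):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class Utf8Filter implements Filter {
        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            // Only effective if it runs before the first getParameter() call.
            if (req.getCharacterEncoding() == null) {
                req.setCharacterEncoding("UTF-8");
            }
            chain.doFilter(req, resp);
        }

        public void destroy() { }
    }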

  • c:import character encoding problem (utf-8)

    Aloha @ all,
    I am currently importing a file using the <c:import> functionality (<c:import url="module/item.jsp" charEncoding="UTF-8">), but it seems that the returned data is not encoded with UTF-8 and hence is not displayed correctly. The overall file header is:
    HTTP/1.1 200 OK
    Server: Apache-Coyote/1.1
    Set-Cookie: JSESSIONID=E67F9DAF44C7F96C0725652BEA1713D8;
    Content-Type: text/html;charset=UTF-8
    Content-Length: 6861
    Date: Thu, 05 Jul 2007 04:18:39 GMT
    Connection: close
    I've set the file-encoding on all pages to :
    <%@ page contentType="text/html;charset=UTF-8" %>
    <%@ page pageEncoding="UTF-8"%>
    but the error remains... is this a known bug and is there a workaround?

    Partially, yes. It turns out that I created the documents in Eclipse with a different character encoding, so the entire document was actually not UTF-8 encoded...
    I changed each document's encoding in Eclipse to UTF-8 and got it working just fine...

  • Default Character Encoding stuck on UTF-8 - Firefox 7

    I cannot change the character encoding - it is stuck on Unicode UTF-8! When a web page opens, I get little boxes with "FF FD" instead of quote marks. When I change the character encoding on that page using View->Character Encoding and click Western (ISO-8859-1), the page displays correctly. Every page opens using Unicode UTF-8 as the default.
    View->Character Encoding -- shows Unicode UTF-8 as the default.
    View->Character Encoding->Auto-detect -- shows OFF
    Tools->Options->Content->Advanced->Fonts->Default Character Encoding -- shows Western (ISO-8859-1), and the "Allow pages to choose their own fonts..." box IS CHECKED
    THE PAGES ARE NOT UTF-8!!!! The "View Page Source" IS NOT Unicode UTF-8! -- It shows <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">.
    The "View Page Info" shows MetaTag - Content-Type: text/html; charset=iso-8859-1
    Why can I not change the Default Character Encoding?
    I would also like to point out that the Unicode UTF-8 seems to be broken because it is indicating that the QUOTE CHARACTER is an UNPRINTABLE character "FF FD"
    ----- EDIT -----
    The UTF-8 is not broken. The problem, as pointed out in http://en.wikipedia.org/wiki/Replacement_character#Replacement_character, is that my Firefox, being STUCK processing UTF-8, cannot read the clearly marked iso-8859-1 data. So the smart quotes (“ and ”) are being shown as the replacement (unprintable) character.
    So the real question is why my Firefox is stuck on Unicode UTF-8.

    The real problem is that the font that is used doesn't have those characters.
    Do you see the special quotes “ and ” on this forum page?
    Does it help if you disable the website fonts and set another font as the default font?
    *Tools > Options > Content : Fonts & Colors > Advanced
    *http://en.wikipedia.org/wiki/Punctuation
    *http://en.wikibooks.org/wiki/Unicode/Character_reference/2000-2FFF

  • (Nokia N8) Character encoding - reduced support (n...

    I'm from Poland, and in my language I use special letters like ó, ż, ź, ą, ę. When I want to write a message and use one of the above letters, my message holds 90 fewer characters. On older Nokia phones I changed the text message settings (character encoding: full support => reduced support). When I change the same setting on the Nokia N8, it doesn't work! I always see the shorter message limit. Strangest of all, when I choose "conversations" I don't have any problems and everything works fine, but when I choose "new message" I can't use reduced support (the option is on, but not working).
    My software version: 011.012
    Software version date: 2010-09-18
    Release: PR1.0
    Custom Version: 011.012.00.01
    Custom version date: 2010-09-18
    Language set: 011.012.03.01
    Product code: 0599823

    Talking about product code changes is prohibited in this forum. It is unofficial and is grounds for Nokia to refuse to repair or service your phone in any way.
    If you find my post helpful please click the green star on the left under the avatar. Thanks.
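
    Background on the missing characters: an SMS holds 160 characters when every character is in the GSM 7-bit default alphabet, but a single character outside it (such as ó, ż, ź, ą, ę) switches the whole message to UCS-2, which holds only 70, i.e. 90 fewer; "reduced support" substitutes plain letters so the message stays in 7-bit mode. A rough Java sketch of the check (the alphabet table here is deliberately partial, and the class is mine, not Nokia's):

    public class SmsLimitSketch {
        // Partial GSM 7-bit default alphabet; real implementations use the
        // full GSM 03.38 table, including its escape-sequence extensions.
        private static final String GSM_BASIC =
            "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
            + "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà";

        static int perMessageLimit(String text) {
            for (int i = 0; i < text.length(); i++) {
                if (GSM_BASIC.indexOf(text.charAt(i)) < 0) {
                    return 70; // any character outside the table forces UCS-2
                }
            }
            return 160;        // plain GSM 7-bit
        }

        public static void main(String[] args) {
            System.out.println(perMessageLimit("Hello"));           // 160
            System.out.println(perMessageLimit("Cze\u015b\u0107")); // "Cześć" -> 70
        }
    }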
