Question about character encoding

Hey everyone,
I've been reading about character encodings and how Java IO uses them; however, there is something I still don't understand.
I am trying to read an HTML file that is encoded in UTF-8 and contains some Arabic characters.
I used this code to save it to a file:
URL u = new URL("http://www.fatafeat.com");    // some Arabic website
URLConnection connection = u.openConnection();
BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream("datat.txt"));
int c;
while ((c = reader.read())!=-1)
writer.write(c);
However, the output in the file is always "??????".
I also tried another version of the code:
URL u = new URL("http://www.fatafeat.com");     //some Arabic website
URLConnection connection = u.openConnection();
InputStream reader = connection.getInputStream();
FileOutputStream writer = new FileOutputStream("datat.txt");
int c;
while ((c = reader.read()) != -1)
writer.write(c);
This time I get something like " ÇáæÍíÏÉ áÝä " in the file.
I tried opening the file in a browser using UTF-8 encoding, but I still get the same display. What am I missing here? Thanks

Upon further investigation, my initial hunch appears to be correct. The page in question is not correctly encoded as UTF-8, so the Java decoder fails. By default, the Java decoder fails silently and replaces malformed input with the character "\uFFFD". If you want more control over the decoding process, you need to read the data as bytes and use the CharsetDecoder class to convert them to characters. Here is a small example to illustrate:
import java.io.ByteArrayOutputStream;
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
public class FatafeatToy3 {
    private static final Charset UTF8 = Charset.forName("UTF-8");
    private static final Charset DEFAULT_DECODER_CHARSET = UTF8;
    private final CharsetDecoder decoder;

    public FatafeatToy3(final Charset cs) {
        this.decoder = cs.newDecoder();
        this.decoder.onMalformedInput(CodingErrorAction.REPORT);
        this.decoder.onUnmappableCharacter(CodingErrorAction.REPORT);
    }

    public FatafeatToy3() {
        this(DEFAULT_DECODER_CHARSET);
    }

    public ByteBuffer pageSlurp(URL url) throws IOException {
        ByteArrayOutputStream pageBytes = new ByteArrayOutputStream();
        InputStream is = url.openStream();
        int ch;
        while ((ch = is.read()) != -1) {
            pageBytes.write(ch);
        }
        is.close();
        return ByteBuffer.wrap(pageBytes.toByteArray());
    }

    public CoderResult decodeSome(ByteBuffer in, CharBuffer out) {
        decoder.reset();
        CoderResult result = decoder.decode(in, out, true);
        if (result.isMalformed()) {
            System.err.printf("Malformed input detected at pos 0x%x%n", in.position());
        } else if (result.isUnmappable()) {
            System.err.printf("Unmappable input detected at pos 0x%x%n", in.position());
        } else if (result.isUnderflow()) {
            result = decoder.flush(out);
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        FatafeatToy3 ft = new FatafeatToy3();
        ByteBuffer in = ft.pageSlurp(new URL("http://www.fatafeat.com"));
        System.out.printf("Page slurped contains %d bytes%n", in.capacity());
        CharBuffer out = CharBuffer.allocate(1); // one character at a time
        CharArrayWriter pageChars = new CharArrayWriter();
        CoderResult result = CoderResult.UNDERFLOW;
        while ((!result.isError()) && in.remaining() > 0) {
            result = ft.decodeSome(in, out);
            if (result.equals(CoderResult.OVERFLOW)) {
                out.flip();
                pageChars.append(out);
                out.clear();
            }
        }
    }
}
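For completeness, the same strict REPORT behaviour can also be attached to the streaming approach from the original question. The following is only a sketch, under the assumption that you want the copy to fail loudly on bad input instead of silently writing replacement characters; the URL and file name are simply the ones from the question, and StandardCharsets requires Java 7 or later.
import java.io.BufferedReader;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;
import java.net.URL;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;
public class StrictCopy {
    public static void main(String[] args) throws IOException {
        // Decoder that throws MalformedInputException instead of substituting U+FFFD
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        URL u = new URL("http://www.fatafeat.com");
        try (Reader reader = new BufferedReader(new InputStreamReader(u.openStream(), decoder));
             Writer writer = new OutputStreamWriter(new FileOutputStream("datat.txt"), StandardCharsets.UTF_8)) {
            int c;
            while ((c = reader.read()) != -1) {
                writer.write(c); // the writer also gets an explicit charset, so Arabic text is not turned into '?'
            }
        }
    }
}
Note that the OutputStreamWriter in the first snippet of the question was created without a charset, so it used the platform default encoding; that alone turns Arabic characters (and \uFFFD) into question marks, which is why the writer above is given an explicit charset as well.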

Similar Messages

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the upper 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes as a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 4-byte sequences in modern UTF-8 (the original design allowed up to 6). Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character and, assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character like ß, which their text editor inserts using the codepage for their region, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
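    A minimal Java sketch of that trip-up (assuming Java 7+ for StandardCharsets and a JRE where windows-1252 is available, which is essentially all of them): the single byte an editor writes for ß under windows-1252 is not valid UTF-8 on its own, while the real UTF-8 form of ß is two bytes.
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;
    public class CodepageTripUp {
        public static void main(String[] args) {
            // What a windows-1252 editor writes for ß: a single byte, 0xDF
            byte[] savedByEditor = "ß".getBytes(Charset.forName("windows-1252"));
            // Read back as UTF-8: 0xDF is only a lead byte, so decoding yields the replacement character U+FFFD
            System.out.println(new String(savedByEditor, StandardCharsets.UTF_8));
            // The correct UTF-8 form of ß is two bytes: 0xC3 0x9F
            System.out.println("ß".getBytes(StandardCharsets.UTF_8).length); // prints 2
        }
    }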
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
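    As a minimal sketch of Point 3 in Java (the file name is just an example; StandardCharsets needs Java 7+), the charset is passed explicitly on both the write and the read instead of relying on the platform default:
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    public class ExplicitEncodingIo {
        public static void main(String[] args) throws IOException {
            // Write with an explicit charset rather than the platform default
            try (Writer out = new OutputStreamWriter(new FileOutputStream("notes.txt"), StandardCharsets.UTF_8)) {
                out.write("Grüße from UTF-8\n");
            }
            // Read it back, again stating the charset explicitly
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("notes.txt"), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }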
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble, i.e. the byte order mark, to the file.)
    OK, you're reading and writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what those encoders in the Java and .NET runtimes are designed to produce. You read in and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts. Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the UTF-8 character set anyway.
    >
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the upper 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years then a column with a size of 8 bytes is significantly different than one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes as a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 4-byte sequences in modern UTF-8 (the original design allowed up to 6). Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character and, assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode, all of Unicode, are based on ASCII. The representational format of UTF-8 is required to implement Unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character like ß, which their text editor inserts using the codepage for their region, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble, i.e. the byte order mark, to the file.)
    OK, you're reading and writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what those encoders in the Java and .NET runtimes are designed to produce. You read in and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in Java with escaped Unicode characters which will fail to compile.
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business and create solutions appropriate to that. Thus there is absolutely no point for someone who is creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    Another example is that in high-volume systems moving/storing bytes is relevant. As such, one must carefully consider for each text element whether it is customer-consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems incremental savings impact operating costs and the marketing advantage that comes with speed.

  • A question about character addition.

    Hi,
    i have a question about character. seet the following code.
    DATA: lv_entryid TYPE char3.
    lv_entryid = '001'.
    WRITE : / lv_entryid.
    lv_entryid = lv_entryid + 1.
    WRITE : / lv_entryid.
    the  answer is :
    001
    2
    but the result i expect is
    001
    002.
    how to process, Could you please help me?
    Thanks in advance.

    DATA: lv_entryid TYPE char3.
    lv_entryid = '001'.
    WRITE : / lv_entryid.
    lv_entryid = lv_entryid + 1.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
         EXPORTING
              input  = lv_entryid
         IMPORTING
              output = lv_entryid.
    WRITE : / lv_entryid.
    Try this code and see
    Regards
    Gopi

  • A question about Media Encoder CS4

    Hi all,
    With H.264 format and ipod video large preset selected, what does the profile and level under the basic video settings catagory mean?

    Thanks....
    Man, the link's explanation is more complex than the question! 

  • Question about database encoding

    What is the recommended way to store Japanese in a MySQL or Oracle database?
    Should I use native2ascii and store the ASCII result in the database,
    the same way I did for my properties file/resource bundle?
    Any comments appreciated.

    So, let me try and guess what that is all about.
    You paste Japanese text into phpmyadmin (whatever that is). That updates the database somehow? And then there is something that gets data from the database into the front end (whatever that is).
    There are several opportunities for messing up here. Pasting text into phpmyadmin -- that could be a problem, but its name sounds like it's something to do with PHP, not Java. And you have to paste, not type? Getting data from the database to the front end -- there could be a faulty conversion from chars to bytes in there. Especially if the front end is a web client like a browser.
    Anyway, don't try to debug this mess all at once. Debug each of the parts separately. The first tool you are going to need is something that can reliably display Japanese characters that are stored in MySQL. If you don't have that then you won't be able to test anything.

  • Character Encoding question

    I'm helping another group out on this so I'm pretty new to this stuff so please go easy on me if I ask anything that is obvious.
    We have a J2EE web application that is sitting on a Red Hat Linux box and is being served up by OAS 10.1.3. The application reads an xml file which contains the actual content of the page and then pulls in the navigation and metadata from other sources.
    Everything works as it should, but there is one issue that has been ongoing for a while and we would like to close it off. In my content source XML file, I have encoded special characters such as é as & amp;#233; - when I view the web page all is well (I see the literal value of é), but when I do a view source, I see & #233;
    If I put & #233; into the source XML file, the page still displays é, but when I do a view source on the web page, I see the literal é value in the source, which is not what we want. What is decoding the character reference? While inserting & amp;#233; into the XML source file works, we do not want to have to encode everything that way; we would prefer to have & #233;. Is it a setting of the OS, the application server or the application itself?
    When I previewed this post, I noticed that typing & amp;#233; as one solid word gets decoded and is seen as é, so I had to put a space between the & and the amp to properly explain myself.
    Any help would be appreciated!
    Thanks,
    /HH

    There are a lot of notes on MetaLink about character encoding. I wrote Note 337945.1 a while ago, which explains this in more detail. I will quote the parts relevant to your situation:
    For the core components, there are three places to set NLS_LANG:
    - in the system environment (this is obvious)
    - in the file opmn.xml
    - in the file apachectl
    A. Changing opmn.xml
    - go to $ORACLE_HOME/opmn/conf and edit the file opmn.xml
    - Search for the OC4J container your application runs in.
    - Within the <process-type.... > </process-type> section, add an entry similar to:
    (1) OracleAS 10g (10.1.2, 10.1.3):
    <environment>
         <variable id="NLS_LANG" value="ENGLISH_UNITED KINGDOM.AL32UTF8"/>
    </environment>
    B. Changing apachectl (Unix only)
    - Go to $ORACLE_HOME/Apache/Apache/bin
    - Open the file 'apachectl'
    - search for NLS_LANG
    e.g.
    NLS_LANG=${NLS_LANG=""}; export NLS_LANG
    Verify if the variable is getting the correct value; this may depend on your environment and on the version of OracleAS. If necessary, change this line. In this example, the value from the environment is taken automatically.
    There is more on this topic in the mod_plsql area but since you do not mention pulling data from the database, this may be less relevant. Otherwise you need to ensure the same NLS_LANG and character set is used in the database to avoid conversions.

  • Character encoding in JSP

    Hi all.
    My problem is about character encoding in JSP.
    My project is based on the Struts framework and a MySQL database; as a servlet container I have Tomcat, of course.
    I have set up the MySQL DB correctly. When I insert data by hand, using INSERT INTO and so on, it works with Turkish characters.
    After that, I checked that my JSP page correctly loads the data from the DB and displays it in the browser. All the special Turkish characters appear fine.
    The problem starts with posting!
    I want some data from the user, and I have a simple WYSIWYG JavaScript editor. The editor processes the text correctly, but after posting the data and saving it into the DB, it somehow gets corrupted.
    (I have also tried it with a simple textarea, and it does not work either.)
    Simply put, my problem is: the data somehow gets corrupted while it is being posted.
    Thanks.

    on the form processing page, you probably need to call request.setCharacterEncoding("UTF-8") (or whatever encoding you're using) before reading any values.
    Reply #14 of this post has some test page with Chinese which should work as is...
    http://forum.java.sun.com/thread.jspa?forumID=513&threadID=546863
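    As a minimal sketch of that advice (the class name is just an example; it assumes the javax.servlet Filter API and must be registered in web.xml so it runs before Struts reads any request parameter):
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    public class Utf8EncodingFilter implements Filter {
        public void init(FilterConfig config) {
        }
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            // Must run before the first getParameter() call, otherwise it has no effect
            request.setCharacterEncoding("UTF-8");
            chain.doFilter(request, response);
        }
        public void destroy() {
        }
    }
    The pages and the database connection still need to agree on the encoding as well (page directives, and for MySQL Connector/J the characterEncoding connection property), but setting the request encoding before the first read is usually the missing piece.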

  • Locale and character encoding. What to do about these dreadful ÅÄÖ??

    It's time for me to get it into my head how this works. Please, help me understand before I go nuts.
    I'm from Sweden and we use a few of these weird characters like ÅÄÖ.
    If I create a file called "övrigt.txt" in windows, then the file will turn up as "?vrigt.txt" on my Linux pc (At least in the console, sometimes it looks ok in other apps in X). The same is true if I create the file in Linux and copy it to Windows, it will look just as weird on the other side.
    As I (probably) can't change the way Windows works, my question is what I have to do to have these two systems play nicely with each other.
    This is the output from locale:
    LANG=en_US.utf8
    LC_CTYPE="en_US.utf8"
    LC_NUMERIC="en_US.utf8"
    LC_TIME="en_US.utf8"
    LC_COLLATE=C
    LC_MONETARY="en_US.utf8"
    LC_MESSAGES="en_US.utf8"
    LC_PAPER="en_US.utf8"
    LC_NAME="en_US.utf8"
    LC_ADDRESS="en_US.utf8"
    LC_TELEPHONE="en_US.utf8"
    LC_MEASUREMENT="en_US.utf8"
    LC_IDENTIFICATION="en_US.utf8"
    LC_ALL=
    Is there anything here I should change? I have tried using ISO-8859-1 with no luck. Mind you, I want to have the system-wide language set to English. The only thing I want to achieve is that "Ö" on Windows should turn up as "Ö" in Linux as well, and vice versa.
    Please save my hair from being torn off, I'm going bald here...

    Hey, thanks for all the answers!
    I share my files in a number of ways, but mainly through a web application called Ajaxplorer (very nice btw...). The thing is that as soon as a Windows user uploads anything with special characters in the file name, my programs (xbmc, the console, etc.) refuse to read them correctly. Other ways of sharing are file copying with USB sticks, ssh, etc. It's really not the way of sharing that is the problem, I think, but rather the special characters being used sometimes.
    I could probably convert the filenames with suggested applications but then I'll set the windows users in trouble when they want to download them again, won't I?
    I realize that it's cp1252 that is the bad guy in this drama. Is there no way to set/use cp1252 as a character encoding in Linux? It's probably a bad idea as utf8 seems like the future way to go, but the fact that these two OS's can't communicate too well in this area is pretty useless if you ask me.
    To wrap this up I'll answer some questions...
    @EVRAMP: I'm actually using pcmanfm, but that is only for me and I'm not dealing very often with vfat partitions to be honest.
    @pkervien: Well, I think I mentioned my forms of sharing above. (Nice to see a few Arch Swedes here!)
    @quarkup: locale.gen is edited and both sv_SE and en_US have UTF-8 and ISO-8859 enabled and generated.
    ...and to clarify things even further: it doesn't matter if I get or provide a file via a USB stick, Samba, FTP or on paper. All I want is for "Ö" to always be "Ö", everywhere.
    I can't believe how hard this is to get around. Linus is Finnish, for crying out loud. I thought he'd have sorted this out as the first thing he did. Maybe he doesn't deal with Windows or its users at all.

  • Cannot save a file with Windows Latin 1 encoding or Mac OS Roman encoding. Has anything changed about character encodings?

    Hi everybody,
    I am in France. I have just installed Lion. I regularly have to send PHP files to a website on a Linux Gentoo server under the Apache web server. Up until now I used Smultron (a text and code editor) and saved the files in Mac OS Roman character encoding. This does not work anymore and the French accented characters are not displayed correctly on the web site. The website is ISO-8859-1.
    I have tried, within Smultron but also within TextEdit, to copy and paste a correctly accented version of one of the PHP files and then save it in another encoding, but neither app lets me use Mac OS Roman or Windows Latin or even Latin 1; I can only save using UTF-8 and I get the following message:
    " This document can no longer be saved using its Western (Iso Latin 1) encoding"
    If I type a small text with accents and save it as a PHP file in ISO Latin 1, it works, but not with the big files with PHP code in them...
    Can anybody help? I cannot send links to the web pages because they are password protected.
    Thank You very much

    I find part of your post rather confusing.  MacRoman and ISO-8859-1 are totally different and one would not expect the former to display correctly if viewed as the latter.
    If TextEdit will only save as UTF-8, it seems like your file must have some character in it, perhaps difficult to see, which is not contained in Latin 1.  If you could send me a copy of a file where you have this problem, I could take a look (tom at bluesky dot org).

  • Setting Character encoding programmaticaly?

    Hi,
    I am using the Sun J2ME Wireless Toolkit 2.1, and I have a problem with character encoding. I am receiving text from a .NET web service, and after some processing in the client, I send the string back.
    The problem is, the string I am sending back includes Turkish characters. These are sent as question marks instead of the proper characters.
    I have failed to find a method that changes the character encoding used while making a web service call.
    Actually, I could not see any way to change the encoding overall. For the emulator, a property file can be used, but what about the devices I'll be deploying the app to? It'd be really great if someone could point me in the right direction.
    Best Regards

    Hi,
    My situation is as follows. I have .NET web services on the server side, and I am using mobile devices as clients. When I get a string from method A in the web service, I can display it on the device screen without a problem. After that, if I send the same string that I've received from method A as a parameter to method B, the .NET code receives garbage instead of Turkish characters.
    At the moment I am encoding the Turkish characters on the client side, and decoding them in the .NET web server processing code.
    I'd like to try setting the encoding to UTF-8, but as I have written, I have not seen any way of doing this. Changing the properties file for the emulator is possible, but how can I do it for the target devices? I have not seen an API call for this purpose in the MIDP or CLDC docs. Thanks for your answer.
    Regards

  • Absolute UTL_SMTP Character encoding Frustration

    I am using UTL_SMTP to send emails from my DB. It works great when I am sending standard ASCII emails, but when I try to send Arabic emails (windows-1256), which corresponds to my database character encoding (NLS_LANG value ARABIC_SAUDI ARABIA.AR8MSWIN1256), all Arabic characters appear as question marks in my email, even though I set the Content-Type charset to windows-1256 as shown below.
    l_maicon :=utl_smtp.open_connection('domain',25);
    utl_smtp.helo(l_maicon,'domain');
    utl_smtp.mail(l_maicon,' [email protected]');
    utl_smtp.rcpt(l_maicon, [email protected]);
    utl_smtp.rcpt(l_maicon, [email protected]);
    utl_smtp.data(l_maicon,
    'Content-type:text/html; charset=windows-1256' || utl_tcp.crlf ||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || [email protected] || ';' || [email protected] || utl_tcp.crlf ||
    'Subject: Notification System: ' || utl_tcp.crlf ||
    'هذه الرسالة بالعربية');
    utl_smtp.quit(l_maicon);
    I am wondering if anyone has been across a similar problem.
    Thank you,
    Hussam Galal
    [email protected]

    I was misusing the function UTL_SMTP.write_raw_data(), as I was not calling the functions UTL_SMTP.open_data() before and UTL_SMTP.close_data() after it.
    Apparently the problem consists of two parts. The first part is writing the data correctly (8-bit encoding) from the database package UTL_SMTP; this is done using the function write_raw_data. The second part is telling the party handling the email about the character encoding of the email; this is done by adding a Content-Type header in the email header as follows:
    'Content-Type: text/plain; charset="windows-1256"'
    'Content-Transfer-Encoding: 8bit'
    When I write the message using the write_raw_data function, the To:, From: and Subject (in Arabic, exactly as desired) are received, but with no body; the body always appears empty (blank)!!
    When I send the exact same message using the function data(), the body is received, but the Arabic characters appear as question marks, which is expected as this function supports only standard ASCII.
    When I use the function write_data(), it gives exactly the same result as using data(), which I couldn't understand, as the documentation mentions that this function supports 8-bit character encoding!
    Below are the sample pieces of code which I am using.
    WRITE_RAW_DATA()
    utl_smtp.open_data(l_maicon);
    utl_smtp.write_raw_data(l_maicon,
    UTL_RAW.cast_to_raw(
    'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf||
    'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
    'Subject: Notification System: '||x.content_description || utl_tcp.crlf ||
                             x.notification_message));
    utl_smtp.close_data(l_maicon);
    Result: Subject is received with correct encoding but NO body.
    DATA()
    utl_smtp.data(l_maicon,
    'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf||
    'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
    'Subject: Notification System: '||x.content_description || utl_tcp.crlf ||
                                  x.notification_message );
    Result: Email is received but any Arabic characters appear as question marks.
    WRITE_DATA()
    utl_smtp.open_data(l_maicon);
    utl_smtp.data(l_maicon,
    'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf||
    'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
    'Subject: Notification System: '||x.content_description || utl_tcp.crlf ||
                                  x.notification_message);
    utl_smtp.close_data(l_maicon);
    Result: Email is received but any Arabic characters appear as question marks.
    Which function should I be using? If write_raw_data, then why is there no body, and if write_data, why is it not supporting 8-bit characters?

  • Character encoding: Ansi, ascii, and mac, oh my!

    I'm writing a program which has to search & replace data in user-supplied Rich Text documents (.rtf). Ideally, I would like to read the whole thing into a StringBuffer, so that I can use all of the functionality built into String and StringBuffer, and so that I can easily compare with constant Strings and chars.
    The trouble that I have is with character encoding. According to the rtf spec, RTFs can be encoded in four different character encodings: "ansi", "mac", IBM PC code page 437, and IBM PC code page 850, none of which are supported by Java (see http://impulzus.sch.bme.hu/tom/szamitastechnika/file/rtfspec/rtfspec_6.htm#rtfspec_8 for the RTF spec and http://java.sun.com/j2se/1.3/docs/api/java/lang/package-summary.html#charenc for the character encodings supported by Java).
    I believe, from a bit of googling, that they are all 8 bits/character, so I could read everything into a byte array and manipulate that directly. However, that would be rather nasty. I would have to be careful with the changes that I make to the document, so that I do not insert values that do not encode correctly in the document's character encoding. Overall, a large hassle.
    So my question is - has anyone done something like this before? Any libraries that will make my job easier? Or am I missing something built into Java that will allow me to easily decode and reencode these documents?

    DrClap, thanks for the response.
    If I could map from the encodings listed above (which are given in the RTF document) to a Java encoding name from the page that you listed, that would solve all my problems. However, there are a couple of problems:
    a) According to this page - http://orwell.ru/info/diffs.htm - ANSI is a superset of ISO-8859-1. That page isn't exactly authoritative, but I can't afford to lose data.
    b) I'm not sure what to do about the other character encodings. "mac" may correspond to "MacRoman" but that page lists a dozen or so other macintosh encodings. Gotta love crystal-clear MS documentation.
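    For what it's worth, a sketch of such a mapping (the keyword-to-charset pairs below are guesses to be verified against real documents, and Charset.isSupported guards against a JRE that lacks one of them):
    import java.nio.charset.Charset;
    import java.util.HashMap;
    import java.util.Map;
    public class RtfCharsets {
        // Guessed mapping from RTF charset keywords to Java charset names
        private static final Map<String, String> RTF_TO_JAVA = new HashMap<String, String>();
        static {
            RTF_TO_JAVA.put("ansi", "windows-1252"); // "ANSI" in RTF usually means Cp1252
            RTF_TO_JAVA.put("mac", "MacRoman");      // may be listed as x-MacRoman depending on the JRE
            RTF_TO_JAVA.put("pc", "Cp437");          // IBM PC code page 437
            RTF_TO_JAVA.put("pca", "Cp850");         // IBM PC code page 850
        }
        public static Charset forRtfKeyword(String keyword) {
            String name = RTF_TO_JAVA.get(keyword);
            if (name == null || !Charset.isSupported(name)) {
                throw new IllegalArgumentException("No charset available for \\" + keyword);
            }
            return Charset.forName(name);
        }
    }
    With such a table in place, the document bytes could be decoded with new String(bytes, RtfCharsets.forRtfKeyword("ansi")) and then manipulated as an ordinary String or StringBuffer.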

  • Some questions about configuration in MAX.

    Hello,everyone!
    I have some questions about configuration in MAX (I am a newcomer to motion control development), and I hope I can get your help.
    I use a PCI-7344 + UMI-7764 + servo amplifier + servo motor; my MAX version is 4.2 and I use NI-Motion 7.5.
    My questions are as follows:
    1. In Axis Configuration, for motor type, why must I select stepper and not servo? My motor is a servo motor! If I select servo, my motor can't run, and I don't know why.
    If I select stepper, the motor works, but I can't test the encoder in MAX.
    2. In Stepper settings, for stepper loop mode, why must I select open-loop and not closed-loop? If I select closed-loop, the servo motor doesn't work either.
    3. If I want my two servo motors to run at different velocities, how should I do that? It seems I can only set the same velocity in MAX for both servo motors.
    My English is poor, please pardon me! I come from China.
    Thank you for your help!
    EnquanLi
    Striving is without limit!

    Hi,Jochen,
    Thank you for your kindly help!
    The manufacturer of the drive and motor that I am using is Japan SANYO DENKI; the drive type is RS1A01AA and the motor type is R2AA06020FXP00.
    I use position control mode; the encoder's counts per revolution is 131072. I set the electronic gear ratio to 1:1 for the drive.
    Now I can use closed-loop to control the motor, but there are still some problems. When I configure it to run in closed-loop mode, the motors behave strangely and never move to the target position. The detailed situation is as follows:
    1. The motor can't run.
    2. Or the motor moves to a position, then moves in the same direction again and eventually stops.
    Apart from the two points mentioned above, a "Following Error" occurs frequently, and I don't know why.
    I am still not clear why I must set the motor type to stepper in MAX.
    And I have another question: what is the relationship between the steps and the counts? Are they proportional? I notice the help document says: For proper closed-loop and p-command operation, steps per revolution/counts per revolution must be in the range of 1/32,767 < steps/counts < 32,767. An incorrect counts to steps ratio can result in failure to reach the target position and erroneous closed-loop stepper operation.
    I am very sorry I have so many questions!
    I really appreciate your kind help! Thanks again!
    EnquanLi
    China
    Striving is without limit!

  • Question about the Documentation Tags for Source Code

    Hello,
    I have a question about CVI's automatic source code documentation. My problem is that it seems like you need to write all documentation for a specific tag on one line. If you don't, a line break will be inserted when the documentation is displayed. Suppose I want to write a large amount of documentation for the function itself, using the HIFN tag. If I don't want line breaks to be forced in the documentation, I need to write all this documentation on one single line, which kind of messes up my code. If I split the documentation over several HIFN tags, the documentation displayed to the user might look messed up because of all the line breaks. Is there any escape character I can put at the end of a line, allowing me to split the documentation over several HIFN lines without forcing line breaks in the documentation?
    Thanks!
    GEMIDIS - Innovating Display Technology
    HQ Ghent, Belgium

    This information is certainly useful. Note, however, that it can also be found in the documentation:
    Tag: /// HIFN help text
    Description: Specifies the help text for the function. Use multiple /// HIFN tags to display help text for the function on separate lines. To separate help text with an empty line, use /// HIFN on a line by itself. You also can use HTML tags, but you must enclose the tags in <HTML><BODY></BODY></HTML> tags.
    Example
    /// HIFN SampleFunction returns the value of a control.
    int SampleFunction (int controlID, ctrlType controlType, char label[], double *value)
         SomeAction;

  • XML Character Encoding Using UTL_DBWS

    Hi,
    I have a database with WINDOWS-1252 character encoding. I'm using UTL_DBWS to call a web service method which echoes a given string. For this purpose, I do the following:
    DECLARE
        v_wsdl CONSTANT VARCHAR2(500) := 'http://myhost/myservice?wsdl';
        v_namespace CONSTANT VARCHAR2(500) := 'my.namespace';
        v_service_name CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'MyService');
        v_service_port CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'MySoapServicePort');
        v_ping CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'ping');
        v_wsdl_uri CONSTANT URITYPE := URIFACTORY.getURI(v_wsdl);
        v_str_request CONSTANT VARCHAR2(4000) :=
    '<?xml version="1.0" encoding="UTF-8" ?>
    <ping>
        <pingRequest>
            <echoData>Dev Team üöäß</echoData>
        </pingRequest>
    </ping>';
        v_service UTL_DBWS.SERVICE;
        v_call UTL_DBWS.CALL;
        v_request XMLTYPE := XMLTYPE (v_str_request);
        v_response SYS.XMLTYPE;
    BEGIN
        DBMS_JAVA.set_output(20000);
        UTL_DBWS.set_logger_level('FINE');
        v_service := UTL_DBWS.create_service(v_wsdl_uri, v_service_name);
        v_call := UTL_DBWS.create_call(v_service, v_service_port, v_ping);
        UTL_DBWS.set_property(v_call, 'oracle.webservices.charsetEncoding', 'UTF-8');
        v_response := UTL_DBWS.invoke(v_call, v_request);
        DBMS_OUTPUT.put_line(v_response.getStringVal());
        UTL_DBWS.release_call(v_call);
        UTL_DBWS.release_all_services;
    END;
    /
    Here is the SERVER OUTPUT:
    ServiceFacotory: oracle.j2ee.ws.client.ServiceFactoryImpl@a9deba8d
    WSDL: http://myhost/myservice?wsdl
    Service: oracle.j2ee.ws.client.dii.ConfiguredService@c881d39e
    *** Created service: -2121202561 - oracle.jpub.runtime.dbws.DbwsProxy$ServiceProxy@afb58220 ***
    ServiceProxy.get(-2121202561) = oracle.jpub.runtime.dbws.DbwsProxy$ServiceProxy@afb58220
    Collection Call info: port={my.namespace}MySoapServicePort, operation={my.namespace}ping, returnType={my.namespace}PingResponse, params count=1
    setProperty(oracle.webservices.charsetEncoding, UTF-8)
    dbwsproxy.add.map: ns, my.namespace
    Attribute 0: my.namespace: xmlns:ns, my.namespace
    dbwsproxy.lookup.map: ns, my.namespace
    createElement(ns:ping,null,my.namespace)
    dbwsproxy.add.soap.element.namespace: ns, my.namespace
    Attribute 0: my.namespace: xmlns:ns, my.namespace
    dbwsproxy.element.node.child.3: 1, null
    createElement(echoData,null,null)
    dbwsproxy.text.node.child.0: 3, Dev Team üöäß
    request:
    <ns:ping xmlns:ns="my.namespace">
       <pingRequest>
          <echoData>Dev Team üöäß</echoData>
       </pingRequest>
    </ns:ping>
    Jul 8, 2008 6:58:49 PM oracle.j2ee.ws.client.StreamingSender _sendImpl
    FINE: StreamingSender.response:<?xml version = '1.0' encoding = 'UTF-8'?>
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Header/><env:Body><ns0:pingResponse xmlns:ns0="my.namespace"><pingResponse><responseTimeMillis>0</responseTimeMillis><resultCode>0</resultCode><echoData>Dev Team üöäß</echoData></pingResponse></ns0:pingResponse></env:Body></env:Envelope>
    response:
    <ns0:pingResponse xmlns:ns0="my.namespace">
       <pingResponse>
          <responseTimeMillis>0</responseTimeMillis>
          <resultCode>0</resultCode>
          <echoData>Dev Team üöäß</echoData>
       </pingResponse>
    </ns0:pingResponse>
    As you can see, the character encoding is broken in the request and in the response, i.e. the SOAP encoder does not take the UTF-8 encoding into consideration.
    I tracked down the problem to the method oracle.jpub.runtime.dbws.DbwsProxy.dom2SOAP(org.w3c.dom.Node, java.util.Hashtable); and more specifically to the calls of oracle.j2ee.ws.saaj.soap.soap11.SOAPFactory11.
    My question is: is there a way to make the SOAP encoder use the correct character encoding?
    Thanks a lot in advance!
    Greetings,
    Dimitar

    I found a workaround for the problem:
    v_response := XMLType(v_response.getBlobVal(NLS_CHARSET_ID('CHAR_CS')), NLS_CHARSET_ID('AL32UTF8'));
    Ugly, but I'm tired of decompiling and debugging Java classes ;)
    Greetings,
    Dimitar
