StAX & SAX writing character encoding?

hello,
I need to rip HTML tags & put them into XML files ...
before I write with
writer.writeStartElement(docXmlTagNames12[i]);
writer.writeCharacters(deTags[i]);
System.out.println("writing tag:"+deTags[i]);
writer.writeEndElement();
writing the tag gives: writing tag:Leden van het Comit��
and in the xml file the "��" is scrambled:
<notificatieAan>Leden van het Comit��</notificatieAan>
here it shows as a Japanese sign but for me it shows as a small rectangle ...
does anybody know how I should deal with this problem???
thanks

this is the whole code:
try {
    XMLOutputFactory factory = XMLOutputFactory.newInstance();
    XMLStreamWriter writer = factory.createXMLStreamWriter(
            new java.io.FileWriter(xmlFile.toString()));
    writer.writeStartDocument();
    writer.writeStartElement(rootTag1);
    for (int i = 1; i < docXmlTagNames12.length; i++) {
        writer.writeStartElement(docXmlTagNames12[i]);
        writer.writeCharacters(deTags[i]);
        System.out.println("writing tag:" + deTags[i]);
        writer.writeEndElement();
    }
    writer.writeEndElement();
    writer.writeEndDocument();
    writer.close();
} catch (XMLStreamException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
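
For what it's worth, the usual culprit in code like the above is new java.io.FileWriter(...), which always encodes with the platform default charset, while the StAX writer assumes/declares UTF-8 in the prolog, so anything outside ASCII (the é in "Comité") gets mangled. It is also worth checking that deTags[i] was read from the HTML with the right encoding in the first place. A minimal sketch of the encoding-safe variant, reusing the names from the code above and assuming the file should end up as UTF-8:

// (uses the same imports as above, plus java.io.FileOutputStream)
try {
    XMLOutputFactory factory = XMLOutputFactory.newInstance();
    // Hand the factory a raw OutputStream plus an explicit encoding; the factory
    // then encodes the characters itself and declares that encoding in the prolog.
    XMLStreamWriter writer = factory.createXMLStreamWriter(
            new java.io.FileOutputStream(xmlFile.toString()), "UTF-8");
    writer.writeStartDocument("UTF-8", "1.0");
    writer.writeStartElement(rootTag1);
    for (int i = 1; i < docXmlTagNames12.length; i++) {
        writer.writeStartElement(docXmlTagNames12[i]);
        writer.writeCharacters(deTags[i]);   // accented characters now survive intact
        writer.writeEndElement();
    }
    writer.writeEndElement();
    writer.writeEndDocument();
    writer.close();
} catch (XMLStreamException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}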

Similar Messages

  • When I load certain websites the writing is all squashed up. I correct this by changing the character encoding setting. I am using the latest Apple Mac machine. Thanks in advance

    When I load certain websites the writing is all squashed up. I correct this by changing the character encoding setting. I am using the latest Apple Mac machine. Thanks in advance

    Thanks for that information!
    I'm sure I will be calling AppleCare, but the problem is, they charge for the phone calls don't they? Because I don't have money to be spending to be on the phone with a support service.
    In other things, it seemed like the only time my MacBook was working was when I had Snow Leopard without the 10.6.8 update download that was supposed to be done to prepare for OS X Lion.
    When I look at the information of my HD it says that I have 10.6.8 but that was the install that it claimed to have failed and caused me to restart resulting in all of the repeated problems.
    Also, because my computer is currently down, and I've lost all files, how would that affect the use of my iPhone? Because if it doesn't get fixed by the time iOS 5 is released, how would I be able to upgrade?!

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with diskspace and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8 because, as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs from the Asian codepages. The first 128 bytes are all single byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double byte sequence giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte and those three bytes define the character. This goes up to 6 byte sequences. Using the MBCS (multi-byte character set) you can write the equivalent of every unicode character. And assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character that their text editor, using the codepage for their region, inserts as a single byte – a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that byte is now the first byte of a 2 byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte – an error.
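    To make that trip-up concrete, here is a minimal Java sketch (the word "Straße" and the in-memory round trip are just illustrative):
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class EncodingTripUp {
        public static void main(String[] args) {
            // A text editor using the windows-1252 codepage stores "ß" as the single byte 0xDF.
            byte[] savedByEditor = "Straße".getBytes(Charset.forName("windows-1252"));

            // A program that assumes UTF-8 sees 0xDF as the lead byte of a two-byte sequence;
            // the following 'e' (0x65) is not a valid continuation byte, so the decoder
            // substitutes the replacement character U+FFFD for it.
            String misread = new String(savedByEditor, StandardCharsets.UTF_8);
            System.out.println(misread);   // prints Stra?e (? = U+FFFD)
        }
    }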
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly, but what about inside your code? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding, it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that but at that time I wasn't even aware that character sets existed.
    They might only use that range but that is a different issue, especially since that range is exactly the same as the UTF8 character set anyways.
    >
    The computer industry started with diskspace and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years then a column with a size of 8 bytes is significantly different than one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8 because, as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs from the Asian codepages. The first 128 bytes are all single byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double byte sequence giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte and those three bytes define the character. This goes up to 6 byte sequences. Using the MBCS (multi-byte character set) you can write the equivalent of every unicode character. And assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of unicode, all unicode, is based on ASCII. The representational format of UTF8 is required to implement unicode, thus it must represent those characters. It uses the idiom supported by variable width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character that their text editor, using the codepage for their region, inserts as a single byte – a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that byte is now the first byte of a 2 byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte – an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it then it is invalid. End of story. It has nothing to do with html/xml.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in java with escaped unicode characters which will fail to compile.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business and create solutions that are appropriate to that. Thus there is absolutely no point for someone that is creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    And another example is with high volume systems moving/storing bytes is relevant. As such one must carefully consider each text element as to whether it is customer consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems incremental savings impact operating costs and marketing advantage with speed.

  • StAX reading/writing need help from xml guru plz

    hi, I have been told that StAX reading/writing should involve no overhead and that is why I use it, and I am now able to write my large data to file, but using my reader I seem to run out of memory. Using the NetBeans profiler I have found that char[] seems to be the problem;
    by backtracing I have found that javax.xml.parsers.SAXParser.parse calls the Xerces packages which eventually leads to the char[]. Now my code for my reader is attached here...
    package utilities;

    import Categorise.Collection;
    import Categorise.Comparison;
    import Categorise.TestCollection;
    import java.io.IOException;
    import javax.xml.parsers.*;
    import org.xml.sax.SAXException;
    import org.xml.sax.helpers.DefaultHandler;
    import org.xml.sax.Attributes;
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import measures.Protocol;

    /** @author dthomas */
    public class XMLParser extends DefaultHandler {

        static Collection collection = new Collection();
        List<Short> cList;
        List<Comparison> comparisonList;
        File trainFileName;
        File testFileName;
        TestCollection tc;
        List<TestCollection> testCollectionList;
        List<File> testFileNameList = new ArrayList<File>();
        List<File> trainFileNameList = new ArrayList<File>();
        boolean allTrainsAdded = false;
        Protocol protocol;
        List<File> trainingDirList;
        File testingDir;
        int counter = 0;
        File[] trainingDirs;
        File[] trainingFileNames;
        File[] testFileNames;
        TestCollection[] testCollections;
        Comparison[] comparisons;
        Comparison c;
        short[] cCounts;
        String order;
        String value;
        File trainDir;

        /** Creates a new instance of XMLParser */
        public XMLParser() {
        }

        public static Collection read( File aFile ) {
            long startTime = System.currentTimeMillis();
            System.out.println( "Reading XML..." );
            SAXParserFactory spf = SAXParserFactory.newInstance();
            SAXParser sp;
            try {
                sp = spf.newSAXParser();
                sp.parse( aFile, new XMLParser() );
            } catch (IOException ex) {
                ex.printStackTrace();
            } catch (SAXException ex) {
                ex.printStackTrace();
            } catch (ParserConfigurationException ex) {
                ex.printStackTrace();
            }
            long endTime = System.currentTimeMillis();
            long totalTime = ( endTime - startTime ) / 1000;
            System.out.println( "Done..."  + totalTime + " seconds" );
            return collection;
        }

        public void startElement(String uri, String localName, String qName, Attributes attributes) {
            if( qName.equals( "RE" ) ) {
                testCollectionList = new ArrayList<TestCollection>();
            } else if( qName.equals( "p" ) ) {
                boolean isConcatenated = new Boolean( attributes.getValue( "c" ) );
                boolean isStatic = new Boolean( attributes.getValue( "s" ) );
                protocol = new Protocol( isConcatenated, isStatic );
            } else if( qName.equals( "trdl" ) ) {
                trainingDirList = new ArrayList<File>();
            } else if( qName.equals( "trd" ) ) {
                trainDir = new File( attributes.getValue( "fn" ) );
                trainingDirList.add( trainDir );
            } else if( qName.equals( "td" ) ) {
                testingDir = new File( attributes.getValue( "fn" ) );
            } else if( qName.equals( "TC" ) ) {
                counter++;
                System.out.println( counter );
                comparisonList = new ArrayList<Comparison>();
                testFileName = new File( attributes.getValue( "tfn" ) );
                testFileNameList.add( testFileName );
                tc = new TestCollection();
                tc.setTestFileName( testFileName );
            } else if ( qName.equals( "r" ) ) {
                order = attributes.getValue( "o" );
                value = attributes.getValue( "v" );
                cList.add( Short.parseShort( order ), new Short( value ) );
            } else if( qName.equals( "c" ) ) {
                cList = new ArrayList<Short>();
                trainFileName = new File( attributes.getValue( "trfn" ) );
                if( !allTrainsAdded ) {
                    trainFileNameList.add( trainFileName );
                }
            }
        }

        public void characters(char[] ch, int start, int length) {
            //String str = new String(ch, start, length);
            //System.out.print(str);
        }

        public void endElement(String uri, String localName, String qName) {
            if( qName.equals( "c" ) ) {
                allTrainsAdded = true;
                cCounts = new short[ cList.size() ];
                for( int i = 0; i < cCounts.length; i++ ) {
                    cCounts[ i ] = cList.get( i );
                }
                c = new Comparison( trainFileName, tc );
                c.setcCounts( cCounts );
                this.comparisonList.add( c );
            } else if( qName.equals( "TC" ) ) {
                comparisons = new Comparison[ comparisonList.size() ];
                comparisonList.toArray( comparisons );
                tc.setComparisons( comparisons );
                testCollectionList.add( tc );
            } else if( qName.equals( "RE" ) ) {
                testCollections = new TestCollection[ testCollectionList.size() ];
                testCollectionList.toArray( testCollections );
                collection.setTestCollections( testCollections );
                testFileNames = new File[ testFileNameList.size() ];
                testFileNameList.toArray( testFileNames );
                collection.setTestingFiles( testFileNames );
                //String[] testCategories = new String[ testCategoryList.size() ];
                //testCategoryList.toArray( testCategories );
                //collection.setTestCategories( testCategories );
                trainingFileNames = new File[ trainFileNameList.size() ];
                trainFileNameList.toArray( trainingFileNames );
                collection.setTrainingFiles( trainingFileNames );
                //String[] trainingCategories = new String[ trainCategoryList.size() ];
                //trainCategoryList.toArray( trainingCategories );
                //collection.setTrainingCategories( trainingCategories );
                collection.setProtocol( protocol );
                trainingDirs = new File[ trainingDirList.size() ];
                trainingDirList.toArray( trainingDirs );
                collection.setTrainingDirs( trainingDirs );
                collection.setTestingDir( testingDir );
            }
            //else
            //    System.out.println("End element:   {" + uri + "}" + localName);
        }
    }
    I thought it may have been a recursion problem, hence having so many instance variables instead of local ones, but that hasn't helped.
    All I need at the end of this is a Collection which holds an array of TestCollections, which holds an array of cCounts, and I was able to hold all of this in memory, as all I am loading is what was in memory before it was written.
    can someone plz help
    ps when I use tail in unix to read the end of the xml file it doesn't work correctly as I cannot specify the number of lines to show; it shows all of the file as though it is not split into lines or contains no newline chars or anything, it is stored as one long stream, is this correct??
    here is a snippet of the xml file:
    <TC tfn="/homedir/dthomas/Desktop/4News2/output/split3/alt.atheism/53458">
    <c trfn="/homedir/dthomas/Desktop/4News2/output/split0/alt.atheism/53586">
    <r o="0" v="0"></r><r o="1" v="724"></r><r o="2" v="640"></r><r o="3" v="413"></r><r o="4" v="245"></r>
    <r o="5" v="148"></r><r o="6" v="82"></r><r o="7" v="52"></r><r o="8" v="40"></r><r o="9" v="30"></r>
    <r o="10" v="22"></r><r o="11" v="16"></r><r o="12" v="11"></r><r o="13" v="8"></r><r o="14" v="5"></r><r o="15" v="2"></r>
    </c>
    <c trfn="/homedir/dthomas/Desktop/4News2/output/split0/alt.atheism/53495">
    <r o="0" v="0"></r><r o="1" v="720"></r><r o="2" v="589"></r><r o="3" v="349"></r><r o="
    please, if anyone has any ideas from this code why a char[] would use 50% of the memory taken, and why the average age seems to show that the same one continues to grow..
    thanks in advance
    danny =)

    hi, I am still having no luck with reading the xml data back into memory. As I have said before, the NetBeans profiler is telling me it is a char[] that is using 50% of the memory, but I cannot see where a char[] is created; my code doesn't create one, so it must be the xml code... plz help

  • Setting character encoding in a Writer

    Hi,
    Is this possible?
    I'm reading from an InputStream (stream from a text file) using an InputStreamReader wrapped in a BufferedReader. I set the character encoding in the InputStreamReader.
    Then I read line by line - making some modifications in the text and writing (appending)
    each line into a new file (using a FileWriter wrapped in a BufferedWriter).
    I can't seem to set the character encoding in any of the Writers.
    Firstly, do I need to specify the encoding to 'keep' the correct encoding?
    And if so, what writer should I use?

    Yes you need to specify the encoding. Use an OutputStreamWriter in much the same way as you used the InputStreamReader.
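    A minimal sketch of that combination (the file names and the choice of UTF-8 are just placeholders for whatever encoding your input actually uses):
    import java.io.*;

    public class Recode {
        public static void main(String[] args) throws IOException {
            // Read with an explicit charset ...
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("in.txt"), "UTF-8"));
            // ... and write with an explicit charset; FileWriter offers no way to set one.
            BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line.replace("foo", "bar")); // whatever modification you need
                out.newLine();
            }
            in.close();
            out.close();
        }
    }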

  • org.xml.sax.SAXParseException: The encoding "ISO8859-1" is not supported.

    Please help. I am trying to parse an XML document using the DOMParser:
    try {
        DOMParser parser = new DOMParser();
        parser.parse("myFile.xml");
        document = parser.getDocument();
    } catch (SAXException e) {
        System.err.println(e);
    } catch (IOException e) {
        System.err.println(e);
    }
    I use the encoding "ISO8859-1" in myFile.xml:
    <?xml version='1.0' encoding='ISO8859-1' ?>
    But if I execute my program I get the following error:
    org.xml.sax.SAXParseException: The encoding "ISO8859-1" is not supported.
    How I can I resolve that?
    Many thanks!
    Yassin

    The Java API documentation (the bit about character encodings) mentions "ISO-8859-1" but not "ISO8859-1". Try that instead?

  • Absolute UTL_SMTP Character encoding Frustration

    I am using UTL_SMTP to send emails from my DB. It works great when I am sending standard ASCII emails, but when I try to send Arabic emails (windows-1256), which corresponds to my database character encoding NLS_LANG value ARABIC_SAUDI ARABIA.AR8MSWIN1256, all Arabic characters appear as question marks in my email even though I set the Content-Type charset to windows-1256 as shown below.
    l_maicon :=utl_smtp.open_connection('domain',25);
    utl_smtp.helo(l_maicon,'domain');
    utl_smtp.mail(l_maicon,' [email protected]');
    utl_smtp.rcpt(l_maicon, [email protected]);
    utl_smtp.rcpt(l_maicon, [email protected]);
    utl_smtp.data(l_maicon,
    'Content-type:text/html; charset=windows-1256' || utl_tcp.crlf ||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || [email protected] || ';' || [email protected] || utl_tcp.crlf ||
    'Subject: Notification System: ' || utl_tcp.crlf ||
    'هذه الرسالة بالعربية');
    utl_smtp.quit(l_maicon);
    I am wondering if anyone has come across a similar problem.
    Thank you,
    Hussam Galal
    [email protected]

    I was misusing the function UTL_SMTP.write_raw_data() as I was not calling the functions UTL_SMTP.open_data() before and UTL_SMTP.close_data() after it.
    Apparently the problem consists of two parts: the first part is writing the data correctly (8-bit encoding) from the database package UTL_SMTP, which is done using the function write_raw_data, and the second part is telling the party handling the email about the character encoding of the email, which is done by adding the Content-Type header in the email header as follows:
    'Content-Type: text/plain; charset="windows-1256"'
    'Content-Transfer-Encoding: 8bit'
    When I write the message using the write_raw_data function the To:, From: and Subject (in Arabic, exactly as desired) are received but with no body; the body always appears empty (blank)!!
    When I send the exact same message using the function data() the body is received but with Arabic characters appearing as question marks, which is expected as this function supports only standard ASCII.
    When I use the function write_data() it gives exactly the same result as using data(), which I couldn't understand as it's mentioned in the documentation that this function supports 8 bit character encoding!!!
    Below are the sample pieces of code which I am using
    WRITE_RAW_DATA()
    utl_smtp.open_data(l_maicon);
    utl_smtp.write_raw_data(l_maicon,
    UTL_RAW.cast_to_raw(
    'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf||
    'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
    'Subject: Notification System: '||x.content_description || utl_tcp.crlf ||
                             x.notification_message));
    utl_smtp.close_data(l_maicon);
    Result: Subject is received with correct encoding but NO body.
    DATA()
    utl_smtp.data(l_maicon,
    'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf||
    'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
    'Subject: Notification System: '||x.content_description || utl_tcp.crlf ||
                                  x.notification_message );
    Result: Email is received but any Arabic characters appear as question marks.
    WRITE_DATA()
    utl_smtp.open_data(l_maicon);
    utl_smtp.write_data(l_maicon,
    'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf||
    'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf||
    'From: [email protected]' || utl_tcp.crlf||
    'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
    'Subject: Notification System: '||x.content_description || utl_tcp.crlf ||
                                  x.notification_message);
    utl_smtp.close_data(l_maicon);
    Result: Email is received but any Arabic characters appear as question marks.
    Which function should I be using? If write_raw_data, then why is there no body, and if write_data, why is it not supporting 8-bit characters?

  • Character encoding: Ansi, ascii, and mac, oh my!

    I'm writing a program which has to search & replace data in user-supplied Rich Text documents (.rtf). Ideally, I would like to read the whole thing into a StringBuffer, so that I can use all of the functionality built into String and StringBuffer, and so that I can easily compare with constant Strings and chars.
    The trouble that I have is with character encoding. According to the rtf spec, RTFs can be encoded in four different character encodings: "ansi", "mac", IBM PC code page 437, and IBM PC code page 850, none of which are supported by Java (see http://impulzus.sch.bme.hu/tom/szamitastechnika/file/rtfspec/rtfspec_6.htm#rtfspec_8 for the RTF spec and http://java.sun.com/j2se/1.3/docs/api/java/lang/package-summary.html#charenc for the character encodings supported by Java).
    I believe, from a bit of googling, that they are all 8 bits/character, so I could read everything into a byte array and manipulate that directly. However, that would be rather nasty. I would have to be careful with the changes that I make to the document, so that I do not insert values that do not encode correctly in the document's character encoding. Overall, a large hassle.
    So my question is - has anyone done something like this before? Any libraries that will make my job easier? Or am I missing something built into Java that will allow me to easily decode and reencode these documents?

    DrClap, thanks for the response.
    If I could map from the encodings listed above (which are given in the rtf document) to a java encoding name from the page that you listed, that would solve all my problems. However, there are a couple of problems:
    a) According to this page - http://orwell.ru/info/diffs.htm - ANSI is a superset of ISO-8859-1. That page isn't exactly authoritative, but I can't afford to lose data.
    b) I'm not sure what to do about the other character encodings. "mac" may correspond to "MacRoman" but that page lists a dozen or so other macintosh encodings. Gotta love crystal-clear MS documentation.
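    In case it helps, a rough sketch of that mapping idea (the RTF keyword-to-charset table below is an assumption about which JDK charsets come closest, not something from the RTF spec, and charset availability varies by JRE):
    import java.nio.charset.Charset;
    import java.util.HashMap;
    import java.util.Map;

    public class RtfCharsetGuess {
        // \ansi, \mac, \pc, \pca are the four charset keywords from the RTF header.
        private static final Map<String, String> RTF_TO_JAVA = new HashMap<String, String>();
        static {
            RTF_TO_JAVA.put("ansi", "windows-1252"); // "ANSI" is usually closest to Cp1252
            RTF_TO_JAVA.put("mac",  "MacRoman");     // alias for x-MacRoman in most JDKs
            RTF_TO_JAVA.put("pc",   "Cp437");        // IBM PC code page 437
            RTF_TO_JAVA.put("pca",  "Cp850");        // IBM PC code page 850
        }

        /** Decode raw RTF bytes using the charset guessed from the header keyword. */
        public static String decode(byte[] rtfBytes, String rtfKeyword) {
            String javaName = RTF_TO_JAVA.get(rtfKeyword);
            if (javaName == null || !Charset.isSupported(javaName)) {
                throw new IllegalArgumentException("No usable charset for \\" + rtfKeyword);
            }
            return new String(rtfBytes, Charset.forName(javaName));
        }
    }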

  • XML Character Encoding Using UTL_DBWS

    Hi,
    I have a database with WINDOWS-1252 character encoding. I'm using UTL_DBWS to call a web service method which echoes a given string. For this purpose, I do the following:
    DECLARE
        v_wsdl CONSTANT VARCHAR2(500) := 'http://myhost/myservice?wsdl';
        v_namespace CONSTANT VARCHAR2(500) := 'my.namespace';
        v_service_name CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'MyService');
        v_service_port CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'MySoapServicePort');
        v_ping CONSTANT UTL_DBWS.QNAME := UTL_DBWS.to_qname(v_namespace, 'ping');
        v_wsdl_uri CONSTANT URITYPE := URIFACTORY.getURI(v_wsdl);
        v_str_request CONSTANT VARCHAR2(4000) :=
    '<?xml version="1.0" encoding="UTF-8" ?>
    <ping>
        <pingRequest>
            <echoData>Dev Team üöäß</echoData>
        </pingRequest>
    </ping>';
        v_service UTL_DBWS.SERVICE;
        v_call UTL_DBWS.CALL;
        v_request XMLTYPE := XMLTYPE (v_str_request);
        v_response SYS.XMLTYPE;
    BEGIN
        DBMS_JAVA.set_output(20000);
        UTL_DBWS.set_logger_level('FINE');
        v_service := UTL_DBWS.create_service(v_wsdl_uri, v_service_name);
        v_call := UTL_DBWS.create_call(v_service, v_service_port, v_ping);
        UTL_DBWS.set_property(v_call, 'oracle.webservices.charsetEncoding', 'UTF-8');
        v_response := UTL_DBWS.invoke(v_call, v_request);
        DBMS_OUTPUT.put_line(v_response.getStringVal());
        UTL_DBWS.release_call(v_call);
        UTL_DBWS.release_all_services;
    END;
    /
    Here is the SERVER OUTPUT:
    ServiceFacotory: oracle.j2ee.ws.client.ServiceFactoryImpl@a9deba8d
    WSDL: http://myhost/myservice?wsdl
    Service: oracle.j2ee.ws.client.dii.ConfiguredService@c881d39e
    *** Created service: -2121202561 - oracle.jpub.runtime.dbws.DbwsProxy$ServiceProxy@afb58220 ***
    ServiceProxy.get(-2121202561) = oracle.jpub.runtime.dbws.DbwsProxy$ServiceProxy@afb58220
    Collection Call info: port={my.namespace}MySoapServicePort, operation={my.namespace}ping, returnType={my.namespace}PingResponse, params count=1
    setProperty(oracle.webservices.charsetEncoding, UTF-8)
    dbwsproxy.add.map: ns, my.namespace
    Attribute 0: my.namespace: xmlns:ns, my.namespace
    dbwsproxy.lookup.map: ns, my.namespace
    createElement(ns:ping,null,my.namespace)
    dbwsproxy.add.soap.element.namespace: ns, my.namespace
    Attribute 0: my.namespace: xmlns:ns, my.namespace
    dbwsproxy.element.node.child.3: 1, null
    createElement(echoData,null,null)
    dbwsproxy.text.node.child.0: 3, Dev Team üöäß
    request:
    <ns:ping xmlns:ns="my.namespace">
       <pingRequest>
          <echoData>Dev Team üöäß</echoData>
       </pingRequest>
    </ns:ping>
    Jul 8, 2008 6:58:49 PM oracle.j2ee.ws.client.StreamingSender _sendImpl
    FINE: StreamingSender.response:<?xml version = '1.0' encoding = 'UTF-8'?>
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Header/><env:Body><ns0:pingResponse xmlns:ns0="my.namespace"><pingResponse><responseTimeMillis>0</responseTimeMillis><resultCode>0</resultCode><echoData>Dev Team üöäß</echoData></pingResponse></ns0:pingResponse></env:Body></env:Envelope>
    response:
    <ns0:pingResponse xmlns:ns0="my.namespace">
       <pingResponse>
          <responseTimeMillis>0</responseTimeMillis>
          <resultCode>0</resultCode>
          <echoData>Dev Team üöäß</echoData>
       </pingResponse>
    </ns0:pingResponse>As you can see the character encoding is broken in the request and in the response, i.e. the SOAP encoder does not take into consideration the UTF-8 encoding.
    I tracked down the problem to the method oracle.jpub.runtime.dbws.DbwsProxy.dom2SOAP(org.w3c.dom.Node, java.util.Hashtable); and more specifically to the calls of oracle.j2ee.ws.saaj.soap.soap11.SOAPFactory11.
    My question is: is there a way to make the SOAP encoder use the correct character encoding?
    Thanks a lot in advance!
    Greetings,
    Dimitar

    I found a workaround for the problem:
        v_response := XMLType(v_response.getBlobVal(NLS_CHARSET_ID('CHAR_CS')), NLS_CHARSET_ID('AL32UTF8'));
    Ugly, but I'm tired of decompiling and debugging Java classes ;)
    Greetings,
    Dimitar

  • XML parser not detecting character encoding

    Hi,
    I am using Jdeveloper 9.0.5 preview and the same problem is happening in our production AS 9.0.2 release.
    The character encoding of an xml document is not correctly being detected by the oracle v2 parser even though the xml declaration correctly contains
    <?xml version="1.0" encoding="ISO-8859-1" ?>
    instead it treats the document as UTF8 encoding which is fine until a document comes along with an extended character which then causes a
    java.io.UTFDataFormatException: Invalid UTF8 encoding.
    at oracle.xml.parser.v2.XMLUTF8Reader.checkUTF8Byte(XMLUTF8Reader.java:160)
    at oracle.xml.parser.v2.XMLUTF8Reader.readUTF8Char(XMLUTF8Reader.java:187)
    at oracle.xml.parser.v2.XMLUTF8Reader.fillBuffer(XMLUTF8Reader.java:120)
    at oracle.xml.parser.v2.XMLByteReader.saveBuffer(XMLByteReader.java:448)
    at oracle.xml.parser.v2.XMLReader.fillBuffer(XMLReader.java:2023)
    at oracle.xml.parser.v2.XMLReader.tryRead(XMLReader.java:972)
    at oracle.xml.parser.v2.XMLReader.scanXMLDecl(XMLReader.java:2589)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:485)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:192)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:144)
    as you can see it is explicitly using the XMLUTF8Reader to perform the read.
    I can get around this by hard coding the xml input stream to be processed by a reader
    XMLSource = new StreamSource(new InputStreamReader(XMLInStream,"ISO-8859-1"));
    however the manual documents that the character encoding is automatically picked up from the xml file and wrapping the stream in a reader is not necessary, so I should be able to write
    XMLSource = new StreamSource(XMLInStream)
    Does anyone else experience this same problem?
    Having to hardcode the encoding causes my software to lose flexibility.
    Jarrod Sharp.

    An XML document must actually be created (saved) in 'ISO-8859-1' encoding, not just declared as such, to be parsed as 'ISO-8859-1'.

  • What's the difference in character encoding between 1.4.0 and 1.4.2 in Linux

    As far as I can tell, the character encoding for Chinese in jdk1.4.2 is no longer the same as in jdk1.4.0.
    In jdk1.4.0, the character encoding used the "file.encoding" system property; we often set the
    property to "gb2312".
    But in jdk1.4.2, I find that the default character encoding no longer uses the "file.encoding" system property.
    Who knows the reason?
    Test Program:
    public class B {
        public static void main(String args[]) throws Exception {
            byte[] bytes = new byte[]{(byte)0xD6, (byte)0xD0, (byte)0xCE, (byte)0xC4};
            String s1 = new String(bytes);
            String s2 = new String(bytes, System.getProperty("file.encoding"));
            System.out.println("s1=" + s1 + " , s2=" + s2);
            System.out.println("s1.length=" + s1.length() + " , s2.length=" + s2.length());
        }
    }
    Run four times, the results are:
    [root@app15 component]# /usr/local/j2sdk1.4.0/bin/java -Dfile.encoding=ISO-8859-1 -cp . B
    s1=中文 , s2=中文
    s1.length=4 , s2.length=4
    [root@app15 component]# /usr/local/j2sdk1.4.0/bin/java -Dfile.encoding=gb2312 -cp . B
    s1=中文 , s2=中文
    s1.length=2 , s2.length=2
    [root@app15 component]# /usr/local/j2sdk1.4.2/bin/java -Dfile.encoding=ISO-8859-1 -cp . B
    s1=中文 , s2=中文
    s1.length=4 , s2.length=4
    [root@app15 component]# /usr/local/j2sdk1.4.2/bin/java -Dfile.encoding=gb2312 -cp . B
    s1=中文 , s2=??
    s1.length=4 , s2.length=2
    [root@app15 component]#

    I don't know for sure, but:
    -- The API documentation for String says that "new String(byte[])" uses "the platform's default charset".
    -- The API documentation for Charset says "The default charset is determined during virtual-machine startup and typically depends upon the locale and charset being used by the underlying operating system."
    You'll notice that it doesn't say anything about using the file.encoding system value, so presumably (based on your experiments) it doesn't. I did a search for "java default charset" and didn't find anything specific, but this site says "As of Java 1.4.1, the default Charset varies from platform to platform" and suggests you explicitly hard-code your charset. I would agree with that.
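    For what it's worth, a small sketch of both points (Charset.defaultCharset() needs Java 5+; on 1.4 you can only infer the default). The safe route is to name the charset the bytes were written in rather than relying on file.encoding at all:
    import java.nio.charset.Charset;

    public class DefaultCharsetDemo {
        public static void main(String[] args) throws Exception {
            byte[] bytes = new byte[]{(byte)0xD6, (byte)0xD0, (byte)0xCE, (byte)0xC4}; // "中文" in GB2312

            // Whatever the JVM picked as its default charset at startup
            // (derived from the OS locale, not necessarily from -Dfile.encoding).
            System.out.println("default charset: " + Charset.defaultCharset());

            // Decode with an explicit charset instead of the platform default.
            String s = new String(bytes, "GB2312");
            System.out.println(s + " length=" + s.length()); // prints the two Chinese characters, length=2
        }
    }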

  • Why differing Character Encoding and how to fix it?

    I have PRS-950 and PRS-350 readers, both since 2011.  
    In the last year, I've been getting books with Character Encoding that is not easy to read.  In playing around with my browsers and View -> Encoding menus, I have figured out that it has something to do with the character encoding within the epub files.
    I buy books from several ebook stores and I borrow from the library.
    The problem may be the entire book, but it is usually restricted to a few chapters, with rare occasion where the encoding changes within a chapter.  Usually it is for a whole chapter, not part, and it can be seen in chapters not consecutive to each other.
    It occurs whether the book is downloaded directly to my 950 reader or if I load it to either reader from my computer(s), which are all Mac OS X of several versions from 10.4 to Mountain Lion.  Since it happens when the book is downloaded directly, I figure the operating system of my computer is not relevant.
    There are several publishers involved, though Baen (no DRM ebooks) has not so far been one of them.
    If I look at the books with viewers on the computer, the encoding is the same.  I've read them in Calibre, in the Sony Reader App, and in Adobe Digital Editions 2.0.  It's always the same.
    I believe the encoding is inherent to the files.  I would like to fix this if I can to make the books I've purchased, many of them in paper and electronically, more enjoyable to read on my readers.
    Example: I’ve is printed instead of I've.
    ’ for apostrophe
    “ the opening of a quotation,
    â€?  for closing the quotation,
    and I think — is for a hyphen.
    When a sentence had “’m  for " 'm at the beginning of a speech (when the character was slurring his words) it took me a while to figure out how it was supposed to read.
    “’Sides, â€™tis only for a moon.  That ain’t long.â€?
    was in one recent book.
    Translation: " 'Sides, 'tis only for a moon. That ain't long."
    See what I mean? 
    Any ideas?

    Hi
    I wonder if it's possible to download a free ebook with such an issue, in order to run some tests.
    Perhaps it's possible, on free ebooks (without DRM), to add fonts by using software like Sigil.

  • How can I tell what character encoding is sent from the browser?

    Hi,
    I am developing a servlet which is supposed to be used to send and receive messages
    in multiple character sets. However, I read from the previous postings that each
    WebLogic Server can only support one input character encoding. Is that true?
    And do you have any suggestions on how I can do what I want? For example, I
    have an HTML form for people to post any comments (they may post in any character set,
    like Shift_JIS, Big5, GB, etc). I need to know what character encoding they are
    using before I can read that correctly in the servlet and save it in the database.

    From what I understand (I haven't used it yet) 6.1 supports the 2.3
    servlet spec. That should have a method to set the encoding.
    Otherwise, I don't think you can support multiple encodings in one
    instance of WebLogic.
    From what I know browsers don't give any indication at all about what
    encoding they're using. I've read some chatter about the HTTP spec
    being changed so it's always UTF-8, but that's a Some Day(TM) kind of
    thing, so you're stuck with all the stuff out there now which doesn't do
    everything in UTF-8.
    Sorry for the bad news, but if it makes you feel any better I've felt
    your pain. Oh, and trying to process multipart/form-data (file upload)
    forms is even worse and from what I've seen the API that people talk
    about on these newsgroups assumes everything is ISO-8859-1.
    Emmy Lau wrote:
    >
    Hi,
    I am developing a servlet which is supposed to be used to send and receive messages
    in multiple character sets. However, I read from the previous postings that each
    WebLogic Server can only support one input character encoding. Is that true?
    And do you have any suggestions on how I can do what I want? For example, I
    have an HTML form for people to post any comments (they may post in any character set,
    like Shift_JIS, Big5, GB, etc). I need to know what character encoding they are
    using before I can read that correctly in the servlet and save it in the database.
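    For reference, the Servlet 2.3 method mentioned above is ServletRequest.setCharacterEncoding(); a minimal sketch (the parameter name and the choice of UTF-8 are assumptions, and the form page itself has to be served with the same charset for the browser to submit it that way):
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CommentServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Must run before the first getParameter() call, otherwise the container
            // has already decoded the body with its default (usually ISO-8859-1).
            req.setCharacterEncoding("UTF-8");
            String comment = req.getParameter("comment");

            resp.setContentType("text/html; charset=UTF-8");
            resp.getWriter().println("Received: " + comment);
        }
    }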

  • Reading Advance Queuing with XMLType payload and JDBC Driver character encoding

    Hi
    I've got a problem retrieving the message from the queue with XMLType payload in Java.
    It was working fine in the 10g database but after the switch to 11g it returns a corrupted string instead of the real XML message. The database NLS_LANG setting is AL32UTF8.
    It is said that the JDBC driver should deal with that automatically but it obviously doesn't in this case. When I dequeue the message using database functionality (the DBMS_AQ package) it looks fine, but not when using the JDBC driver, so I think it is a character encoding issue or so. The message itself is enqueued by the database and is supposed to be retrieved by a dedicated EJB.
    Driver file used: ojdbc6.jar
    Additional libraries: aqapi.jar, xdb.jar
    All files were taken from the 11g database installation.
    What should I do to get the xml message correctly?

    Do you mean NLS_LANG is AL32UTF8 or the database character set is AL32UTF8? What is the database character set (SELECT value FROM nls_database_parameters WHERE parameter='NLS_CHARACTERSET')?
    Thanks,
    Sergiusz

  • Where is the Character Encoding option in 29.0.1?

    After the new layout change, the developer menu doesn't have the Character Encoding options anymore. Where the hell is it? It's driving me mad...

    You can also access the Character Encoding setting via the View menu (Alt+V C)
    See also:
    *Options > Content > Fonts & Colors > Advanced > Character Encoding for Legacy Content
