(nokia N8) Character encoding - reduced support (n...

I'm from Poland, and in my language I use special letters like ó, ż, ź, ą, ę. When I want to write a message and use one of the above letters, my message limit becomes 90 characters shorter. On older Nokia phones I changed the text message settings (Character encoding: Full support => Reduced support). When I change the same setting on the Nokia N8, it doesn't work! I always see the shorter message limit. Strangest of all, when I choose "Conversations" I don't have any problems and everything works fine, but when I choose "New message" I can't use reduced support (the option is on, but not working).

My software version: 011.012
Software version date: 2010-09-18
Release: PR1.0
Custom Version: 011.012.00.01
Custom version date: 2010-09-18
Language set: 011.012.03.01
Product code: 0599823

Talking about product code changes is prohibited in this forum. It is unofficial and is grounds for Nokia to refuse to repair or service your phone in any way.
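
For background on why the limit drops by exactly 90: a single SMS in the GSM 03.38 7-bit default alphabet holds 160 characters, but one character outside that alphabet forces the whole message into UCS-2, which holds only 70; the "reduced support" setting exists to keep messages within the 7-bit alphabet. A minimal Java sketch of the length rule (the alphabet set below is deliberately partial and illustrative):

import java.util.HashSet;
import java.util.Set;

public class SmsLimit {
  // Deliberately partial sample of the GSM 03.38 default alphabet.
  private static final Set<Character> GSM_7BIT = new HashSet<Character>();
  static {
    for (char c = 'A'; c <= 'Z'; c++) GSM_7BIT.add(c);
    for (char c = 'a'; c <= 'z'; c++) GSM_7BIT.add(c);
    for (char c = '0'; c <= '9'; c++) GSM_7BIT.add(c);
    GSM_7BIT.add(' '); GSM_7BIT.add('.'); GSM_7BIT.add(',');
  }

  static int singleMessageLimit(String text) {
    for (char c : text.toCharArray()) {
      if (!GSM_7BIT.contains(c)) return 70; // any outsider forces UCS-2
    }
    return 160; // the whole message fits the 7-bit alphabet
  }

  public static void main(String[] args) {
    System.out.println(singleMessageLimit("Hello")); // 160
    System.out.println(singleMessageLimit("Cześć")); // 70: ś is outside the alphabet
  }
}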

Similar Messages

  • Character encoding format in j2me

    Hi,
    I am new to this forum.
    I have some queries regarding Character encoding support in j2me.
    1. What character encoding formats are supported in J2ME?
    2. Does it vary from device to device, or is it the same on all J2ME devices?
    3. Do all J2ME devices support UTF-8?
    4. Do some devices support UTF-16 (or UTF-16BE/UTF-16LE)?
    5. If a device supports UTF-8, can we assume that it supports all languages (I am somewhat concerned about Chinese)?
    Eagerly waiting for feedback :)

    Not all devices have support for even UTF-8.
    Use this class I wrote for my projects. It is an implementation of Reader, so you can easily use it anywhere.
    /* Created on 12.03.2008 14:21 */
    package misc;

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.Reader;

    /**
     * Reader of a UTF-8 encoded string from bytes.
     * @author RFK
     */
    public class UTF8InputStreamReader extends Reader {
      InputStream is;

      /** Creates a new instance of UTF8InputStreamReader */
      public UTF8InputStreamReader(InputStream is) {
        this.is = is;
      }

      public int read(char[] cbuf, int off, int len) throws IOException {
        int b, b2, b3, r = 0;
        while (len > 0) {
          b = is.read();
          if (b < 0) break;
          if (b < 128) {                 // 1-byte sequence (ASCII)
            cbuf[off] = (char) b;
          } else if (b < 224) {          // 2-byte sequence: 110xxxxx 10xxxxxx
            b2 = is.read();
            cbuf[off] = (char) (((b & 0x1F) << 6) | (b2 & 0x3F));
          } else {                       // 3-byte sequence: 1110xxxx 10xxxxxx 10xxxxxx
            b2 = is.read();
            b3 = is.read();
            cbuf[off] = (char) (((b & 0x0F) << 12) | ((b2 & 0x3F) << 6) | (b3 & 0x3F));
          }
          r++;
          off++;
          len--;
        }
        // Per the Reader contract, return -1 at end of stream.
        return (r == 0 && len > 0) ? -1 : r;
      }

      public void close() throws IOException {
        is.close();
      }
    }

    Edited by: MagDel on Jun 1, 2009 10:33 PM
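    A possible usage sketch (the resource path is hypothetical; any UTF-8 byte stream works):

    import java.io.InputStream;
    import java.io.Reader;
    import misc.UTF8InputStreamReader;

    public class ReaderDemo {
      public static void main(String[] args) throws Exception {
        InputStream raw = ReaderDemo.class.getResourceAsStream("/text-utf8.txt");
        Reader reader = new UTF8InputStreamReader(raw);
        char[] buf = new char[256];
        int n = reader.read(buf, 0, buf.length);
        if (n > 0) System.out.println(new String(buf, 0, n));
        reader.close();
      }
    }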

  • Nokia 5500 sport RM 86 - character reduced support...

    Hello folks,
    I bought a 5500 sport, gray, branded for the operator Orange France.
    I was dissatisfied with it, and changed the product code to Balkans.
    I got the Serbian language.
    The product code is: 0533823.
    And with NSU, I reinstalled firmware version 4.60.
    I set Serbian as the writing language, then turned on reduced character support.
    However, character support for Serbian does not work at all.
    When I type our special characters, such as č, ć, ž, š and đ, the number of remaining characters automatically falls from 160 to 69.
    I was curious and changed the product code to Euro1, and reduced character support worked.
    I went back to the Balkans code, and it still does not work...
    I really like T9, and I cannot write SMS messages...
    Experts, please tell me: I found 12 product codes for the Balkans, but I don't know which one to use to make reduced character support work...
    Thank you very much.

    Talking about product code changes is prohibited in this forum. It is unofficial and is grounds for Nokia to refuse to repair or service your phone in any way.

  • iTunes support for foreign character encoding

    A friend burned two mix CDs for me to listen to on my move back to the US from Korea. The songs are Korean and have Korean title/album information. I thought I would import the songs into my iBook. When I add them to my library, however, a majority of them have unintelligible song information. Only about 25-30% of the songs import successfully in Korean font.
    Finder reads the CD with no problems. Looking through the disc shows all information clearly. Drop them into iTunes, however (or select "add to library"), and they get scrambled.
    I'm guessing this is a character encoding issue. I don't know where my friend got the tracks from, so I'll have to assume he got them from sources which copied them using different encoding methods. But if Finder can support them all, why can't iTunes? Is there a way I can adjust character support in iTunes? Or should I be looking at something else?
    1.2ghz iBook G4 1.25gb RAM Mac OS X (10.4.5) iTunes 6.0.3

    Try setting the encoding of your OutputStream to UTF-8.
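    The scrambled titles are the classic symptom of tag bytes written in one encoding being decoded as another. A minimal Java sketch of the effect (the title string is illustrative):

    public class MojibakeDemo {
      public static void main(String[] args) throws Exception {
        String title = "안녕하세요";                        // an illustrative Korean title
        byte[] tag = title.getBytes("EUC-KR");             // written in a legacy Korean codepage
        System.out.println(new String(tag, "ISO-8859-1")); // wrong charset: gibberish
        System.out.println(new String(tag, "EUC-KR"));     // right charset: intact
      }
    }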

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
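    For instance, a minimal Java sketch of Point 1, declaring the encoding in the file and encoding the bytes the same way (the file name is illustrative):

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class WriteXml {
      public static void main(String[] args) throws Exception {
        // The declaration and the byte encoding must agree.
        Writer w = new OutputStreamWriter(new FileOutputStream("out.xml"), "UTF-8");
        w.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        w.write("<greeting>ß</greeting>\n");
        w.close();
      }
    }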
    Now let's look at UTF-8, because between its status as the standard and the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double-byte sequence, giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every Unicode character, and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
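    A quick Java illustration of those sequence lengths (the sample characters are illustrative):

    public class Utf8Lengths {
      public static void main(String[] args) throws Exception {
        String[] samples = { "A", "ß", "中" }; // 1-, 2- and 3-byte characters in UTF-8
        for (String s : samples) {
          System.out.println(s + " -> " + s.getBytes("UTF-8").length + " byte(s)");
        }
        // Prints: A -> 1 byte(s), ß -> 2 byte(s), 中 -> 3 byte(s)
      }
    }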
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then use their text editor, which uses the codepage for their region, to insert a character like ß, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte – an error.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files, where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
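    In Java terms, Point 3 means avoiding the FileReader/FileWriter convenience classes (which silently use the platform default) and naming the charset explicitly; a sketch with an illustrative file name:

    import java.io.*;

    public class ExplicitCharset {
      public static void main(String[] args) throws Exception {
        Writer out = new OutputStreamWriter(new FileOutputStream("notes.txt"), "UTF-8");
        out.write("Grüße\n");
        out.close();

        BufferedReader in = new BufferedReader(
            new InputStreamReader(new FileInputStream("notes.txt"), "UTF-8"));
        System.out.println(in.readLine()); // Grüße, regardless of the platform default
        in.close();
      }
    }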
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble to the file.)
    OK, you're reading & writing files correctly, but what about inside your code? What about there? This is where it's easy – Unicode. That's what those encoders in the Java & .NET runtimes are designed to do. You read in and get Unicode; you write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    > This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    > If you write code that touches a text file, you probably need this.
    > Let's start off with two key items:
    > 1. Unicode does not solve this issue for us (yet).
    > 2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    > And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the UTF-8 character set anyway.
    > The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    > And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    > And for a while this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years, then a column with a size of 8 bytes is significantly different from one with 16 bytes.
    > Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    > Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    > Now let's look at UTF-8, because between its status as the standard and the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    > UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double-byte sequence, giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every Unicode character, and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode, as in all of Unicode, are based on ASCII. The representational format of UTF-8 is required to implement Unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    > But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then use their text editor, which uses the codepage for their region, to insert a character like ß, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte – an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    > Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    > Now, what about when the code you are writing will read or write a file? We are not talking binary/data files, where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know Java files have a default encoding - the specification defines it. And I am certain C# does as well.
    > Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    > Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble to the file.)
    > OK, you're reading & writing files correctly, but what about inside your code? What about there? This is where it's easy – Unicode. That's what those encoders in the Java & .NET runtimes are designed to do. You read in and get Unicode; you write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in Java with escaped Unicode characters which will fail to compile.
    > Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. Thus there is absolutely no point for someone who is creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    Another example: in high-volume systems, moving/storing bytes is relevant. As such, one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems, incremental savings impact operating costs and marketing advantage through speed.

  • How can I tell what character encoding is sent from the browser?

    Hi,
    I am developing a servlet which is supposed to be used to send and receive messages
    in multiple character sets. However, I read from previous postings that each
    WebLogic Server can only support one input character encoding. Is that true?
    And do you have any suggestions on how I can do what I want? For example, I
    have an HTML form for people to post any comments (they may post in any character set,
    like Shift-JIS, Big5, GB, etc.). I need to know what character encoding they are
    using before I can read it correctly in the servlet and save it in the database.

    From what I understand (I haven't used it yet), 6.1 supports the 2.3
    servlet spec. That should have a method to set the encoding.
    Otherwise, I don't think you can support multiple encodings in one
    instance of WebLogic.
    From what I know, browsers don't give any indication at all about what
    encoding they're using. I've read some chatter about the HTTP spec
    being changed so it's always UTF-8, but that's a Some Day(TM) kind of
    thing, so you're stuck with all the stuff out there now which doesn't do
    everything in UTF-8.
    Sorry for the bad news, but if it makes you feel any better, I've felt
    your pain. Oh, and trying to process multipart/form-data (file upload)
    forms is even worse, and from what I've seen the API that people talk
    about on these newsgroups assumes everything is ISO-8859-1.
    Emmy Lau wrote:
    > Hi,
    > I am developing a servlet which is supposed to be used to send and receive messages
    > in multiple character sets. However, I read from previous postings that each
    > WebLogic Server can only support one input character encoding. Is that true?
    > And do you have any suggestions on how I can do what I want? For example, I
    > have an HTML form for people to post any comments (they may post in any character set,
    > like Shift-JIS, Big5, GB, etc.). I need to know what character encoding they are
    > using before I can read it correctly in the servlet and save it in the database.
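    As the reply says, Servlet 2.3 added a method to set the request encoding. A sketch of its use (the form field name is hypothetical, and the charset must match the page the form was served from):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CommentServlet extends HttpServlet {
      protected void doPost(HttpServletRequest req, HttpServletResponse res)
          throws ServletException, IOException {
        // Must run before the first getParameter() call; otherwise the
        // container has already decoded the body with its default encoding.
        req.setCharacterEncoding("UTF-8");
        String comment = req.getParameter("comment"); // hypothetical form field
        // ... save the comment to the database ...
      }
    }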

  • Some Japanese characters encoded to "?". Please help me...

    My system is listed below:
    J2SE 1.4.1
    MySQL Ver 11.18 Distrib 3.23.52
    Resin 2.1.4
    The Java application loads an HTML document encoded in 'SHIFT_JIS' using HttpURLConnection,
    and reads the document as 'SHIFT_JIS'.
    Almost everything appears properly, but some characters are printed as '?'.
    hm....
    I will show my source code:
    private String readDocument(URL url) {
      String METHOD_NM = ".readDocument()";
      try {
        HttpURLConnection URLCon = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(
            new InputStreamReader(url.openStream(), "SJIS"));
        String inputLine;
    On my web server, I input the character (the one printed as '?') into a textbox in IE6 (Japanese language pack) and submit; the character is then translated to a code (ex> #4575; ).
    The code is inserted into the DB, and selecting the tuple from the DB
    and displaying it in IE6 is OK.
    But when it is loaded from the Japanese HTML, '?' is inserted into the DB,
    and '?' is displayed in IE6.
    I want to translate the character to a code (ex> #5455;).
    OK?
    Please reply...
    p.s. I'm sorry for my poor English.

    Thank you for the reply,
    but I've already tried that.
    I have tried all the Japanese encodings in "Supported Encodings" from Java:
    http://java.sun.com/j2se/1.3/docs/guide/intl/encoding.doc.html
    I want to convert that.
    ex)
    ?fe? => & # 4532; fe & # 3456; => original character with the blanks removed
    This is converting '?' to a code number.
    In this case '?' is a Japanese character.
    Please let me know the way...
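    One possibility worth checking, offered as an assumption rather than a confirmed diagnosis: in the JDK, "SJIS" does not map the NEC/IBM extension characters found on many Windows-produced pages, while "MS932" (windows-31j) does, so the narrower charset turns them into '?'. A sketch of the read with the wider charset (the URL is illustrative):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class ReadSjisPage {
      public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/page.html"); // illustrative URL
        // "MS932" decodes Windows vendor characters that plain "SJIS" cannot.
        BufferedReader in = new BufferedReader(
            new InputStreamReader(url.openStream(), "MS932"));
        String line;
        while ((line = in.readLine()) != null) System.out.println(line);
        in.close();
      }
    }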

  • Character Encoding for IDOC to JMS scenario with foreign characters

    Dear Experts,
    The scenario is described as follows:
    Issue Description:
    There is an IDOC which is created after extracting data from different countries (but only one country at a time). So, for instance, the first time the data is picked in Greek and Latin and the corresponding IDOC is created and sent to PI; the next time plain English is sent to PI; and next Chinese; and so on. As of now, every time this IDOC reaches PI it comes with UTF-8 character encoding, as seen in the IDOC XML.
    I am converting this IDOC XML into a single-string flat file (currently taking the default encoding, UTF-8) and sending it to the receiver JMS queue (MQ Series). Now when this data is picked up by the end recipient from the corresponding queue in MQ Series, they see ? wherever there are Greek/Latin characters (maybe because those should have a different encoding, like ISO-8859-7). This is causing issues at their end.
    My Understanding:
    The SAP system should trigger the IDOC with the right code page, i.e. if the IDOC is sent with Greek/Latin, the code page should be ISO-8859-7; if this same IDOC is sent with Chinese characters, the corresponding code page; else UTF-8 or the default code page.
    Once this is sent correctly from SAP, the Java mapping would have to use the correct code page when writing the bytes to the output stream, and then we would also need to set the right code page as a JMS header before putting the message in the JMS queue, so that the receiver can interpret it.
    Queries:
    1. Is my approach for the scenario correct? If not, please guide me to the right approach.
    2. Does SAP support a different code page being picked for the same IDOC based on a different data set? If so, how is it achieved?
    3. What is the JMS header property to set the right code page? (I think there should be some JMS header defined by MQ Series for character encoding which I should be setting correctly.) I find that there is a property to set the CCSID in the JMS receiver adapter, but that only refers to non-ASCII names and doesn't refer to the payload content.
    I would appreciate it if anybody could give me pointers on how to resolve this issue.
    Thanks,
    Pratik

    Hi Pratik,
         http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42?quicklink=index&overridelayout=true
    This link might help.
    regards
    Anupam
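    On the Java mapping step described in the question, the essential point is that the bytes must be produced with the code page the receiver expects. A minimal sketch of just that step (the method and parameter names are illustrative, not the PI mapping API):

    import java.io.OutputStream;

    public class PayloadWriter {
      // Illustrative helper: encode the flat-file string with an explicit
      // code page before writing it to the output stream.
      static void writePayload(String flatFile, OutputStream out, String codePage)
          throws Exception {
        out.write(flatFile.getBytes(codePage)); // e.g. "ISO-8859-7" for Greek data
        out.flush();
      }
    }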

  • Character Encoding in XML

    Hello All,
    I am not clear about how to solve this problem.
    We have a Java application on NT that is supposed to communicate with the same application on an MVS mainframe through XML.
    We have a character encoding for the XML commands we send for communication.
    The problem is, on MVS the parser is not understanding the US-ASCII character encoding, and so we are getting the infamous "illegal character" error.
    The mainframe's file.encoding=CP1047, and
    NT's file.encoding=us-ascii.
    Is there any character encoding that is common to these two machines, mainframe and NT?
    If it is Unicode, what is the correct notation for it?
    Or is there any way of specifying to the parsers which character encoding should be used?
    thanks,
    Sridhar

    On the mainframe end, maybe something like:
    FileInputStream fris = new FileInputStream("C:\\whatever.xml");
    InputStreamReader is = new InputStreamReader(fris, "ASCII"); // or maybe "us-ascii", "US-ASCII"
    BufferedReader brin = new BufferedReader(is);
    Or give the input stream / buffered reader to whatever application you are using to parse the XML. The InputStreamReader should allow you to set your encoding even if the system doesn't have the native encoding. It depends, though, on which/whose JVM you are using; JDK 1.2 at least supports the encodings listed on this page: http://as400bks.rochester.ibm.com/pubs/html/as400/v4r4/ic2924/info/java/rzaha/javaapi/intl/encoding.doc.html
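    On the question of telling the parser which encoding to use: with SAX, the InputSource can carry the encoding explicitly, which overrides any guessing. A sketch (the file name is illustrative):

    import java.io.FileInputStream;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.InputSource;
    import org.xml.sax.helpers.DefaultHandler;

    public class ParseWithDeclaredEncoding {
      public static void main(String[] args) throws Exception {
        InputSource src = new InputSource(new FileInputStream("whatever.xml"));
        src.setEncoding("US-ASCII"); // tell the parser instead of letting it guess
        SAXParserFactory.newInstance().newSAXParser().parse(src, new DefaultHandler());
      }
    }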

  • When I load certain websites the writing is all squashed up. I correct this by changing the character encoding setting. I am using the latest Apple Mac machine. Thanks in advance

    When I load certain websites the writing is all squashed up. I correct this by changing the character encoding setting. I am using the latest Apple Mac machine. Thanks in advance

    Thanks for that information!
    I'm sure I will be calling AppleCare, but the problem is, they charge for the phone calls, don't they? Because I don't have money to spend on being on the phone with a support service.
    In other news, it seemed like the only time my MacBook was working was when I had Snow Leopard without the 10.6.8 update that was supposed to be installed to prepare for OS X Lion.
    When I look at the information for my HD it says that I have 10.6.8, but that was the install that claimed to have failed and caused me to restart, resulting in all of the repeated problems.
    Also, because my computer is currently down and I've lost all my files, how would that affect the use of my iPhone? Because if it doesn't get fixed by the time OS 5 is released, how would I be able to upgrade?!

  • Character Encoding in Input Processor

    hi,
    I am running Portal 4.0. I am using the <portlet:form> tag to create a
    form. I send the data to an Input Processor. The IP grabs the parameter
    with request.getParameter("xyz"). Inside the IP the character encoding does
    not work. I need support for the German umlauts. These characters are not
    displayed properly. How can I set the character encoding inside the IP?
    Regards
    Michael

    On the form processing page, you probably need to call request.setCharacterEncoding("UTF-8") (or whatever encoding you're using) before reading any values.
    Reply #14 of this post has some test page with Chinese which should work as is...
    http://forum.java.sun.com/thread.jspa?forumID=513&threadID=546863

  • Character Encoding in Crystal Reports

    Hi,
    We are using HTML character encoding to prevent SQL injection, i.e. when a special character like an apostrophe (') is entered, it is saved as &#39;. I wanted to know if we could manipulate the data between the two operations of fetching it from the DB and displaying it using Crystal.
    We could use formulae/functions in the design/query, but I am looking for a more generic solution using the Crystal API. I have attached a screenshot of how Mc'Donald is displayed in the Report.
    Thanks for the help!
    Noopur

    All you need to do is format the field in CR to use HTML Text Interpretation. However note that CR does not support all HTML tags. For details see the following KBAs:
    1217084 - What are the supported HTML tags and attributes for the HTML Text Interpretation in Crystal Reports?
    1272676 - Not all the standard HTML tags are interpreted in Crystal Reports when using HTML Text Interpretation
    - Ludek
    Senior Support Engineer AGS Product Support, Global Support Center Canada

  • Absolute UTL_SMTP Character encoding Frustration

    I am using UTL_SMTP to send emails from my DB. It works great when I am sending standard ASCII emails, but when I try to send Arabic emails (windows-1256), which corresponds to my database character encoding (NLS_LANG value ARABIC_SAUDI ARABIA.AR8MSWIN1256), all Arabic characters appear as question marks in my email, even though I set the Content-Type charset to windows-1256, as shown below.
    l_maicon := utl_smtp.open_connection('domain', 25);
    utl_smtp.helo(l_maicon, 'domain');
    utl_smtp.mail(l_maicon, '[email protected]');
    utl_smtp.rcpt(l_maicon, '[email protected]');
    utl_smtp.rcpt(l_maicon, '[email protected]');
    utl_smtp.data(l_maicon,
      'Content-type: text/html; charset=windows-1256' || utl_tcp.crlf ||
      'From: [email protected]' || utl_tcp.crlf ||
      'To: ' || '[email protected]' || ';' || '[email protected]' || utl_tcp.crlf ||
      'Subject: Notification System: ' || utl_tcp.crlf ||
      'هذه الرسالة بالعربية');
    utl_smtp.quit(l_maicon);
    I am wondering if anyone has come across a similar problem.
    Thank you,
    Hussam Galal
    [email protected]

    I was misusing the function UTL_SMTP.write_raw_data(), as I was not calling UTL_SMTP.open_data() before it and UTL_SMTP.close_data() after it.
    Apparently the problem consists of two parts. The first part is writing the data correctly (8-bit encoding) from the database package UTL_SMTP; this is done using write_raw_data. The second part is telling the party handling the email about the character encoding of the email; this is done by adding the following to the email header:
    'Content-Type: text/plain; charset="windows-1256"'
    'Content-Transfer-Encoding: 8bit'
    When I write the message using the write_raw_data function, the To:, From: and Subject (in Arabic, exactly as desired) are received, but with no body; the body always appears empty (blank)!!
    When I send the exact same message using the data() function, the body is received, but any characters in Arabic appear as question marks, which is expected, as this function supports only standard ASCII.
    When I use the write_data() function it gives exactly the same result as using data(), which I couldn't understand, as it's mentioned in the documentation that this function supports 8-bit character encoding!!!
    Below are the sample pieces of code which I am using.
    WRITE_RAW_DATA()
    utl_smtp.open_data(l_maicon);
    utl_smtp.write_raw_data(l_maicon,
      UTL_RAW.cast_to_raw(
        'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf ||
        'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf ||
        'From: [email protected]' || utl_tcp.crlf ||
        'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
        -- Note: no blank line (a second utl_tcp.crlf) separates the headers from
        -- the body here, so the message text is still inside the header block,
        -- which would explain the empty body.
        'Subject: Notification System: ' || x.content_description || utl_tcp.crlf ||
        x.notification_message));
    utl_smtp.close_data(l_maicon);
    Result: Subject is received with correct encoding, but NO body.
    DATA()
    utl_smtp.data(l_maicon,
      'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf ||
      'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf ||
      'From: [email protected]' || utl_tcp.crlf ||
      'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
      'Subject: Notification System: ' || x.content_description || utl_tcp.crlf ||
      x.notification_message);
    Result: Email is received, but any Arabic characters appear as question marks.
    WRITE_DATA()
    utl_smtp.open_data(l_maicon);
    -- Note: as transcribed, this block actually calls data(), not write_data(),
    -- which would explain why it behaves exactly like the DATA() variant.
    utl_smtp.data(l_maicon,
      'Content-Type: text/plain; charset="windows-1256"' || utl_tcp.crlf ||
      'Content-Transfer-Encoding: 8bit' || utl_tcp.crlf ||
      'From: [email protected]' || utl_tcp.crlf ||
      'To: ' || x.admin_email || ';' || x.manager_email || utl_tcp.crlf ||
      'Subject: Notification System: ' || x.content_description || utl_tcp.crlf ||
      x.notification_message);
    utl_smtp.close_data(l_maicon);
    Result: Email is received, but any Arabic characters appear as question marks.
    Which function should I be using? If write_raw_data, then why is there no body? And if write_data, why is it not supporting 8-bit characters?

  • What character encoding standard does Tuxedo use

    Hi,
    I am trying to resolve a problem with communication between Tuxedo 6.4 and Vitria.
    It seems that there is a problem with the translation of special characters. Does
    anyone know what encoding standard Tuxedo uses?
    Thanks.

    Thanks Scott. Actually, I was asked the following question by Vitria Technical Support;
    can you help?
    "XDR (External Data Representation) is a protocol used by BEA Tuxedo's
    communication engine. XDR handles data format transformations when passing
    messages across dissimilar processor architectures.
    This is not the equivalent of Character Encoding. I specifically need the
    Character Encoding used. I am not sure where your admin needs to check for
    this - it might even be set at the OS level. I suspect that it will be
    something like ISO-8859-1 or some derivative."
    Thanks.
    Scott Orshan <[email protected]> wrote:
    > Within a machine, TUXEDO just sends the bytes that you give it. When it
    > goes between machines, it uses XDR to encode the data values for
    > transmission. There is no character set translation going on, unless you
    > are going to an EBCDIC machine. (If you are using data encryption
    > [tpseal] in TUXEDO 7.1 or 8.0, your data may be encoded even if it stays
    > on the same machine type.)
    > Scott Orshan
    > BEA Systems
    Richard Astill wrote:
    > Hi,
    > I am trying to resolve a problem with communication between Tuxedo 6.4 and Vitria.
    > It seems that there is a problem with the translation of special characters. Does
    > anyone know what encoding standard Tuxedo uses?
    > Thanks.

  • Adding new character encoding to PBP

    Is there any way to add a new character encoding in PBP 1.1?
    I want it to support Japanese encoding.

    1) Derive MyDateFormat from SimpleDateFormat, only allowing the default constructor.
    2) Override the public void applyPattern(String pattern) method so that it detects the 'Q' and replaces it with some easily identifiable pattern involving the month (say "MM"), and then call the superclass applyPattern method.
    3) Override the public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) method such that it first calls the superclass method to get the formatted output and then corrects this output by replacing (using regular expressions) the "01", "02" etc. with the appropriate quarter.
    You might do better not to actually derive a new class from SimpleDateFormat, but just create a class which uses SimpleDateFormat, as sketched below.
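    A rough sketch of that wrapper idea (all names illustrative; note that a plain replace would also touch a 'Q' inside quoted literals):

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class QuarterDateFormat {
      private final SimpleDateFormat delegate;

      public QuarterDateFormat(String pattern) {
        // Replace the custom 'Q' token with a marked month placeholder.
        delegate = new SimpleDateFormat(pattern.replace("Q", "'#Q'MM"));
      }

      public String format(Date date) {
        // Post-process the marked month back into a quarter number.
        Matcher m = Pattern.compile("#Q(\\d{2})").matcher(delegate.format(date));
        StringBuffer out = new StringBuffer();
        while (m.find()) {
          int quarter = (Integer.parseInt(m.group(1)) - 1) / 3 + 1;
          m.appendReplacement(out, "Q" + quarter);
        }
        m.appendTail(out);
        return out.toString();
      }

      public static void main(String[] args) {
        // e.g. prints something like "2009-Q2" for a date in May
        System.out.println(new QuarterDateFormat("yyyy-Q").format(new Date()));
      }
    }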
