HTML Encoding of output with special language characters, i.e. umlauts

Greetings,
Application: Crystal Reports XI RDC Runtime engine.
Output format with error: HTML
Sample data row passed as a recordset to Crystal by MS SQL Server:
lahenduse tähtaja testimine;;High;Top;C;1;01/01/2010;05/04/2010;05/04/2010
HTML encoding:
lahenduse t?¤htaja testimine
When PDF or Excel output is selected, characters with umlauts, tildes, etc. over the letters display correctly.
When the output is HTML, the encoding is garbled.
Has there been a fix for this runtime engine regarding this issue?
Regards,
Douglas Davidson
Edited by: DouglasD on May 4, 2010 5:38 PM
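For what it is worth, the garbled "t?¤htaja" looks like the classic symptom of UTF-8 bytes being re-read in a single-byte Western code page; the stray "?" suggests one of the bytes was additionally replaced as unmappable somewhere downstream. A quick Python round-trip (an illustration of the pattern only, not a Crystal fix) reproduces it:
# Illustration only: the UTF-8 bytes for "ä" reinterpreted as Windows-1252.
text = "lahenduse tähtaja testimine"
utf8_bytes = text.encode("utf-8")       # ... t \xc3 \xa4 htaja ...
misread = utf8_bytes.decode("cp1252")   # 'lahenduse tÃ¤htaja testimine'
print(misread)
If the exported HTML really is UTF-8, the usual first check is that the page declares it (a charset=utf-8 meta tag) or is served with a matching Content-Type header, so the browser does not fall back to a Western code page.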

Ludek,
The following link http://resources.businessobjects.com/support/communityCS/FilesAndUpdates/crXI_rdc_merge_modules.zip would be the correct update pack link I am looking for.
http://resources.businessobjects.com/support/additional_downloads/runtime.asp
It is the runtime engine, not the Crystal developer, that is having the problem.
For completeness I have upgraded my developer CR XI Release 1 to SP4.
http://resources.businessobjects.com/support/additional_downloads/service_packs/crystal_reports_en.asp#CRXIR1
I will post again later today, once all the upgrading has been completed.
Doug

Similar Messages

  • Reading a text file with foreign language characters

I'm trying to convert foreign language characters to English-looking characters. I have code that works, but only if I hard-code a string with foreign language characters and pass it to the function. I cannot figure out how to get my program to read in the foreign characters from my file; they come in as garbage.
Since the function works when I pass a hard-coded string to it, I'm pretty sure the problem is the way I have the StreamReader set up; it's just not reading the characters correctly...
    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Dim FileRdr As StreamReader = New StreamReader("m:\test\charReplace.txt", System.Text.Encoding.UTF7)
Dim ReplaceWrtr As StreamWriter
ReplaceWrtr = System.IO.File.CreateText("M:\test\CharReplaceOut.txt")
    Do While FileRdr.Peek() >= 0
    Dim currentRec As String = FileRdr.ReadLine
    removeAccent(currentRec)
    ReplaceWrtr.WriteLine(currentRec)
    Loop
    ReplaceWrtr.Close()
    End Sub
    'Replace foreign language characters with English characters
    Function removeAccent(ByVal myString As String)
    Dim A As String = "--"
    Dim B As String = "--"
    Const AccChars As String = "ŠŽšžŸÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖÙÚÛÜÝàáâãäåçèéêëìíîïðñòóôõöùúûüýÿ"
    Const RegChars As String = "SZszYAAAAAACEEEEIIIIDNOOOOOUUUUYaaaaaaceeeeiiiidnooooouuuuyy"
    For i As Integer = 1 To Len(AccChars)
    A = Mid(AccChars, i, 1)
    B = Mid(RegChars, i, 1)
    myString = Replace(myString, A, B)
    Next
    removeAccent = myString
    End Function
    I know that removing the accent changes the meaning of the word, but this is what the user wants so it's what I need to do. 
    Any help is greatly appreciated!! :)
    Thanks!
    Joni

Finally got it to work. I had to remove the first 5 characters from the replacement string (ŠŽšžŸ); I couldn't find an encoding that would handle these, and to be honest, I didn't really need them. The important ones are still there; they were probably just overkill on my part.
    UTF7 worked for the rest...
    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    Dim FileRdr As StreamReader = New StreamReader("m:\test\charReplace.txt", System.Text.Encoding.UTF7)
    Dim ReplaceWrtr As StreamWriter
    ReplaceWrtr = System.IO.File.CreateText("M:\test\CharReplaceOut.txt")
    Do While FileRdr.Peek() >= 0
    Dim currentRec As String = FileRdr.ReadLine
    removeAccent(currentRec)
    ReplaceWrtr.WriteLine(currentRec)
    Loop
    ReplaceWrtr.Close()
    End Sub
'Replace foreign language characters with English characters
    Function removeAccent(ByRef myString As String)
    Dim A As String = "--"
    Dim B As String = "--"
    Const AccChars As String = "ÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖÙÚÛÜÝàáâãäåçèéêëìíîïðñòóôõöùúûüýÿ"
    Const RegChars As String = "AAAAAACEEEEIIIIDNOOOOOUUUUYaaaaaaceeeeiiiidnooooouuuuyy"
    For i As Integer = 1 To Len(AccChars)
    A = Mid(AccChars, i, 1)
    B = Mid(RegChars, i, 1)
    myString = Replace(myString, A, B)
    Next
    removeAccent = myString
    End Function
    Thanks for all your help!  Greatly appreciated :)
    -Joni
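A side note on the technique itself, not specific to Joni's files: the lookup-table replacement above can also be expressed with Unicode normalization, decomposing each accented letter and dropping the combining marks. A rough Python sketch of the same idea (the file names are just placeholders); characters that do not decompose, such as Ð, would still need a manual mapping:
import unicodedata

def remove_accent(text):
    # Decompose accented letters (NFD), then drop the combining marks.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

with open("charReplace.txt", encoding="utf-8") as src, \
     open("charReplaceOut.txt", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(remove_accent(line))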

  • Problem figuring out the encoding for filenames with special characters

    I'm not sure if this is the right forum, but this does seem like an OS issue.
    I brought in a lot of mp3 and m3u files from a Windows machine to my new Mac. Some of the mp3 files have accented characters in their names, and these names appear in the m3u files. But if I add the m3u file to iTunes, it fails to recognize these names and so I lose all the mp3's with special characters in their names.
I tried to fix this by grabbing the file names in Python, but that didn't work either!
    Here's an example: the file's name is "Voilà l'été.mp3"
The m3u file says "Voil\xe0 l'\xe9t\xe9.mp3" -- this doesn't work.
From os.listdir(), I get "Voila\xcc\x80 l'e\xcc\x81te\xcc\x81.mp3", but sticking it in an m3u file doesn't work either. (Note that here the characters are encoded as unaccented letter + two-byte code for the accent).
    When I try these strings from python, e.g. doing os.stat(), they both work; but iTunes doesn't understand any of them!
    I'd appreciate any hints on how to enter these names in the m3u file so that iTunes can read it. Thanks!

I know nothing about "m3u" files and how iTunes interprets the file names in them, but if it is not a relative/absolute path problem, then how about just putting the raw file names (not the ones with backslash escapes) in the m3u file? For example, just put
    Voilà l'été.mp3
    in m3u?
As for Unicode encoding, the HFS+ file system uses the "decomposed form" for accented characters. This means, as you write, à is hex "61 cc 80" in UTF-8, i.e., "a + COMBINING GRAVE ACCENT". The pre-composed form is hex "c3 a0". But my experience is that in most cases both pre-composed and decomposed forms work at the user level (not at the lowest file system level).
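If it helps, the composed/decomposed difference can also be neutralized when the m3u is rebuilt: normalize the decomposed names that HFS+ reports into precomposed (NFC) form before writing them out. A minimal Python sketch, assuming a flat folder of mp3 files (the folder path is made up); whether iTunes then accepts the playlist is a separate question, but it removes the NFD/NFC mismatch:
import os
import unicodedata

music_dir = "/Users/me/Music/import"   # hypothetical folder

with open(os.path.join(music_dir, "playlist.m3u"), "w", encoding="utf-8") as m3u:
    for name in sorted(os.listdir(music_dir)):
        if name.lower().endswith(".mp3"):
            # HFS+ reports decomposed names (base letter + combining accent);
            # write the precomposed form most players expect.
            m3u.write(unicodedata.normalize("NFC", name) + "\n")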

  • Problem inserting text with special Hungarian characters into MySQL database

When I insert text into my MySQL db, the special Hungarian
characters (ő, ű) change into "?".
    When I check the
    <cfoutput>#FORM.special_character#</cfoutput> it gives
    me the correct text, things go wrong just when writing it into the
    db. My hosting provider said the following: "please try to
    evidently specify "latin2" charset with "latin2_hungarian_ci"
    collation when performing any operations with tables. It is
    supported by the server but not used by default." At my former
hosting provider I had no such problem. Anyway, how could I do what
my hosting provider has suggested? I read a PHP-related article
that said to use "SET NAMES latin2". How could I do such a thing in
ColdFusion? Any suggestions? Besides, I've tried to use UTF8 and
Latin2 character encoding both on my pages and in the db, but with
    not much success.
    I've also read a French language message here in this forum
    that suggested to use:
    <cfscript>
    setEncoding("form", "utf-8");
    setEncoding("url", "utf-8");
    </cfscript>
    <cfcontent type="text/html; charset=utf-8">
    I' ve changed the utf-8 to latin2 and even to iso-8859-2 but
    didn't help.
    Thanks, Aron
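Not a ColdFusion-specific answer, but to show what the provider means by "SET NAMES": it is simply a statement executed on the connection before any insert, so that the client and the server agree on a character set. A rough Python sketch using the mysql-connector package (the connection details are invented, and whether utf8 or latin2 is right depends on how the table was created):
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret",
    database="test", charset="utf8")

cur = conn.cursor()
cur.execute("SET NAMES utf8")   # or 'latin2', as the hosting provider suggested
cur.execute("INSERT INTO chartest (txt) VALUES (%s)", ("őű",))
conn.commit()
In ColdFusion the same statement can presumably be issued in a <cfquery> at the top of the page, or, if the DSN uses the MySQL JDBC driver, by asking the host to add useUnicode=true&characterEncoding=UTF-8 to the connection string.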

I read that the most straightforward way would be to do
everything in UTF-8, because it handles special characters well, so
I've tried to set up a simple testing environment. Besides, I use CF
MX7 and my hosting provider creates the dsn for me, so I think the
db driver is JDBC, but I'm not sure.
    1.) In Dreamweaver I created a page with UTF-8 encoding set
    the Unicode Normalization Form to "C" and checked the include
    unicode signature (BOM) checkbox. This created a page with the meta
    tag: <meta http-equiv="Content-Type" content="text/html;
    charset=utf-8" />. I've checked the HTTP header with an online
    utility at delorie.com and it gave me the following info:
    HTTP/1.1, Content-Type: text/html; charset=utf-8, Server:
    Microsoft-IIS/6.0
2.) Then I put the following code at the top of my page,
before everything else:
    <cfprocessingdirective pageEncoding = "utf-8">
    <cfset setEncoding("URL", "utf-8")>
    <cfset setEncoding("FORM", "utf-8")>
    <cfcontent type="text/html; charset=utf-8">
    3.) I wrote some special Hungarian chars
    (<p>őű</p>) into the page and they displayed
    well all the time.
    4.) I've created a simple MySQL db (MySQL Community Edition
    5.0.27-community-nt) on my shared hosting server with phpMyAdmin
    with default charset of UTF-8 and choosing utf8_hungarian_ci as
default collation. Then I created a MyISAM table and the collation
was automatically applied to my varchar field into which I stored
    data with special chars. I've checked the properties of the MySQL
    server in MySQL-Front prog and found the following settings under
    the Variables tab: character_set_client: utf8,
    character_set_connection: utf8, character_set_database: latin1,
    character_set_results: utf8, character_set_server: latin1,
    character_set_system: utf8, collation_connection: utf8_general_ci,
    collation_database: latin1_swedish_ci, collation_server:
    latin1_swedish_ci.
    5.) I wrote a simple insert form into my page and tried it
    using both the content of the form field and a hardcoded string
    value and even tried to read back the value of the
#FORM.special_char# variable. In each case the special Hungarian
chars changed to "q" or "p" letters.
Can anybody see something wrong in the above, or
have an idea for something else to test?
I am thinking about trying this same page against a db on my
other hosting provider's MySQL server.
Here is the link to the form:
    http://209.85.117.174/pages/proba/chartest/utf8_1/form.cfm
    Thanks, Aron

  • Send purchase order via email (external send) with special Czech characters

    Hi all,
    I am sending a purchase order created with ME21N via email to the vendor using "external send".
    The mail is delivered without any problems, PO is attached to the mail as PDF file.
The problem is that special Czech characters such as "ž" or "š" are not displayed; a "#" (hash) appears instead.
    This problem occurs when language for PO output = EN.
Tests with language = CS worked out fine, but then the whole form incl. all texts is in Czech as well, so that is not a valid solution since it needs to be in English.
    We checked SAPCONNECT configuration and raised note 665947; this is working properly.
    When displaying the PO (ME23N) special characters are shown correctly as well.
    Could you please let me know how to proceed with that issue?!
    Thanks.
    Florian

    Hi!
    No, it's not a Unicode system.
It is maintained as:
Tar. Format: PDF, Lang.: EN (English), Output Format: (blank), Dev. type: PDF1
    Using this option, character "ž" was not displayed correctly, but "Ú" was ok.
The other Czech special characters have not been tested so far.
    Thanks,
    Florian
    Edited by: S. SCE - Stock Mngmnt on Aug 14, 2008 10:19 AM

  • Importing WORD document with special regional characters in RoboHelp X5

    Hello,
I have a problem when I'm importing a *.doc document. The
document is written in Slovene and it contains special regional
characters. Here is the deal:
I was using RoboHelp 4.0 before and I had this same problem.
What I did was: when I added a new topic (imported *.doc file) into
an existing project and the WebHelp was generated, in the web browser
I clicked the last added topic and opened its source code, where I
changed the charset from 1252 to 1250. That enabled the special
characters to be viewed correctly. When I imported some additional
topics and generated the WebHelp again, the program somehow "saved"
the 1250 setting in the previous topics and the characters were
shown correctly. I had to adjust 1250 only in the new topics that
were added before the last generate.
When I try to do the same in RoboHelp X5, this doesn't work.
The program doesn't "remember" the 1250 setting and it always generates
with the 1252 character setting. Which is a problem, because there are
a lot of topics and I would have to change the character setting
for every topic/doc document I added.
What can I do?
    Thanks in advance
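While waiting for a proper charset setting in X5, the manual edit described above (1252 to 1250 in the generated topic source) can at least be automated over the WebHelp output after each generate. A throwaway Python sketch; the output folder name and the exact charset string in the generated meta tag are assumptions, so adjust both to whatever RoboHelp actually writes:
import glob

# Rewrite the declared charset in every generated topic, working on raw
# bytes so the Slovene characters themselves are left untouched.
for path in glob.glob("WebHelp/**/*.htm", recursive=True):
    with open(path, "rb") as f:
        data = f.read()
    data = data.replace(b"charset=windows-1252", b"charset=windows-1250")
    with open(path, "wb") as f:
        f.write(data)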

    Hello Tiesto_ZT,
    Welcome to the Forum.
    I have no experience of using other languages in RH, but this
    problem was discussed in
    Thread.
    Check it out and post back if it doesn't fit your needs.
    Hope this helps (at least a bit),
    Brian

  • Printing Pro*C Outputs with Other Language Text and PCL

    Hi,
    Since I did not find any separate forum for Pro*C, I am posting my query in this forum.
In our application we are generating reports using Pro*C programs. These reports contain other-language characters such as Arabic and Thai along with printer control characters (PCL). Earlier, for reports that contained only English characters, we used a C program to print the document; now we cannot use the same C program because of the presence of other-language characters in the output (when the old C program is used for printing, the other-language characters are printed as junk).
Can anyone update me on the options that are available in C, VB, or any other third-party printing program to print documents generated in other languages as well as with printer control characters?
    Thanks in advance.
    Best Regards,
    Ganesan

Hey, thanks.
Does that mean I should put 1900 as the character set in that device type, assign it to the printer, and then print with that?
    Best
    Ganesh

  • FDM - Errors with Foreign Language Characters in Load Files

For our Netherlands data import file, our source account descriptions contain the character "ë". The account description is not loaded to HFM, but we are receiving error messages when importing. Is there a way to ignore foreign language characters? Another post suggested "Administration", "Configuration Settings", "File Encoding Type" and then setting it to "Unicode", but the version I am using does not have this option. The version is 11.1.1.3.
    Thank you!
    Melody

    Hello Melody,
    If you are using FDM v11.1.1.3.00 the file-encoding is defaulted to Unicode. It might be helpful to contact support.
It could be that your DB isn't set up to handle Unicode characters (in that case the information would be stored in the SQL*Loader or MSSQL logs). It could also be that the file you are trying to import actually isn't "Unicode" but more of a local ANSI/ASCII format.
Most 'European' characters were supported in that release, and I know of several clients that use it as expected.
    Thank you,
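To the last point about the file possibly being local ANSI rather than Unicode: that is easy to verify and, if necessary, to fix before importing. A small Python sketch; the file name and the assumed ANSI code page (cp1252 for Western European) are placeholders:
# A UTF-16 ("Unicode") text file normally starts with a byte-order mark.
with open("netherlands_load.txt", "rb") as f:
    head = f.read(2)
print("UTF-16 BOM found" if head in (b"\xff\xfe", b"\xfe\xff") else "probably ANSI")

# If it is ANSI, re-save it so characters like "ë" survive the import.
with open("netherlands_load.txt", encoding="cp1252") as src, \
     open("netherlands_load_utf16.txt", "w", encoding="utf-16") as dst:
    dst.write(src.read())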

  • Problem with special national characters

    Hi,
How can I configure Oracle Application Server 10g to correctly expose special national characters (ANSI 1250 Central European code page)?
It is hosted on Windows Server 2003, where the appropriate character resources are available.
    Thanks in advance
    KM

Check the available languages in SMLT (trn). In the example stated below, the characters coming from DI are Spanish characters, which are getting converted to Swedish ones.
    Please go through the following:
    Re: Japanese characters

  • Entering special language characters in JSF

    I am using JDeveloper 10.1.3.3 to develop a JSF/ADF application. My database is using the AL32UTF8 character set. If the following is entered into an af:inputText field....
    Podmiotowi w 2 okresie leczenia proszę
    ... when the record is saved it changes ...
proszę to prosz + (what I guess is the Unicode representation of the last character, "&#281")
... and this is what is being stored in the database. This would be OK, but when the record is displayed again in the browser, it is not resolving the Unicode back to the correct character.
    How do I allow the entry of these special characters and have them correctly displayed on the web page after saving ? I also have a similar problem with Hungarian and Polish.
    Many thanks,
    Brent Harlow

    Hi Shay,
    Thanks for that. I had set the compiler project properties and the locale of the browser but I was missing the encoding within the JSF page.
    Once I changed
    <jsp:directive.page contentType="text/html;charset=windows-1252"/>
    and
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1252"/>
    to use "text/html;charset=UTF-8", it worked a treat.
    Many thanks for the help !
    Cheers,
    Brent

  • I can't type text with special Spanish characters (Captivate 6)

    Hi there,
I've just downloaded Adobe Captivate 6 to try it out and found that I can't type some special characters widely used in the Spanish language: our accents or tildes (that's the Spanish word for them). This is what I can't type:
    Á or á
    É or é
    Í or í
    Ó or ó
    Ú or ú
    I've checked the fonts on my font manager, but that's not the problem. They're fine.
    I've also tried to paste some text with these characters from Firefox and it worked fine. You can see it on the image:
    So I guess it must be some annoying bug.
    Can anyone help me?
    Thanks in advance.
    Best regards,
    Iñaki

    Hi Lilybri,
I still have this problem with the accents in AC and I've just found that the problem comes from the double-byte characters (like the accents in Spanish).
    http://helpx.adobe.com/captivate/kb/double-byte-characters-cannot-input.html
BTW, I got no answer from Adobe and it's been more than 2 months since I registered the bug...
    Best regards!

  • Supporting Special Language Characters

    If you have had difficulty composing emails in languages other than English, please know that Verizon is working to quickly address the issue in both Message Center and the Rich Version of Classic Webmail.
    Additionally, you may want to confirm that your Operating System is configured to support your preferred language. You can do so by following the steps below (for Windows based Operating Systems):
    1. Start menu button
    2. Control Panel
    3. Regional & Language Options
    4. Select the language of your choice
    Thank you,
    Lorena


  • Special language characters in Urxvt

I can't make special characters for my language (Norwegian characters, to be specific) display in the urxvt terminal. Nothing happens when I try to type them. This works perfectly in xterm. Anyone got any advice? I went through the urxvt man page, but I couldn't spot anything relevant.

I figured out the problem, and I never would've guessed it. The problem seems to be with qingy. Whenever I start X from qingy with .xsession, special characters do not work in urxvt. If I launch it from a normal agetty, I can use all special characters in urxvt. I never would've thought qingy was the problem, but is there a way to fix this? Or should this be considered some kind of bug? I'd like to continue to use qingy, but only if I can fix this problem.
    EDIT: I should mention that if I launch Console from qingy and then run startx everything works. The only difference I know of is that startx uses .xinitrc, while qingy starts X using .xsession. Both files are identical, so I don't know what's causing this.
    Last edited by sablabra (2009-01-02 01:08:21)

  • Problem with special extra characters

    Hi Experts,
I'm working in SAP BW 7.0 using Data Integrator XI 3.1, with SAP BW as a target and BO DI as a source, and we have some problems with the load process.
I have problems with the conversion of special characters: characters that exist in DI arrive in BI as entirely different characters. For example, Ñ is replaced with Å, and so on with á, é, í, ó, ú, etc.
In SAP BI these characters were included via T-code RSKC (Maintenance of the permitted extra characters in BW) as special characters. In addition I tried to change the character set settings in the infopackage in order to use 1100 (SAP internal, like ISO 8859-1), but that doesn't work either.
The character encoding used in DI by default is utf-8.
Can anyone help me to solve this issue?
Thanks in advance.

    This worked in BW 3.5:
Use function module RSKC_CHAVL_OF_IOBJ_CHECK to convert special characters into Latin ones.
    John Hawk

  • Problems with Polish Language Characters

    I'm using version 5.5 of Output Designer.
    I've included Arial CE as a softfont in designer.  I can successfully display polish characters in a text field on my template, for example Wysyłka.
    When I run a Test Presentment, the text fields from my template still display properly in the resulting PDF file.
However, the Polish characters from the merged data in an fnf file get substituted with other characters.
    For example, in the fnf file I have
    ^Field zloty
    dwadzie
     ścia dziewięć tysięcy dziewięćset pięćdziesiąt siedem
    which becomes
    dwadzie               oecia dziewiêæ tysiêcy dziewiêæset piêædziesi¹t siedem
    in the PDF. 
    I have the field set to Arial CE font.  I've also tried running the Presentment with my Windows set to Polish but it didn't help.
    Any suggestions would be appreciated.
    Lynne
    Message was edited by: LynneWilson
(Sorry... the example Polish text lines should all be on just two lines, but every time I post it gets changed!)
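For what it is worth, the substitutions in the PDF look exactly like Polish CP-1250 bytes being read back as Western CP-1252, which points at the encoding of the merged .fnf data rather than at the Arial CE font on the template. A quick Python round-trip reproduces the garbled text character for character (ę becomes ê, ć becomes æ, ą becomes ¹; ś becomes œ, which often prints as "oe"):
# CP-1250 (Polish) bytes decoded as CP-1252 reproduce the PDF output.
original = "dziewięć tysięcy dziewięćset pięćdziesiąt siedem"
garbled = original.encode("cp1250").decode("cp1252")
print(garbled)   # dziewiêæ tysiêcy dziewiêæset piêædziesi¹t siedem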

Unfortunately, when it comes to fonts and such, I am more of a Mac guy and have never had to deal with font issues on any of the Windows machines I have used or currently use. Perhaps a Windows guru will come around soon and assist?
I know that when I put in characters that my Mac displays properly and then take them to a PC, I often get squares, like the ones below in my system info. On my Mac those show up as little Apple logos; on my work PC they are just squares, which is why I suspect a font issue.
    Patrick
