iReport and non-Latin chars

When printing through iReport with Russian characters, the report prints correctly, but when converting the same file to PDF or HTML etc., only the Latin characters are converted, not the Russian ones.
Does anyone know what is wrong?

I think this is not the right forum to ask such questions...
You should ask the producers of iReport (this is not their website!),
and when you ask, also provide more information about your problem, for example which OS you're running the product on.
Maybe it's a problem with lack of font support... maybe it's a problem with your PDF viewer, I don't know... but I know whose fault it is not: it is not my fault!
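
For what it's worth, the usual cause is that the PDF exporter falls back to a base-14 font with a Latin-only encoding. Below is a hedged sketch of the common fix using the JasperReports API; the report path and font file are illustrative assumptions, not details from the original post.

import net.sf.jasperreports.engine.design.JRDesignStyle;
import net.sf.jasperreports.engine.design.JasperDesign;
import net.sf.jasperreports.engine.xml.JRXmlLoader;

public class CyrillicPdfFix {
    public static void main(String[] args) throws Exception {
        JasperDesign design = JRXmlLoader.load("report.jrxml"); // hypothetical report
        JRDesignStyle style = new JRDesignStyle();
        style.setName("CyrillicDefault");
        style.setDefault(true);                // apply to all text elements by default
        style.setFontName("Arial");
        style.setPdfFontName("arialuni.ttf");  // a TTF that actually contains Cyrillic glyphs
        style.setPdfEncoding("Identity-H");    // Unicode horizontal encoding for the PDF exporter
        style.setPdfEmbedded(Boolean.TRUE);    // embed the font so viewers don't substitute it
        design.addStyle(style);
        // compile and export as usual; the same pdf* attributes can also be
        // set per text element in the JRXML or in iReport's font dialog
    }
}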

Similar Messages

  • KDE (or GNOME?) non-Latin chars input problem

    After a recent upgrade (pacman -Syu) I can no longer type non-Latin characters in GNOME apps (like gEdit, Evolution, Thunderbird etc.) under KDE. OpenOffice is still OK.
    Keyboard layout switching works (I can see the difference between US and GB layouts), but when switching to non-Latin layouts I get Latin characters instead of non-Latin ones.
    In KDE apps everything is fine.
    I have current KDE and GNOME installed.
    BTW, when trying to run the Epiphany web browser (under KDE again) I get an error message (not sure, but maybe this is related to the first problem):
    "Could not start GNOME Web Browser
    Startup failed because of the following error:
    Unable to determine the address of the message bus (try 'man
    dbus-launch' and 'man dbus-daemon' for help)"
    I have dbus in the DAEMONS line in rc.conf.
    Any suggestions?

    Fixed the first problem... a rollback of inputproto and libxi did the trick (not sure how safe that was).
    scarecrow, thank you for your reply, but it didn't help with the second problem. I changed DAEMONS to hal only, added dbus-python and even dbus-sharp (just in case; all other dbus-related stuff was already there) but no luck. Perhaps dbus-1.0 might be a solution, who knows.
    Thanks anyway.

  • iPhone: sectionIndexTitlesForTableView and non-ASCII chars

    You specify an array of strings for your section titles in your implementation of the UITableViewDataSource method sectionIndexTitlesForTableView. However, it seems that if you have a string with non-ASCII characters, it is left blank (for example the Korean string "누리마루"). Has anybody else encountered this issue?

    I'm bumping Dr. Chuck's thread for a similar problem with non-ASCII chars.
    The chars show up, but the sorting is a bit off.
    Example: A, Å, B, ... Z
    Should be: A, B, ... Z, Å, Ä, Ö
    In Swedish, Å is one of the last letters and should not be placed right after A despite looking similar.
    Any ideas?
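
    The thread is about the iPhone, but the underlying problem is locale-aware collation, which is easiest to illustrate in Java; a minimal sketch of the concept (not the iPhone API):

    import java.text.Collator;
    import java.util.Arrays;
    import java.util.Locale;

    public class SwedishSort {
        public static void main(String[] args) {
            String[] letters = { "Å", "B", "A", "Ö", "Z", "Ä" };
            // Plain code-point order would give [A, B, Z, Ä, Å, Ö];
            // a Swedish collator yields the expected [A, B, Z, Å, Ä, Ö].
            Arrays.sort(letters, Collator.getInstance(new Locale("sv", "SE")));
            System.out.println(Arrays.toString(letters));
        }
    }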

  • FILE_DATASTORE and non-ASCII chars

    I have created an interMedia Text index with the FILE_DATASTORE option, so that interMedia treats table entries as filenames and indexes the corresponding files on the server's filesystem.
    But whenever a filename contains characters which are not part of the US7ASCII charset, the file is not found, even though both Oracle and the operating system support these characters.
    The Oracle instance uses UTF8 as its internal character set. The client which stores the filenames in the table uses the WE8ISO8859P1 charset. The values in the database table are stored and shown correctly when viewed with Oracle or Java client programs.
    So where does the charset conversion fail? The names are stored correctly, and they can be read correctly by clients, but the indexer seems to use a wrong charset to translate the filenames stored in the database into filenames of the operating system.
    Do I have to apply some additional configuration to my indexer?
    Greetings,
    Michael Skusa
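
    The suspected failure mode is easy to demonstrate outside Oracle: a filename stored as UTF8 bytes but decoded under a different character set no longer names the file on disk. A minimal Java sketch of the effect (illustrative only, not Oracle's actual conversion path):

    import java.nio.charset.StandardCharsets;

    public class CharsetMismatch {
        public static void main(String[] args) {
            String filename = "Bücher.txt";
            // Store as UTF-8 bytes, then (wrongly) decode them as ISO-8859-1,
            // which is roughly what a UTF8/WE8ISO8859P1 mix-up amounts to.
            byte[] utf8 = filename.getBytes(StandardCharsets.UTF_8);
            String misread = new String(utf8, StandardCharsets.ISO_8859_1);
            System.out.println(misread); // Bücher.txt -> no such file on the OS
        }
    }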

  • Regex with strings that contain non-Latin chars

    I am having difficulty with a regex when testing for words that contain non-Latin characters (specifically Japanese; I haven't tested other scripts).
    My code:
    keyword = StringUtil.trim(keyword);
    //if (keywords.indexOf(keyword) == -1)
    regex = new RegExp("\\b" + keyword + "\\s*;", "i");
    if (!regex.test(keywords)) {
        Alert.show('"' + keywords + '" does not contain "' + keyword + '"');
        keywords += keyword + "; ";
    }
    Where keyword is
    日本国
    and keywords is
    Chion-in; 知恩院; Lily Pond; Bridge; 納骨堂; Nōkotsu-dō; Asia; Japan; 日本国; Nihon-koku; Kansai region; 関西地方; Kansai-chihō; Kyoto Prefecture; 京都府; Kyōto-fu; Kyoto; Higashiyama-ku; 東山区; Places;
    When the function is run, it alerts that keywords does not contain keyword, even though it does:
    "Chion-in; 知恩院; Lily Pond; Bridge; 納骨堂; Nōkotsu-dō; Asia; Japan; 日本国; Nihon-koku; Kansai region; 関西地方; Kansai-chihō; Kyoto Prefecture; 京都府; Kyōto-fu; Kyoto; Higashiyama-ku; 東山区; Places; " does not contain "日本国"
    Previously I was using indexOf, which doesn't have this problem, but I can't use that since it doesn't match the whole word.
    Is this a problem with my regex? Is there a modifier I need to add to enable Unicode support or something?
    Thanks
    Dave
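
    The likely culprit: \b is defined in terms of an ASCII-only \w, so there is never a "word boundary" next to a CJK character. ActionScript's RegExp has no flag to change that (matching an explicit delimiter such as (^|;\s*) is the usual workaround), but the behaviour and a fix are easy to sketch in Java, where the equivalent switch exists:

    import java.util.regex.Pattern;

    public class UnicodeBoundary {
        public static void main(String[] args) {
            String keywords = "Japan; 日本国; Kyoto; ";
            String re = "\\b" + Pattern.quote("日本国") + "\\s*;";
            // ASCII \w treats 日 as a non-word char, so \b never matches here.
            System.out.println(Pattern.compile(re).matcher(keywords).find()); // false
            // With UNICODE_CHARACTER_CLASS, \w (and therefore \b) covers CJK letters.
            System.out.println(Pattern.compile(re, Pattern.UNICODE_CHARACTER_CLASS)
                    .matcher(keywords).find()); // true
        }
    }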


  • Reading .txt file and non-English chars

    I added .txt files to my app for translations of text messages.
    The problem is that when I read the translations, non-English characters are read wrong on my Nokia. In the Sun Wireless Toolkit it works.
    The trouble is that I don't even know what encoding is expected by the phone:
    UTF-8, ISO Latin 2 or Windows CP1250?
    I'm using CLDC 1.0 and MIDP 1.0.
    What's the right way to do it?
    Here's what I have...
    String locale = System.getProperty("microedition.locale");
    String language = locale.substring(0, 2);
    String localefile = "lang/" + language + ".txt";
    InputStream r = getClass().getResourceAsStream("/lang/" + language + ".txt");
    byte[] filetext = new byte[2000];
    int len = 0;
    try {
        len = r.read(filetext);
    } catch (IOException e) {
        // ...
    }
    Then I get the translation with:
    value = new String(filetext, start, i - start).trim();
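
    A minimal sketch of the usual fix, assuming the .txt resources are saved as UTF-8 and the handset ships a UTF-8 decoder (most do): name the encoding explicitly instead of relying on the phone's default, which is typically what differs between the Wireless Toolkit and a real Nokia. The resource path below is illustrative.

    import java.io.InputStream;

    public class LoadTranslation { // sketch; in MIDP this would live in your MIDlet
        public static void main(String[] args) throws Exception {
            InputStream r = LoadTranslation.class.getResourceAsStream("/lang/en.txt");
            byte[] filetext = new byte[2000];
            int len = r.read(filetext);
            // CLDC's String(byte[], int, int, String) decodes with a named
            // charset; new String(bytes) silently uses the platform default.
            String text = new String(filetext, 0, len, "UTF-8");
            System.out.println(text);
        }
    }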

    Not sure what the issue is with the runtime. How are you outputting the file and accessing the lists? Here is a more complete sample:
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class Foo {
        final private List colons = new ArrayList();
        final private List nonColons = new ArrayList();

        static final public void main(final String[] args) throws Throwable {
            Foo foo = new Foo();
            foo.input();
            foo.output();
        }

        private void input() throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader("/temp/foo.txt"));
            String line = reader.readLine();
            while (line != null) {
                List target = line.indexOf(":") >= 0 ? colons : nonColons;
                target.add(line);
                line = reader.readLine();
            }
            reader.close();
        }

        private void output() {
            System.out.println("Colons:");
            Iterator itorColons = colons.iterator();
            while (itorColons.hasNext()) {
                String current = (String) itorColons.next();
                System.out.println(current);
            }
            System.out.println("Non-Colons");
            Iterator itorNonColons = nonColons.iterator();
            while (itorNonColons.hasNext()) {
                String current = (String) itorNonColons.next();
                System.out.println(current);
            }
        }
    }
    The output generated is:
    Colons:
    a:b
    b:c
    Non-Colons
    a
    b
    c
    My guess is that you are iterating through your lists incorrectly. But glad I could help.
    - Saish

  • GetFirstFile() and non-English chars

    Hi,
    When using GetFirstFile() / GetNextFile(), if a file is encountered with Chinese chars in its filename, each of these chars is replaced with a "?".
    As a result, I can't open the file as I don't know its full name.
    Does anyone know of a way around this? Some Windows SDK function maybe?
    cheers,
    Darrin.

    Hi Diz@work,
    Thanks for posting on the NI Discussion Forums.
    I have a couple of questions for you in order to troubleshoot this issue:
    Which language is your Windows operating system set to? Chinese or English?
    When you say that the filename returned contains '?' characters instead of the Chinese characters, do you mean you see this when you output to a message popup panel or print to the console? Are you looking at the values in fileName as you're debugging? Can you take a look at the actual numerical values in the fileName array and see which characters they map to? It's possible that the Chinese characters are being returned correctly, but the function you're using to output them doesn't understand the codes they use.
    Which function are you using to open the file with the fileName you get from GetFirstFile()? Can you take a look at what's being passed to it?
    CVI does include support for multi-byte characters. Take a look at this introduction:
    http://zone.ni.com/reference/en-XX/help/370051V-01/cvi/programmerref/programmingmultibytechars/
    As far as the Windows SDK goes, I did find that the GetFirstFile() and GetNextFile() functions are based on the Windows functions, FindFirstFile() and FindNextFile(). According to MSDN, these functions are capable of handling Unicode characters as well as ASCII:
    http://msdn.microsoft.com/en-us/library/windows/desktop/aa364418(v=vs.85).aspx
    There may be a discrepancy between how these functions are being called and/or what they're returning to the CVI wrapper functions.
    Frank L.
    Software Product Manager
    National Instruments
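
    Frank's theory is easy to reproduce outside CVI: converting a Unicode filename to a single-byte ANSI code page replaces every unmappable character with a literal '?'. A small Java sketch of that substitution (illustrative; CVI itself is C):

    public class AnsiQuestionMarks {
        public static void main(String[] args) throws Exception {
            String filename = "报告.txt"; // a filename with Chinese characters
            // Encoding to a Latin-only ANSI code page replaces unmappable
            // characters with the '?' byte, exactly as described above.
            byte[] ansi = filename.getBytes("windows-1252");
            System.out.println(new String(ansi, "windows-1252")); // ??.txt
        }
    }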

  • Problem with FreeHand eps and non-English chars

    Hi, I'm new to this forum and a newbie in FreeHand.
    I'm trying to make an eps file with FreeHand. This eps is for a fax cover; I made it and it works fine, but the problem is when I want to use a special, non-English character, for example Ñ.
    Does somebody know what I can do so that the eps I make in FreeHand permits special characters?
    Any idea?
    Thanks

    Special characters can't be done, as FreeHand doesn't support Unicode (extended letters with accents, etc.).
    This may help: http://freefreehand.org/forum/viewtopic.php?f=5&t=268

  • BUG?? UTF-8 non-Latin database chars in IR csv export file not exported right

    Hello,
    I have this issue: my database character set is UTF-8 (AL32UTF8), and a table used in an IR contains data that are Greek (non-Latin). While I can see them displayed correctly in the IR, and also via select in the Object Browser in SQL Workshop, when I try to Download as csv the produced csv does not have the Greek characters exported correctly, while the Latin ones are ok.
    The problem is the same whether I try IE or Firefox. Also, the export to HTML works successfully and I see the Greek characters there correctly!
    Is there any issue with UTF-8 and non-Latin characters in export to csv from IRs? Can someone confirm this, or does someone have a similar export problem with a UTF-8 DB and non-Latin characters?
    How could I solve this issue?
    TIA

    Hello Joel,
    thanks for taking the time to answer my issue. This does not work for my case, as the source of the data (the database character set) is UTF-8. The data inside the database that are shown in the IR on screen are UTF-8, and this is done correctly; you can see this in my example. The actual data in the database are from multiple languages (English, Greek, German, Bulgarian etc.), which is why I selected the UTF-8 character set when implementing the database: the requirement covered all character data. Unicode is also the character set Oracle suggests when you create a database that has to support data from multiple languages.
    The requirement is that what I see in the IR (I mean on the display) must be exported to a CSV file correctly, and this is what I expect the Download as CSV feature to achieve. I understand that you had Excel in mind when implementing this feature, but a CSV is just an easy way to export the data - a Comma Separated Values file - not necessarily meant to be opened directly in Excel. I also want to add that Excel can import data in UTF-8 encoding when importing from CSV, which is fine for my customer. Excel 2008 and later even understands a UTF-8 CSV file directly if you have placed the UTF-8 BOM character at the start of the file (well, it drops you into the wizard, but it's almost the same as importing).
    Since the feature you describe, if I understood correctly, always creates an ANSI-encoded file, even when the database character set is UTF-8, it is impossible to export correctly if I have data that are neither Latin nor among the other 128 country-specific characters I choose in the Globalization attributes - and these data are exactly what I see on the display and need to export to CSV. I believe that when the database character set is UTF-8, this feature should create a CSV file that is UTF-8 encoded and export correctly what I see on the screen, and I suspect that others would expect this behaviour too. Or at least you could allow/implement(?) this behaviour when Automatic CSV encoding is set to No. But I strongly believe - especially through the eyes of a user - that having different things on screen and in the resulting CSV file is a bug, not a feature.
    I would like to have comments on this from other people here too.
    Dionyssis
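
    The BOM trick mentioned above is straightforward when you generate a CSV yourself; a minimal sketch in Java (file name and content are illustrative):

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class Utf8Csv {
        public static void main(String[] args) throws Exception {
            Writer out = new OutputStreamWriter(new FileOutputStream("export.csv"), "UTF-8");
            out.write('\uFEFF'); // UTF-8 BOM so Excel auto-detects the encoding
            out.write("Όνομα,Τιμή\n");
            out.write("Διόνυσος,42\n");
            out.close();
        }
    }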

  • Question marks in non-Latin characters when searching with Google

    Hello,
    After installing Firefox I encountered a problem.
    I am on the Hebrew version of the browser, updated to the latest (33.1) version.
    When searching in the search bar with Google using non-Latin chars (Hebrew), I am taken to the Google search page, but instead of the search term I get question-mark symbols (???????). I then need to write the search term again (which then works).
    This doesn't happen with DuckDuckGo or Yahoo search.
    A solution would be appreciated (and also important for other users).
    Thank you

    I can't seem to duplicate this.  Are you typing in the same place I am? I see I only have 6.0.1...

  • UploadedFile and filenames with non-ASCII chars

    Hi
    I'm using an UploadedFile object in my web app, and all works fine. However, when I try to upload a file with a filename containing non-ASCII chars (e.g. Spanish), I see that the getBytes method returns an empty byte array, the filename is not stored correctly (the non-ASCII chars are lost, replaced by another representation), and the content-type is application/octet-stream instead of image/png as it is supposed to be.
    If I rename that same file to have only ASCII chars, everything is back to normal.
    How can I upload files with non-ASCII chars in their names?
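
    The "another representation" symptom usually means the multipart filename was decoded with the wrong charset. A hedged sketch of a common repair, valid only when the bytes really are UTF-8 mis-read as ISO-8859-1 (the UploadedFile class itself isn't shown in the original post):

    public class FixFilename {
        public static void main(String[] args) throws Exception {
            String raw = "imÃ¡gen.png"; // UTF-8 bytes mis-decoded as ISO-8859-1
            String fixed = new String(raw.getBytes("ISO-8859-1"), "UTF-8");
            System.out.println(fixed); // imágen.png
        }
    }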

    Hi, I'm back! I spent a few hours experimenting and found that everything is working great (including the creation of international non-ASCII folder names) when I used utf-8 encoding in the sieve filter rules for the match strings and the folder names... at least so far so good... for your reference, and sorry for bothering.

  • Non-Latin character sets and accented Latin characters with refind

    I need to use refind to deal with strings containing accented characters like žittâ lísu, but it doesn't seem to find them. Also, when using it with cyrillic characters, it won't find individual characters, but if I test for [\w] it'll work.
    I found a livedocs page that says CF uses the Java unicode standard for characters. Is it possible to use refind with non-Latin characters or accented characters, or do I have to write my own Java?

    ogre11 wrote:
    > I need to use refind to deal with strings containing accented characters like
    > žittâ lísu, but it doesn't seem to find them. Also when using it with cyrillic
    > characters, it won't find individual characters, but if I test for [\w] it'll
    > work.
    works fine for me using unicode data:
    <cfprocessingdirective pageencoding="utf-8">
    <cfscript>
    t = "Tá mé in ann gloine a ithe; Nà chuireann sé isteach nó amach orm";
    s = "á";
    writeoutput("search:=#t#<br>for:=#s#<br>found at:=#reFind(s,t,1,false)#");
    </cfscript>
    what's the encoding for your data?

  • Cannot create file with non-Latin characters - I/O

    I'm trying to create a file with Greek (or any other non-Latin) characters ... for use in a RegEx demo.
    I can't seem to create the characters. I'm thinking I'm doing something wrong with IO.
    The code follows. Any insight would be appreciated. - Thanks
    import java.util.regex.*;
    import java.io.*;

    public class GreekChars {
        public static void main(String[] args) throws Exception {
            int c;
            createInputFile();
    //      String input = new BufferedReader(new FileReader("GreekChars.txt")).readLine();
    //      System.out.println(input);
            FileReader fr = new FileReader("GreekChars.txt");
            while ((c = fr.read()) != -1)
                System.out.println((char) c);
        }

        public static void createInputFile() throws Exception {
            PrintStream ps = new PrintStream(new FileOutputStream("GreekChars.txt"));
            ps.println("\u03A9\u0398\u03A0\u03A3"); // omega, theta, pi, sigma
            System.out.println("\u03A9\u0398\u03A0\u03A3"); // omega, theta, pi, sigma
            ps.flush();
            ps.close();
            FileWriter fw = new FileWriter("GreekChars.txt");
            fw.write("\u03A9\u0398\u03A0\u03A3", 0, 4);
            fw.flush();
            fw.close();
        }
    }
    /*
    // using a PrintStream to create the file ... and a BufferedReader to read
    C:> java GreekChars
    // using a FileWriter to create the file ... and a FileReader to read
    C:> java GreekChars
    */

    Construct your file writer using a unicode format. If you don't, then the file is written using the platform "default" format - probably ascii.
    example:
    FileWriter fw = new FileWriter("GreekChars.txt", "UTF-8");

    I don't know what version of FileWriter you are using, but none that I know of take two string parameters. You should try checking the API before trying to help someone, instead of just making things up.
    To the OP:
    The proper way to produce a file in UTF-8 format would be this:
    OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream("filename"), "UTF-8");
    Then to read the file, you would use:
    InputStreamReader reader = new InputStreamReader(new FileInputStream("filename"), "UTF-8");

  • [Solved] Ctrl+V not working in LibreOffice on GNOME (non-Latin layout)

    When using gnome-shell with LibreOffice or Calligra, universal keyboard shortcuts such as Ctrl+V or Ctrl+Z don't do anything while on a non-Latin layout (Hebrew in my case).
    If I want to paste something with Ctrl+V, I have to change the layout to English and only then will it work.
    Under MATE the shortcuts work fine regardless of the layout in both applications (and all others I have tried).
    Under GNOME Shell all other applications I tried accept the shortcuts regardless of the layout (Firefox, gedit, Empathy, Nautilus).
    Does anyone have an idea as to what might cause this behavior when using GNOME?
    Thanks.
    EDIT: Solved for LibreOffice by removing the package "libreoffice-gnome". The UI is not as pretty now, but at least the keyboard shortcuts work.
    Last edited by shimi (2013-08-09 09:00:50)

    After months of switching layouts and banging my head against this bug, I thought I should check the LibreOffice settings (I'm using 4.1.5.3 now). What do you know? I did find something. And in just a few clicks.
    This is not a bug! It's simply a matter of configuration.
    For the regular keyboard shortcuts (like Ctrl+C, Ctrl+V, etc.) to remain operational in LibreOffice applications while using a non-Latin keyboard layout (like Greek or Russian), go to Tools -> Options -> Language Settings -> Languages, check the Ignore system input language option, save, and Bob's your uncle.
    Hope this helps.
    Cheers!
    PS
    Technically, though, shortcuts still remain language-dependent. This means that if you enable this option, you will have to set your document languages manually.

  • How to send non-Latin Unicode characters from a Flex application to a web service?

    Hi,
    I am creating an XML containing data entered by the user into a TextInput. The XML is then sent to HTTPService.
    I've tried this
    var xml : XML = <title>{_title}</title>;
    and this
    var xml : XML = new XML("<title>" + _title + "</title>");
    _title variable is filled with string taken from TextInput.
    When the user enters non-Latin characters (e.g. Russian), the web service responds that the XML contains characters that are not UTF-8.
    I ran a sniffer and found that non-printable characters are sent to the web service, like
    <title>����</title>
    How can I encode non-Latin characters to UTF-8?
    I have an idea to use ByteArray and the pair of functions readMultiByte / writeMultiByte (to write in the current character set and read UTF-8), but I need to determine the current character set Flex (or TextInput) is using.
    Can anyone help convert the characters?
    Thanks in advance,
    best regards,
    Sergey.

    Found the answer myself: set System.useCodePage to false.
