Problem with Non-English Fields Output to PDF by JASPER in JDev10.1.3

I am using .jrxml files (designed in iReport) to generate PDF reports out of an Oracle database.
The non-English fields are shown correctly when I output the report to HTML or when I view it with JasperViewer.
If I try making PDF files (JasperExportManager.exportReportToPdfFile), the static fields containing e.g. Arabic/Chinese characters are not displayed, and dynamic fields from the database with non-English contents are shown as ??? or null.
I received some suggestions about using PARAMETERS to feed the report instead of FIELDS, which I think cannot help in this case or in general.
I think this must be a common problem. These are the components I am using:
itext-1.4.7.jar
commons-digester-1.7.zip
jasperreports-1.2.8.jar
Any comment or help is appreciated.
Thanks
Farbod
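
For reference, here is a minimal sketch of the export path described above, assuming a compiled .jasper file and a plain JDBC connection (the driver class, URL and file names are placeholders, not values from this thread). Since HTML output and JasperViewer render the text, the usual place to look is the PDF-specific font attributes of the text elements (pdfFontName, pdfEncoding, isPdfEmbedded), which only affect this exporter.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;

public class PdfExportSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the Oracle data source.
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "user", "password");

        // Fill the compiled report; "report.jasper" stands in for the real design.
        JasperPrint print = JasperFillManager.fillReport(
                "report.jasper", new HashMap(), conn);

        // The call from the post: only this PDF export loses the Arabic/Chinese text,
        // while HTML output and JasperViewer show it correctly.
        JasperExportManager.exportReportToPdfFile(print, "report.pdf");

        conn.close();
    }
}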


Similar Messages

  • [SOLVED!] On USB drives, problems with non-English chars and HAL

    Hello,
    I am having a problem with non-English characters (áãàçéẽê...) on files stored on my USB drive.
    On Windows they're created with the correct name, but on Linux the files have the non-English characters replaced by '?' and are not accessible.
    If I manually mount the drives using 'mount -o iocharset=utf8 /dev/sdb1 /media/usbdisk' the characters are OK, so I think I just need to get HAL to pass the correct parameters to mount. However, I don't know how to do that and haven't found any good solution.
    I tried to build a custom kernel setting the default charset as UTF-8 and it didn't work.
    Any ideas? I'm using x86-64, HAL 0.5.13-3 and my locale is pt-BR.UTF-8.
    Thanks!
    EDIT: Actually, this is not a HAL problem, but a problem with 'exo'. For the solution, I edited /etc/xdg/xfce4/mount.rc and added iocharset=utf8 to the [vfat] category.

    I don't use Thunar presently, but I looked in the Thunar Volume Manager doc and I didn't find anything to change the mount options of removable drives. I am not quite sure if it's possible or not. Maybe someone using it can tell for sure.
    But if it is not possible to change the mount options, a possible solution is to disable the Thunar Volume Manager plugin and to use something else more configurable to manage the automount function.
    Personally I use the halevt package from AUR, which uses configuration files in XML format.
    It's not so easy to use, but it is highly configurable.
    Other tools exist as well, though.
    I can help you with halevt if you choose that way...

  • Problem with non-English characters

    Hi,
    I'm using JRockit 1.5.0_03. I have a problem with pages with non-English characters. Is it possible to change certain properties of the JVM like "user.country", "file.encoding" or "user.language"? If so, how can I change them?
    Thanks in advance

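    These properties are normally supplied as -D options when the JVM is launched rather than changed from inside a running program, since core classes read them at startup. A small sketch (the class name is just an example) to confirm what the running VM actually sees:
    // Run with, for example:
    //   java -Duser.language=en -Duser.country=US -Dfile.encoding=UTF-8 ShowLocaleProps
    public class ShowLocaleProps {
        public static void main(String[] args) {
            System.out.println("user.language = " + System.getProperty("user.language"));
            System.out.println("user.country  = " + System.getProperty("user.country"));
            System.out.println("file.encoding = " + System.getProperty("file.encoding"));
        }
    }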

  • [AS] Problem with non English characters in file path

    I wrote a script that exports a PDF file from ID, rasterizes it in PS, applies an action, saves it as another PDF file, and finally creates a Mail message and attaches the file to it (the last part is written in AppleScript).
    The problem is that it doesn't work when the path to this file contains non-English characters.
    This works:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Tetard/Test.pdf"}
    but this doesn't:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Têtard /Test.pdf"}
    I vaguely remember reading somewhere that AppleScript can work with Unicode, in other words with such characters, starting from some version; I don't remember which exactly, but I think it was Leopard.
    I am on Mac OS X 10.4.11 right now. Will updating solve this problem? Does anybody know any solution to this problem: a scripting addition, some hidden setting, etc.?
    I made a little test: I used a Russian character, ё, and it works, but when I use ê (Dutch) it doesn't. Might it have something to do with the Region setting in the International panel?
    Thanks in advance,
    Kasyan

    Kasyan, as of Leopard AppleScript treats all text as Unicode; before that you can specify 'as Unicode text'. Try a test with these.
    -- Leopard
    set x to POSIX path of (path to desktop)
    -- Pre Leopard
    set x to POSIX path of (path to desktop as Unicode text)
    -- Leopard
    set x to POSIX path of (choose file without invisibles)
    -- Pre Leopard
    set x to POSIX path of ((choose file without invisibles) as Unicode text)

  • Safari Problem with Non-English websites

    Hello everybody,
    I ran into a strange problem: I do not know why I cannot browse some non-English websites correctly, like کانادا مهاجرت or مهاجرت کانادا or مهاجرت به کانادا. It looks messy. Could I make a change in the HTML code to make it compatible with Safari?
    Thank you

    Taleo's job search is flat out broken on Safari. Clicking the "search" button does absolutely nothing. The latest Firefox is also broken.
    The only conclusion I can draw is that Taleo doesn't support Mac users, period. Stunningly stupid. Says a lot about the quality of Taleo's products and workers.

  • Problem with non currency field calculations to become curr

    Hi guys,
    Is there a problem if I have a QUAN field and a DEC field combining to become a CURR field? I mean, I have the computation below:
    v_var1 = v_var2 * v_var3.
    where v_var1 type QUAN, v_var2 type DEC and v_var3 type CURR...
    Would it cause any problems with the calculations?
    Thanks!

    Hi,
    Did you try?
    It worked for me flawlessly:
    tables bseg.
    parameters: qty like bseg-menge,
                amt like bseg-dmbtr.
    data: result like bseg-dmbtr.
    result = qty * amt.
    write result.
    The only issue is that the result will be rounded to 2 decimal places.
    But if you declare result as
    data : result(13) type p decimals 3.
    Then there will be no issues.
    regards,
    Advait

  • Problem with Buttons/Form Fields in Acrobat PDF when created in InDesign CS4

    So I have been working for quite some time on an interactive form for an internal client at my job. It has two major components: one is the sidebar navigation, which I created using buttons in InDesign that link to bookmarks in the document. The second aspect is that I also created "check boxes" according to this video: http://www.adobe.com/designcenter/cs4/articles/lrvid4434_ds.html
    I have two major issues:
    1. Whenever I try to convert my boxes to form fields in Acrobat, it will not recognize them unless I save out 4 pages at a time (rather than the whole document).
    2. I then have to combine the document and get multiple sets of bookmarks, and some bookmarks are corrupted (not working properly). Even when I try to convert the whole document, some of the buttons are corrupted.
    UGH, I'm tired of this project. I am really bastardizing the whole thing to get it done, but I want to figure out the problem, as this is an ongoing document that we will be using over and over again in future years.
    I am attaching the PDF file: this is a pieced-together one (saved out 4 pages at a time, with some fixes so the buttons go to the proper locations), plus the original InDesign file (partial file due to size).
    PLEASE OH PLEASE HELP ME!

    Acrobat's Form Wizard is good but not perfect when creating form data from a PDF. I have never encountered Acrobat being unable to finish the process of creating all form fields, but I have encountered very sluggish performance afterwards when there were a lot of form fields, buttons, and other elements within the pages. And I would say, from your attached PDF, it is pretty heavy with content. I am going to suggest an idea, but I have not tested it to see if it would work.
    In InDesign, create a layer for your navigation tabs and a layer for your text with the check boxes. Create two PDFs from ID: one with just your nav tabs, the second with just your text with check boxes. In Acrobat, run the Form Wizard on the second PDF and see if it finishes; the idea here is to reduce the clutter the Form Wizard has to work through. If it does, you can merge the first PDF as a layer with the second to have a completed document. Again, I have not tested this theory.

  • Problems with non-English characters in iTunes 7.7

    Since "upgrading" to iTunes 7.7 every file containing characters not normally found in English (e.g.: å, ß, ç, é, î, ñ, ø, ü, etc.) gets a mane change when iTunes plays the track. For example, a song called "Baião" suddenly becomes "Bai.o" and if the artist or album name contains accented characters, these get scrambled as well. The files themselves also get jettisoned from their folders in the iTunes Library, which causes big problems.


  • Problem with conversion of VC output into PDF format

    Hi all,
    In R/3 I created an RFC to get a company detail list.
    I am calling this RFC in an iView.
    The outputs of the iView are a char info field called 'REPTNAME' and a table which displays the list.
    I have created a button called 'PDF' on the table.
    For this button I have written code like:
    "pcd!portal_content!com.sap.gm.cnt!platform-add_ons!com.sap.ip.bi!iview u1!?QUERY="&STORE@REPTNAME&"&BI_COMMAND_1-EXPORT_FORMAT=PDF&BI_COMMAND_1-SHOW_EXPORT_DIALOG=X&BI_COMMAND_1-null="
    But when I click this button, it takes me to another window of cluster administration.
    I don't understand why this is happening.
    I also don't have any idea about the BI statements; I copied this code (for the button) from existing VC documents provided on the site.

    Hi,
    Yes, I have only gone through this link.
    I tried every way.
    I have created a button using the edit toolbar for the table control.
    Using a system action, I have put the formula below in the hyperlink's 'formula' window.
    I have come across a point I would like to discuss,
    and that is:
    if I enter
    "pcd!portal_content!com.sap.gm.cnt!platform-add_ons!com.sap.ip.bi!iview u1!?QUERY="&STORE@REPTNAME&"&BI_COMMAND_1-EXPORT_FORMAT=PDF&BI_COMMAND_1-SHOW_EXPORT_DIALOG=X&BI_COMMAND_1-null="
    as the formula for the button,
    after clicking the button it takes me to the cluster administration window,
    and if I enter
    "pcd!portal_content!com.sap.gm.cnt!platform-add_ons!com.sap.ip.bi!iviews!com.sap.ip.bi.bexwebanalyzer?QUERY="&STORE@REPTNAME&"&BI_COMMAND_1-EXPORT_FORMAT=PDF&BI_COMMAND_1-SHOW_EXPORT_DIALOG=X&BI_COMMAND_1-null="
    it takes me to an exception error.
    If you compare these two, there is a difference of
    'com.sap.ip.bi.bexwebanalyzer'.
    So can you please tell me what this should be?

  • Problem with Non-English Chars

    OS : Mac OS
    Java : 1.5.0_07
    Hi,
    I have a Swing application that reads data from a database and shows it in a Swing GUI. The text returned by the database is in Arabic and put into a TextField object.
    But once displayed, the Arabic characters are screwed up, or actually they are not Arabic characters at all!
    For debugging I also write the result of the query to the console and to a log4j log file.
    There it is printed in the right form.
    Here is the code:
    System.out.println("D3" + java.nio.charset.Charset.defaultCharset().name());
    System.out.println("singular " + dit.getData().getSingular());
    log4j.debug("singular " + dit.getData().getSingular());
    Font font = Font.decode("Geeza Pro");
    textl.setFont(font);
    textl.setText(dit.getData().getSingular());
    The output in the console (and log4j) is:
    D3MacRoman
    singular صوف
    The output in the Swing TextField is:
    ������
    If I configure log4j to use UTF-8, then even in the log4j log file the same
    screwed-up characters are written.
    It looks like I have to tell Swing to use MacRoman, which is the default of the OS
    and the one used by the console and log4j, but I don't know how to.
    Any clue?
    Thanks,
    Chris.
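
    A side note, a small diagnostic sketch that is not part of the original post: Font.decode falls back to a default Dialog font when the requested name cannot be matched, so it is worth checking whether the resolved font can display the Arabic text at all before blaming the charset.
    import java.awt.Font;

    public class FontCheck {
        public static void main(String[] args) {
            // The Arabic word from the console output above ("صوف").
            String sample = "\u0635\u0648\u0641";
            Font font = Font.decode("Geeza Pro");
            // canDisplayUpTo returns -1 when every character can be rendered.
            int firstBad = font.canDisplayUpTo(sample);
            System.out.println(font.getFontName() + " -> first undisplayable index: " + firstBad);
        }
    }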

    Convert your strings to Unicode.
    Example 1:
    // ApplicationFrame.java
    import java.awt.*;
    import java.awt.event.*;

    public class ApplicationFrame extends Frame {
      public ApplicationFrame() { this("ApplicationFrame v1.0"); }
      public ApplicationFrame(String title) {
        super(title);
        createUI();
      }
      protected void createUI() {
        setSize(500, 400);
        center();
        addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) {
            dispose();
            System.exit(0);
          }
        });
      }
      // Center the frame on the screen.
      public void center() {
        Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
        Dimension frameSize = getSize();
        int x = (screenSize.width - frameSize.width) / 2;
        int y = (screenSize.height - frameSize.height) / 2;
        setLocation(x, y);
      }
    }

    // BidirectionalText.java
    import java.awt.*;

    public class BidirectionalText {
      public static void main(String[] args) {
        Frame f = new ApplicationFrame("BidirectionalText v1.0") {
          // Draw mixed-direction (English + Arabic) text with antialiasing.
          public void paint(Graphics g) {
            Graphics2D g2 = (Graphics2D) g;
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                RenderingHints.VALUE_ANTIALIAS_ON);
            Font font = new Font("Lucida Sans Regular", Font.PLAIN, 32);
            g2.setFont(font);
            g2.drawString("Please \u062e\u0644\u0639 slowly.", 40, 80);
          }
        };
        f.setVisible(true);
      }
    }
    Example 2:
    Java Internationalization
    By Andy Deitsch, David Czarnecki
    ISBN: 0-596-00019-7
    O'Reilly
    // ArabicDigits.java
    import java.awt.event.*;
    import java.awt.*;
    import java.text.*;
    import javax.swing.*;

    public class ArabicDigits extends JPanel {
      static JFrame frame;
      public ArabicDigits() {
        NumberFormat nf = NumberFormat.getInstance();
        if (nf instanceof DecimalFormat) {
          DecimalFormat df = (DecimalFormat) nf;
          DecimalFormatSymbols dfs = df.getDecimalFormatSymbols();
          // set the beginning of the range to Arabic digits
          dfs.setZeroDigit('\u0660');
          df.setDecimalFormatSymbols(dfs);
        }
        // create a label with the formatted number
        JLabel label = new JLabel(nf.format(1234567.89));
        // set the font with a large enough size so we can easily read the numbers
        label.setFont(new Font("Lucida Sans", Font.PLAIN, 22));
        add(label);
      }
      public static void main(String[] argv) {
        ArabicDigits panel = new ArabicDigits();
        frame = new JFrame("Arabic Digits");
        frame.addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) { System.exit(0); }
        });
        frame.getContentPane().add("Center", panel);
        frame.pack();
        frame.setVisible(true);
      }
    }
    To avoid having to type all the \u... notation manually, use the native2ascii tool (included with the SDK).
    http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/

  • Problems with non-English characters

    Hi
    My problem is that '������' becomes rubbish in my C++ DLL. I hope someone can help me.
    ---Java code-----
    class Test {
         private native void print(String str);
         public static void main(String[] args) {
              new Test().print("������");
         }
         static {
              System.loadLibrary("Test");
         }
    }
    ----c++ code----
    JNIEXPORT void JNICALL
    Java_Test_print(JNIEnv *env, jobject obj, jstring jstr)
    {
         const char *str = env->GetStringUTFChars(jstr, 0);
         if (str == NULL) {
              return; /* OutOfMemoryError already thrown */
         }
         wstring str2(200, ' ');
         MultiByteToWideChar(CP_ACP, 0, str, strlen(str) + 1, (LPWSTR)str2.data(), strlen(str));
         printf(">>> %s\n\n", str);
         cout << " str: " << str << ", str2.data: " << str2.data() << endl;
         env->ReleaseStringUTFChars(jstr, str);
         return;
    }
    ----Output-----
    ├�├�├�├�├�├╢str: ├�├�├�├�├�├╢, str2.data: 02CB6118
    Thanks
    Fredrik

    Hi,
    You will probably see the same rubbish if you try to print them to the console using Java. DOS prompts usually have problems printing the Swedish characters. I don't know if this will affect your C++ DLL or not, but you can specify the encoding for the JVM (-Dfile.encoding). For valid encodings see:
    http://www.mindprod.com/jgloss/encoding.html
    /Kaj
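
    A small Java-side check, not from the thread (the sample string "åäö" is only an assumption about the original characters): GetStringUTFChars hands the native code (modified) UTF-8 bytes regardless of file.encoding, so if the C++ side then interprets those bytes via the ANSI code page (CP_ACP), non-ASCII characters come out as rubbish like the output above.
    public class EncodingCheck {
        public static void main(String[] args) throws java.io.UnsupportedEncodingException {
            String s = "\u00e5\u00e4\u00f6"; // assumed sample: "åäö"
            System.out.println("file.encoding = " + System.getProperty("file.encoding"));
            // These are the bytes the native side receives from GetStringUTFChars
            // (modified UTF-8, identical to standard UTF-8 for these characters).
            for (byte b : s.getBytes("UTF-8")) {
                System.out.printf("%02x ", b);
            }
            System.out.println();
        }
    }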

  • Problems reading non-English fields from database on Unix platform

    Hello!
    I am trying to get some data (not in English) from the database and write it to a file. I use ResultSet for that purpose. On a PC the program runs all right, but when I run it on Unix, I get garbage instead of letters in my output file. I tried different combinations: ResultSet methods like getString, getAsciiStream, getBinaryStream, getCharacterStream, and encodings like
    String str = new String(rs.getBytes(), "ISO-8859-1"/"UTF-8").
    Nothing helps!
    There is either garbage or "?" instead of letters.

    Hi,
    I think it comes from your Unix system, which doesn't support accented characters (and your code can't work around it).
    You can search in this way...
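
    As a rough sketch (not from the thread; the query, table and file names are made up), one way to take the platform default out of the picture is to read the value with getString and write the file through a Writer with an explicit charset, so the output no longer depends on the default encoding of the Unix box:
    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExportNonEnglish {
        // conn is an already-opened database connection.
        static void export(Connection conn) throws Exception {
            Statement st = conn.createStatement();
            // Hypothetical query; replace with the real one.
            ResultSet rs = st.executeQuery("SELECT name FROM some_table");
            // Fix the output encoding explicitly instead of relying on the default
            // charset of the machine the program happens to run on.
            Writer out = new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8");
            while (rs.next()) {
                out.write(rs.getString(1));
                out.write('\n');
            }
            out.close();
            rs.close();
            st.close();
        }
    }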

  • Handling non-English language characters in PDF output

    Hi All,
    We have a requirement wherein we have to display an existing Smartform output in a PDF format.
    We have used OTF to PDF conversion and displayed the PDF output in a container.
    The issue is that if certain characters are in a non-English language, the PDF displays these characters as special symbols.
    The following string is displayed in the SmartForm as follows:
    ОРЕНБУРГАВТОРЕМСЕРВИС_ 
    The same string is displayed as follows in the PDF form:
    Any pointers on how to handle such cases would be highly appreciated.
    Thanks in advance.
    regards
    Chaitanya

    Before calling the SmartForm, use the FM 'SSF_GET_DEVICE_TYPE' to get the device type based on the language.
    For example:
      CALL FUNCTION 'SSF_GET_DEVICE_TYPE'
        EXPORTING
          i_language = l_langu
        IMPORTING
          e_devtype  = lwa_output_options-tdprinter.
    Then you need to build the other control parameters like this:
    " Build control parameters.
      lwa_control_parameters-getotf    = c_charx.
      lwa_control_parameters-device    = 'PRINTER'.
      lwa_control_parameters-preview   = ''.
      lwa_control_parameters-no_dialog = c_charx.
      lwa_output_options-tddest        = 'LOCL'.
    Pass this lwa_output_options & lwa_control_parameters to output_options & control_parameters respectively in the Smartform FM.
    This should ideally solve this issue.
    Regards,
    Amirth

  • Naming files with non-English characters.

    I'm using FileMaker to create PDFs through Acrobat 10.1.12. I need to use Polish, Hungarian, Czech and Slovakian characters in the file name, but the characters are not recognised and so the file name is not created. This is on Windows; the problem does not occur on a Mac.

    Hi
    Have a look at "csv upload -- suggestion needed with non-English character in csv file"; it might help you.
    Thanks,
    Manish

  • Upload text files with non-English characters

    I use an APEX page to upload text files. Then I retrieve the contents of the files from wwv_flow_files.blob_content and convert them to varchar2 with utl_raw.cast_to_varchar2, but characters like ò, à, ù become garbage.
    What could be the problem? Are the characters lost when the files are stored in wwv_flow_files, or when I do the conversion?
    Some other info:
    * I see wwv_flow_files.DAD_CHARSET is set to "ascii", wwv_flow_files.FILE_CHARSET is null.
    * Trying utl_raw.cast_to_varchar2( utl_raw.cast_to_raw('àòèù') ) returns 'àòèù' correctly;
    * NLS_CHARACTERSET parameter is AL32UTF8 (not just English ASCII)

    Hi
    Have a look at "csv upload -- suggestion needed with non-English character in csv file"; it might help you.
    Thanks,
    Manish
