Reading non-English PDFs on iPad 2

I just bought my iPad 2 last week, and the problem started when I began loading my PDFs into iBooks.
First of all, these PDFs are non-English (Hebrew) PDFs.
       All of them display perfectly on a PC.
       All of them appear as blank pages in iBooks.
       I sent them to myself through Gmail and then opened them in iBooks.
I have read many suggestions, but they may not work in my case:
1- Open and re-save the PDF: I do not have a Mac to do that.
2- Use another PDF reader like GoodReader: I have not tried this, but why would it work? I can't see any difference from iBooks.
One method that worked for a few of them was to copy and paste the text into MS Word and then create a new PDF from MS Word, but that failed with many of them and did not work at all when there were images.
Any tips or suggestions are really appreciated. Will a Galaxy be the solution?

ErinyBasem wrote:
2- Use another PDF reader like GoodReader: I have not tried this, but why would it work? I can't see any difference from iBooks.
As for why GoodReader or something other than iBooks (which is probably the weakest of all PDF readers in general) might work when iBooks does not: there are various possible reasons, none of which you could ever "see" without trying it.
Is there another way you could try transferring them, either via iTunes, a different email provider, or downloading them directly into an app like GoodReader?

Similar Messages

  • Problems reading non-English fields from a database on a Unix platform

    Hello!
    I am trying to get some data (not in English) from the database and write it to a file. I use ResultSet for that purpose. On a PC the program runs fine, but when I run it on Unix I get garbage instead of letters in my output file. I tried different combinations: ResultSet methods such as getString, getAsciiStream, getBinaryStream and getCharacterStream, and encodings like
    String str = new String(rs.getBytes(), "ISO-8859-1"/"UTF-8").
    Nothing helps!
    There is either garbage or "?" instead of letters.

    Hi,
    I think it comes from your Unix system, which doesn't support accented characters (and your code can't work around it).
    You can search in that direction...
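    As a sketch of the usual fix, assuming the data is stored correctly and the garbage only appears because the file is written with the platform's default encoding (the JDBC URL, query, and file name below are placeholders):
    import java.io.BufferedWriter;
    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.nio.charset.StandardCharsets;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExportNonEnglish {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:your-db-url", "user", "password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT some_text_column FROM some_table");
                 // The key point: name the output encoding explicitly instead of relying on the
                 // platform default, which on the Unix box may be a 7-bit or Latin-1 locale.
                 BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                         new FileOutputStream("output.txt"), StandardCharsets.UTF_8))) {
                while (rs.next()) {
                    out.write(rs.getString(1)); // getString already returns decoded Unicode
                    out.newLine();
                }
            }
        }
    }
    Whether UTF-8 or ISO-8859-1 is the right target depends on what later reads the file; the point is that both sides must agree on one encoding.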

  • Can't seem to save non-English as text from PDF using Reader

    I have several PDF documents that were originally generated by OpenOffice from a UTF8-encoded text file. The text is in different languages, e.g. Korean, Arabic, Russian, English. When I open these documents and then "save as text", the resulting text files contain garbage or nothing at all in all cases except for English. Is it possible to extract non-English text from a PDF document using Reader? If not, is there a different product that could be used for this purpose? Thanks much!

    They're using fonts that you don't have on your system, so no, it isn't possible with Reader.

  • Printing an ALV list with ADS (PDF printer) in a non-English charset

    Hello!
    I have an issue with printing an ALV list to a PDF printer in a non-English charset. We have two printers: one for ALV lists (SWINCF: Casc. Fonts SAPWIN Unicode) and one for PDF forms (Adobe Document Services). I want to use one printer for all documents, but the PDF printer prints the non-English charset as ########.
    What can I do?

    Hi, Roman!
    I want to use the PDF printer for both types of output. I have a dedicated Java instance for ADS.
    There is a device type for our Kyocera printer. My PDF printer prints ALV lists fine except for the Russian charset.
    It prints them as #####.

  • Preview problems: PDFs that the computer had read as English are now read as gibberish

    I download a PDF, and I open it with Preview on my Mac or GoodReader on my iPad (all synced with SugarSync): initially it's text; I can search for text in the PDF, I can highlight text and copy it, and all is well.
    Then, I make annotations, maybe once, maybe a few times. Suddenly some PDFs, while still looking the same to me, are no longer "read" as English text by the iMac/iPad: when I search for a term that's obviously on the page in front of me, it's not found. If I highlight text from the PDF and copy it, the copying doesn't work - sometimes it's gibberish, sometimes it's just blank. Note that it's not read as a picture - I can still highlight words and phrases - but the computer/iPad doesn't recognize those phrases. "Eric" for some reason has become I7CK;BH$:;B7DO. To me, it's still totally readable - it's just that the computer doesn't copy/read it as English.
    Has anyone had similar issues?
    I've tried using PDFPen to "re-OCR" the document, but it won't let me do that, I suppose because the computer still reads the file as text - just not English text. I haven't yet figured out where in my workflow this is happening - the different steps are I suppose annotating in Preview (on the iMac), annotating in GoodReader (on the iPad), and syncing with SugarSync. At the moment I think it's editing with Preview first that somehow renders the file non-English, but I'm not sure of that at all - nor do I know if this is happening after one time annotating, or more.
    I almost always make duplicates of PDFs (principally academic articles) before annotating, so I can usually go back to the original clean PDFs, but it's more than a little annoying that all the previously-English annotations are suddenly unreadable.
    This thread from the archive ...
    Copy and paste from PDF in Preview results in gibberish
    ... seems to be related, but doesn't seem to have any real solutions, unfortunately.
    Two things I'm mainly concerned with: (1) stopping it from happening again; (2) taking the now-gibberish annotated PDFs "backwards" so they become English again.
    Thanks in advance for any help you can offer!

    First, please find a Windows-based PC with the iPod Updater pre-installed, connect your iPod to it and click Restore; note that it really does not matter whether it can complete the process or not. Wait for 5 minutes and reconnect it to your Mac (you may need to put it in disk mode). Then do the following.
    For a Mac computer:
    1. Open Disk Utility and hope your iPod appears there (on the left-hand side); highlight it.
    2. Go to the “Partition” tab and click either “Delete” or “Partition”; if that fails, skip this step and go to 3.
    3. Go to the “Erase” tab, choose the volume format “Mac OS Extended (Journaled)”, and click Erase; again, if that fails, skip it and go to 4.
    4. Same as step 3, but open “Security Options....” and choose “Zero Out Data” before clicking Erase. It will take 1 to 2 hours to complete.
    5. Eject your iPod and do a Reset.
    6. Open the iPod Updater and click “Restore”.

  • How to load a file through Reader when the file name contains non-English characters

    Hi,
    I want to know how to load a file on an English machine through Reader when the file name contains non-English characters (e.g. 置顶.pdf), as LoadFile gives an error when passing the Unicode-converted file name.
    Regards,
    Arvind

    You don't mention which version of Reader. And you are using AcroPDF.dll, yes?
    Sent from my iPad

  • Problem converting a spool to a PDF file containing non-English characters

    Hi All,
    I have a problem converting a spool to PDF format.
    Scenario: I have a spool which contains non-English characters. I am using the CONVERT_ABAPSPOOLJOB_2_PDF FM to perform the conversion, but my output contains junk values (i.e. #) for the non-English characters. Any pointers to solve this issue will be appreciated.
    I even tried report RSTXPDFT4, and it also gives me the same junk characters.
    Regards,
    Navin.


  • Can't print non-English data properly in PDF output

    I can't print non-English data properly.
    Example:
    I want to print German data (Währung), but it prints like this (WÃ¤hrung) in the PDF output.
    What changes do I have to make to achieve this?
    Thanks & regards,
    Yamini

    Hi Yamini,
    I think this happens when you look at UTF-8 in Windows with another character set. I've noticed that the XML file (and hence the PDF output) displays those characters when I preview reports on the BI desktop, but the output from the server is fine, so I don't worry about it.
    Tore.
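    A small demonstration of that effect (purely illustrative: it decodes UTF-8 bytes with a Windows single-byte charset, which is one plausible pair of charsets for the garbling shown above):
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class MojibakeDemo {
        public static void main(String[] args) {
            String original = "Währung";
            // Encode as UTF-8: 'ä' becomes the two bytes 0xC3 0xA4 ...
            byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
            // ... then decode those bytes as windows-1252, as a viewer using the wrong character
            // set would: each UTF-8 continuation byte turns into a character of its own.
            String garbled = new String(utf8Bytes, Charset.forName("windows-1252"));
            System.out.println(garbled); // prints WÃ¤hrung
        }
    }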

  • Compatibility of the PDF library on a non-English OS?

    We have a function in our Acrobat plug-in that copies some data to the clipboard using native clipboard methods like 'SetClipboardData'. This copying works fine on an English OS but fails on a Swedish OS.
    The plug-in refers to the Datalogics 8.1 library.
    We are also facing an issue when signing an already-signed PDF opened on a Swedish OS.
    Any ideas what might be the cause of the failure? Any comments on the compatibility of the PDF library with a non-English OS would give us some pointers.

    Post your questions in the forum for Acrobat SDK.

  • Problem with Non-English Fields Output to PDF by JASPER in JDev10.1.3

    I am using jrxml files (designed in iReport) to generate PDF reports from an Oracle database.
    The non-English fields are shown correctly when I output the report to HTML or when I view it with JasperView.
    If I try making PDF files (JasperExportManager.exportReportToPdfFile), the static fields containing e.g. Arabic/Chinese characters are not displayed, and dynamic fields from the database with non-English contents are shown as ??? or null.
    I received some suggestions about using PARAMETERS to feed the report instead of FIELDS, which I don't think can be helpful in this case or in general.
    I think this should be a common problem. These are the components I am using:
    itext-1.4.7.jar
    commons-digester-1.7.zip
    jasperreports-1.2.8.jar
    Any comment or help is appreciated.
    Thanks
    Farbod

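    One direction commonly suggested for this symptom, sketched here only under the assumption that the cause is the PDF font rather than the data: give the report's text elements a Unicode TrueType font through the JRXML font attributes (pdfFontName pointing at a font file that contains the Arabic/Chinese glyphs, pdfEncoding="Identity-H", isPdfEmbedded="true") and export as usual. File names and connection details below are placeholders:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.HashMap;
    import java.util.Map;

    import net.sf.jasperreports.engine.JasperCompileManager;
    import net.sf.jasperreports.engine.JasperExportManager;
    import net.sf.jasperreports.engine.JasperFillManager;
    import net.sf.jasperreports.engine.JasperPrint;
    import net.sf.jasperreports.engine.JasperReport;

    public class PdfExportSketch {
        public static void main(String[] args) throws Exception {
            // In the report template, the text elements would carry something like:
            //   <font fontName="Arial" pdfFontName="ARIALUNI.TTF"
            //         pdfEncoding="Identity-H" isPdfEmbedded="true"/>
            // so that iText embeds a font that actually contains the non-Latin glyphs.
            JasperReport report = JasperCompileManager.compileReport("report.jrxml");

            Connection connection =
                    DriverManager.getConnection("jdbc:oracle:thin:@//host:1521/service", "user", "password");
            Map parameters = new HashMap();
            JasperPrint print = JasperFillManager.fillReport(report, parameters, connection);

            JasperExportManager.exportReportToPdfFile(print, "report.pdf");
            connection.close();
        }
    }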

  • Reading a non-English character

    Hi, I have trouble reading a non-English character from an HTML page.
    I'm taking the word from the HTML page and comparing it with itself,
    like this:
    string.equals("BİTTİ")
    but it returns false.
    Is it possible to correct this?

    Specify an encoding for your InputStreamReader, for example:
    BufferedReader in = new BufferedReader(
            new InputStreamReader(new FileInputStream("infilename"), "8859_1"));
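    For a page fetched over HTTP the same idea applies; a minimal sketch, assuming the page is actually served as UTF-8 (the URL is a placeholder, and the charset must match whatever the page really uses):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class ReadPage {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/page.html");
            // Decode the byte stream with an explicit charset instead of the platform default;
            // otherwise characters such as 'İ' arrive corrupted and equals() returns false.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), StandardCharsets.UTF_8));
            StringBuilder page = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                page.append(line).append('\n');
            }
            in.close();
            System.out.println(page.indexOf("BİTTİ") >= 0); // true if the page contains the word
        }
    }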

  • Handling non-English language characters in PDF output

    Hi All,
    We have a requirement to display an existing Smartform output in PDF format.
    We have used OTF-to-PDF conversion and displayed the PDF output in a container.
    The issue is that if certain characters are in a non-English language, the PDF displays those characters as special symbols.
    The following string is displayed in the Smartform as follows:
    ОРЕНБУРГАВТОРЕМСЕРВИС_
    The same string is displayed as follows in the PDF form:
    Any pointers on how to handle such cases would be highly appreciated.
    Thanks in advance.
    Regards
    Chaitanya
    9703019495

    Before calling the Smartform, use the FM 'SSF_GET_DEVICE_TYPE' to get the device type based on the language.
    For example:
      CALL FUNCTION 'SSF_GET_DEVICE_TYPE'
        EXPORTING
          i_language = l_langu
        IMPORTING
          e_devtype  = lwa_output_options-tdprinter.
    Then you need to build the other control parameters like this:
    * Build control parameters.
      lwa_control_parameters-getotf    = c_charx.
      lwa_control_parameters-device    = 'PRINTER'.
      lwa_control_parameters-preview   = ''.
      lwa_control_parameters-no_dialog = c_charx.
      lwa_output_options-tddest        = 'LOCL'.
    Pass lwa_output_options and lwa_control_parameters to output_options and control_parameters respectively in the Smartform FM.
    This should ideally solve the issue.
    Regards,
    Amirth

  • Reading a .txt file with non-English characters

    I added .txt files to my app for translations of text messages.
    The problem is that when I read the translations, non-English characters are read wrong on my Nokia. In the Sun Wireless Toolkit it works.
    The trouble is that I don't even know what the phone expects:
    UTF-8, ISO Latin 2 or Windows CP1250?
    I'm using CLDC 1.0 and MIDP 1.0.
    What's the right way to do it?
    Here's what I have:
    String locale = System.getProperty("microedition.locale");
    String language = locale.substring(0, 2);
    String localefile = "lang/" + language + ".txt";
    InputStream r = getClass().getResourceAsStream("/lang/" + language + ".txt");
    byte[] filetext = new byte[2000];
    int len = 0;
    try {
        len = r.read(filetext);
    } catch (IOException e) {
        // handle the error
    }
    Then I get the translation with:
    value = new String(filetext, start, i - start).trim();

    Not sure what the issue is with the runtime. How are you outputting the file and accessing the lists? Here is a more complete sample:
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class Foo {
        final private List colons = new ArrayList();
        final private List nonColons = new ArrayList();

        static final public void main(final String[] args) throws Throwable {
            Foo foo = new Foo();
            foo.input();
            foo.output();
        }

        private void input() throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader("/temp/foo.txt"));
            String line = reader.readLine();
            while (line != null) {
                List target = line.indexOf(":") >= 0 ? colons : nonColons;
                target.add(line);
                line = reader.readLine();
            }
            reader.close();
        }

        private void output() {
            System.out.println("Colons:");
            Iterator itorColons = colons.iterator();
            while (itorColons.hasNext()) {
                String current = (String) itorColons.next();
                System.out.println(current);
            }
            System.out.println("Non-Colons");
            Iterator itorNonColons = nonColons.iterator();
            while (itorNonColons.hasNext()) {
                String current = (String) itorNonColons.next();
                System.out.println(current);
            }
        }
    }
    The output generated is:
    Colons:
    a:b
    b:c
    Non-Colons
    a
    b
    c
    My guess is that you are iterating through your lists incorrectly. But glad I could help.
    - Saish
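    Back to the original encoding question: a minimal sketch of reading a bundled translation file through a Reader with an explicit charset, assuming the .txt files are saved as UTF-8 (on CLDC/MIDP the same InputStreamReader(stream, "UTF-8") pattern applies; the two-argument System.getProperty below is only there so the sketch also runs on a desktop JVM):
    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    public class Translations {
        public static void main(String[] args) throws Exception {
            String locale = System.getProperty("microedition.locale", "en-US");
            String language = locale.substring(0, 2);
            InputStream in = Translations.class.getResourceAsStream("/lang/" + language + ".txt");
            if (in == null) {
                System.out.println("No translation file for: " + language);
                return;
            }
            // Decode with the same charset the files were saved in; relying on the handset's
            // default encoding is what turns non-English characters into junk.
            BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
            StringBuilder text = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                text.append(line).append('\n');
            }
            reader.close();
            System.out.println(text);
        }
    }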

  • Download IR to PDF Non-English Characters

    Hi,
    I'm trying to export an interactive report to PDF. The report columns are in Greek and are therefore displayed as hash characters (#) in the downloaded PDF. When downloading to CSV I was able to export the report correctly by changing the Application Primary Language globalization attribute to Greek, but this doesn't seem to affect the PDF as well. Any ideas would be appreciated!
    Thanks in advance

    Hi,
    I am fine with English characters A-Z, a-z, 0-9 or special characters, but the data contains some Chinese, Japanese or other non-English characters which I don't want.
    The logic you explained above would require me to list all the valid characters, and it would also be a performance constraint. Hence I wanted something like an FM or a standard procedure. Can we use ASCII somehow?
    Regards,
    Nirmal
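    Nirmal's sub-question ("can we use ASCII somehow?") expressed in plain Java terms, purely as an illustration of the idea rather than of any specific FM: keep a character only if its code is 127 or below.
    public class AsciiFilter {
        // Keep only plain ASCII characters (codes 0-127); everything else is dropped.
        static String keepAscii(String input) {
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < input.length(); i++) {
                char c = input.charAt(i);
                if (c <= 127) {
                    out.append(c);
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(keepAscii("abc-123-置顶")); // prints abc-123-
        }
    }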

  • Question marks in PDF for non-English characters

    I get a report from APEX 3.0.1 (default report layout) with BI Publisher 10.1.3.3.1 Base.
    In Adobe Reader 7.0.8 I see question marks instead of non-English (Cyrillic) characters.
    How do I tune BI Publisher?

    After installing BI Publisher 10.1.3.3.1 Base (standalone, OC4J):
    Directory of F:\bip\jdk\lib\fonts
    13/10/2007 21:16 15 196 128R00.TTF
    13/10/2007 21:16 18 473 348 ALBANWTJ.ttf
    13/10/2007 21:16 18 777 132 ALBANWTK.ttf
    13/10/2007 21:16 18 676 084 ALBANWTS.ttf
    13/10/2007 21:16 18 788 600 ALBANWTT.ttf
    13/10/2007 21:16 276 384 ALBANYWT.ttf
    13/10/2007 21:16 12 860 B39R00.TTF
    13/10/2007 21:16 18 800 MICR____.TTF
    13/10/2007 21:16 6 580 UPCR00.TTF
    Directory of F:\bip\jdk\jre\lib\fonts
    01/08/2006 19:25 75 144 LucidaBrightDemiBold.ttf
    01/08/2006 19:25 75 124 LucidaBrightDemiItalic.ttf
    01/08/2006 19:25 80 856 LucidaBrightItalic.ttf
    01/08/2006 19:25 344 908 LucidaBrightRegular.ttf
    01/08/2006 19:25 317 896 LucidaSansDemiBold.ttf
    01/08/2006 19:25 698 236 LucidaSansRegular.ttf
    01/08/2006 19:25 234 068 LucidaTypewriterBold.ttf
    01/08/2006 19:25 242 700 LucidaTypewriterRegular.ttf
    Directory of F:\bip\jre\1.4.2\lib\fonts
    24/03/2004 19:12 75 144 LucidaBrightDemiBold.ttf
    24/03/2004 19:12 75 124 LucidaBrightDemiItalic.ttf
    24/03/2004 19:12 80 856 LucidaBrightItalic.ttf
    24/03/2004 19:12 344 908 LucidaBrightRegular.ttf
    24/03/2004 19:12 317 896 LucidaSansDemiBold.ttf
    24/03/2004 19:12 698 236 LucidaSansRegular.ttf
    24/03/2004 19:12 234 068 LucidaTypewriterBold.ttf
    24/03/2004 19:12 242 700 LucidaTypewriterRegular.ttf
    What is wrong?
    In Adobe Reader's Document Properties -> Fonts
    +Helvetica:
    Type: Type1
    Encoding: Ansi
    Actual Font: ArialMT
    Actual Font Type: TrueType
    I feel BIP uses the wrong encoding...
