Renderable Text?

I have Adobe Acrobat Pro 10. When I make fill-in PDFs I have an issue where I can't see what people write in the fields and email back to me until I "render text" or something like that. That function has now disappeared from my software. Long ago I had someone at Adobe help me with this issue, and they apparently changed my software so now I can't update it. I could have sworn there was a line that said "render text" under File or Edit. Now it's gone. Could anyone tell me how to see what they have written in the fields? Not all my clients have Acrobat, so they usually just save the forms, fill them in with another program and send them back. Now this is a huge issue. I'm not even really sure what "renderable text" is, but apparently the form my client has filled out and sent back has this.
Thanks for any help

Hi Stephanie Torba,
As I understand it, you are facing an issue with PDF forms: you are unable to see the responses your clients have sent back to you. Is that right?
Do you see empty form fields or some text that can't be interpreted very easily?
Regards,
Rahul
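
If the responses are actually saved in the returned file but the field appearances were never regenerated (a common symptom when a form is filled in Preview or another non-Adobe viewer), you can still read the values out directly. Below is a minimal sketch for the Acrobat JavaScript console (Ctrl+J on Windows); it is only an illustration, not the missing "render" command, and the field names it prints are whatever your own form defines:

    // Illustrative check: list every form field in the open PDF and the value stored in the file
    for (var i = 0; i < this.numFields; i++) {
        var name = this.getNthFieldName(i);     // field name as defined in the form
        var value = this.getField(name).value;  // value saved by the client's viewer
        console.println(name + " = " + value);
    }

If values appear in the console but the fields still look empty on the page, the data is present and only the on-page appearances are missing, which points at the software the client used to fill the form rather than at the form data itself.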

Similar Messages

  • Rendered text problems in 5.1.2

    I am working on a film project that has a lot of subtitles. (The dailies are digitized at HD DVC-Pro 720p.) I was editing on a station running Final Cut Pro 5.1.1 (G5) with no real problems. I recently took the project with me on a trip to work on using my MacBook Pro (Intel) running 5.1.2. I have discovered that all of my rendered text in my timelines is now uneditable. I can't move the text, I can't change the text, I can't even delete the text. I also can't change any of the footage running underneath the rendered text. It has become an immutable part of my film. This is obviously a huge problem. Has anyone else run across this problem? Is it something that Apple is aware of? Any fixes? If not I may have to move to Avid and start all over again.

    Not that strange, really. Several different NLEs can sometimes lock onto a render file and not update the output until that render file is deleted.
    Rather simple fix, just a little time consuming.
    Next time don't be so quick to threaten to jump ship. First problem and you were ready to move to Avid. I've had more problems working on Avids than I have on FCP. MANY more problems. FCP problems are usually due to user error or third party hardware. Avid problems shut you down for a couple weeks while you ship out your hardware.

  • How to recognize text in XI Pro when a file returns a "renderable text" error.

    This same file will recognize text & provide searchable document in Adobe 9.
    I saw workaround to convert each page to TIFF, do OCR on each page, convert each back to pdf & combine. That is ridiculous. Will there be a new version of XI Pro that will work correctly for OCR?

    Varinder.Saini wrote:
    David,
    That is how it is. If you run Searchable Image or Searchable Image (Exact), Acrobat will throw an error only for pages that contain renderable text. It also gives an option to ignore this error for any further pages containing renderable text.
    If you check this option, it will run OCR for the rest of the pages and won't show the error again for that PDF.
    This option is not available when OCR'ing using Adobe ClearScan -- ClearScan being the ONLY reason anyway why I own Acrobat Pro AT ALL -- for its otherwise ridiculous slowness compared to its competitors. Sorry to have to tell you that, from the point of view of having worked in a paperless law office for 15 years, therein 4-9 hrs a day with PDF documents usually in the 600 to 1,800 page range.
    CtDave wrote:
    Precluding the entry of anything 'renderable' assures OCR will be accomplished for each page.
    David Peters wrote:
    Just stop bullshitting users with niminy-piminy, finicky "you'd better follow well-defined protocols" advice.
    There are zillions of cases where one has neither ANY control over nor access to the (external) creation of one's PDF files, even if people like you seem unable to imagine that, and therefore repeat the same pointless sermon over and over, which does not add ANYTHING to the case.
    The solution to this problem, by the way, is not a question of more (lost) decades of rocket-science Adobe bloatware engineering, but simply:
    If there is "Renderable Text" somewhere on some fuⅽʞing page:
    then just SKIP IT and continue OCR with the next fuⅽʞing bitmap
    gosh darn it.
    Not that I would hope that any of this would change anything about the course of the megaton tanker Adobe, which I usually avoid like the plague -- with the only two exceptions of ClearScan and, of course, Acrobat 7 Pro, the last Acrobat version that was not only fairly usable but actually is a pretty amazing piece of software.

  • Trying to OCR a PDF; Acrobat says it can't perform OCR because the PDF already contains renderable text - but it does not.

    I work for a large agency, and we receive PDFs all the time. 98% of the time I am able to OCR a document with no issues. Just recently I have come across this issue several times, and was wondering if anyone can solve this irritating problem!
    Acrobat 8.1 - When going to OCR the document, I receive the following message: "Acrobat could not perform recognition (OCR) on this page because this page already contains renderable text." However, it does not. When you go to select text or search for anything, the whole page is selected (like it's still in a "picture" format, not a document format that you can search, etc.).
    I am not sure if it is how the document is uploaded originally by the other party that causes this, but the only workaround I have is to print out the entire document, scan it, and then I can OCR the document just fine! The problem is, if the document is 400 pages or so, this can be a huge waste of time and money just to be able to search the PDF.
    I have also checked the PDF properties to see if this is some sort of permissions issue, and there are no permissions/security settings in place.
    PLEASE HELP! Any assistance in this matter would save me a lot of time, and of course (my sanity!).
    Thank you in advance!

    While the alert speaks to "renderable text", that is a simplification. The issue is that your PDF page content consists of at least one renderable "character".
    Look at font families - you will observe that there are many characters that are not "text" characters (i.e., linguistic characters).
    So, there's a "renderable character" present. It may be an alphanumeric character that has a font color the same as the page background. It may be under the image and thus not visible to the eye.
    You might be able to determine just what is present.
    You could export the page of interest to a text file then view that file.
    You could display the page of interest in Acrobat Pro then select the "Content panel" to view the content tree.
    Locate and click on the page number for the page of interest.
    From the Content panel's Options menu select "Highlight Content".
    Walk down the tree. Select the content containers in turn and observe what is highlighted on the PDF page.
    Where might the renderable character come from? Typically that'd be associated with something in the workflow.
    Not always easy to find, so don't take anything in the workflow for granted.
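    If you would rather script the hunt than walk the Content panel, the Acrobat JavaScript console can also report how much "renderable" text Acrobat sees on each page: any page reporting more than zero words is one that will block OCR, and printing those words usually exposes the stray character. A minimal sketch, offered only as an illustration alongside the steps above (in the console, "this" is the active document):

    // Flag pages that contain renderable text and preview the words Acrobat extracts from them
    for (var p = 0; p < this.numPages; p++) {
        var n = this.getPageNumWords(p);            // 0 means a pure image page; OCR should run there
        if (n > 0) {
            var words = [];
            for (var w = 0; w < n && w < 20; w++) { // cap the preview at the first 20 words
                words.push(this.getPageNthWord(p, w, false));
            }
            console.println("Page " + (p + 1) + ": " + n + " word(s): " + words.join(" "));
        }
    }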
    Be well...

  • Keep getting "renderable text" error when I need to OCR PDFs from FrameMaker.

    My solution has been to individually extract all those pages, then open them up in Photoshop, flatten them and widen the canvas size to a standard 8.5 x 11.
    But that's a little tedious and time-consuming, and you have to delete the original page from your document after importing the OCR-friendly page.
    Is there a printer definition, or something you can set up when you're generating your PDFs in the first place, that will get rid of that annoying "renderable text" error?

    Ok...
    I don't know how it happens but after I save my work in FrameMaker or MS Word, and print to PDF for the final output, there are often pages with text in them that isn't recognizable, or that can't be found with a CTRL+F search.
    That is a serious issue, and one we might be able to help you with, but really, quite separate from the issue here. It's too late to try and fix this once it is a PDF.
    What is it that’s lost when OCR is run?
    Quality. Small file size. Tags (which might be required legally). Almost everything except the basic text, and that might also be lost given that OCR is not guaranteed to work. This is NOT the right way to solve your problem.
    The translator doesn't have any Adobe products except Reader, so I'm limited to Acrobat to show her how the words and pictures are laid out on a page. In order for her to copy and paste that text - or search it, to find all of the places where the same word might be used - I need to make sure every word is there for her to grab.
    I have heard of translators trying to work with PDFs, and few that succeed. You can reasonably expect a translation service to support FrameMaker. But if they don't, I recommend you extract the text from FrameMaker to a simple Word or text file. They should be fine using the PDF as a visual reference and having the text to translate, which you can then flow back into the original layout. (Again, something I'd expect a full-service translation house to do themselves, but there are advantages to keeping control too.)
    Those were the 2 pages that gave the "renderable text" error. Don't ask me why or how, they look like all the other pages in that document. Except Acrobat thinks they're scanned graphics, that's how they present when you wave the cursor around in them, hunting for text.
    Renderable text is just text. It means that somewhere on that page there is text. Surely there is layout, page numbers, whatever from FrameMaker on the pages. If not, we really need to look at your production methods - back to the first point.

  • BUG - in flash pro CC, 'bold' and 'italic' properties of TextFormat have no effect on rendered text

    Concise problem statement:
    If you compile with flash pro CC, and use the 'setTextFormat' method of a TextField, the 'bold' and 'italic' properties of the TextFormat argument have no effect on the rendered text. If you compile with flash pro CS6, the 'bold' and 'italic' properties work as expected.
    Apparently, with flash pro CC, the only way to make the text render correctly is to change the font name (add the suffix ' Bold', ' Italic', or ' Bold Italic'.) This means code which dynamically changes font styles only works in CS6 or CC, but not both. For example, if you use the 'bold' property the text renders bold in CS6 and regular in CC, whereas if you change the font name to add the suffix ' Bold', the text renders bold in CC and DOES NOT RENDER at all in CS6. This makes it difficult to transition a team from CS6 to CC.
    Steps to reproduce bug:
    1. Create an xfl with 2 TextFields on the stage, both with font "Trebuchet MS" and style "regular", one named boldTrueText containing the String "bold = true", one named fontNameText containing the String "fontName = Trebuchet MS Bold". Create 2 more TextFields on the stage for visual reference, both with font "Trebuchet MS", one with style "regular", one with style "bold".
    2. Add the following code to the Actions panel on frame 1:
    import flash.text.TextFormat;
    import flash.text.Font;
    var format:TextFormat = boldTrueText.getTextFormat();
    format.bold = true;
    boldTrueText.setTextFormat(format);
    format = fontNameText.getTextFormat();
    format.font = "Trebuchet MS Bold";
    fontNameText.setTextFormat(format);
    // Trace the fonts the SWF can see, with the name and style the runtime reports
    var fonts:Array = Font.enumerateFonts(), count:int = fonts.length;
    for (var i:int = 0; i < count; i++) {
        var font:Font = fonts[i];
        trace("fontName: " + font.fontName + ", fontStyle: " + font.fontStyle);
    }
    3. Save, and compile with flash pro CS6 and flash pro CC.
    Results:
    With flash pro CS6, "bold = true" renders bold, and "fontName = Trebuchet MS Bold" DOES NOT RENDER.
    With flash pro CS6, the following is traced:
    fontName: Trebuchet MS, fontStyle: bold
    fontName: Trebuchet MS, fontStyle: regular
    With flash pro CC, "bold = true" renders regular, and "fontName = Trebuchet MS Bold" renders bold.
    With flash pro CC, the following is traced:
    fontName: Trebuchet MS, fontStyle: regular
    fontName: Trebuchet MS Bold, fontStyle: bold
    Expected results:
    The same text is rendered in both flash pro CS6 and CC. I don't know why this behavior was changed in flash pro CC - it causes silent failures in code which dynamically changes font styles. I expected the flash pro CS6 behavior to remain the same in CC, like so:
    With flash pro CC, "bold = true" renders bold, and "fontName = Trebuchet MS Bold" DOES NOT RENDER.
    With flash pro CC, the following is traced:
    fontName: Trebuchet MS, fontStyle: bold
    fontName: Trebuchet MS, fontStyle: regular
    If you don't want to break backward compatibility (any further), you could make both the behaviors work in flash pro CC, like so:
    With flash pro CC, "bold = true" renders bold (font is still "Trebuchet MS"), and "fontName = Trebuchet MS Bold" renders bold also.
    With flash pro CC, the following is traced:
    fontName: Trebuchet MS, fontStyle: bold
    fontName: Trebuchet MS, fontStyle: regular
    fontName: Trebuchet MS Bold, fontStyle: bold
    I submitted this bug with the bug form, and also with adobe bugbase (in case it isn't obsolete) - I'm just trying to maximize my chances of getting a fix.  Has anyone else encountered this bug?

    I just can't believe that there is ZERO documentation for any of this. Flash's stylesheets have fontStyle and fontWeight properties, but they only recognize regular/italic and regular/bold respectively.
    This change in Flash CC completely breaks systems built in Flash CS6, and the font naming is actually arbitrary and is not a consistent combination of font name and style (e.g. "Eras ITC" family's bold font name is "Eras Bold ITC", but the bold version of Times New Roman is "Times New Roman Bold", with Bold at the end rather than the middle). What's absolutely appalling is that the font name used at runtime is not exposed anywhere in the Flash IDE!!! In the IDE you select a font family and font style independently, which is absolutely not what's used at runtime, because it actually uses a separate, arbitrarily named field in the font file for the font name. So we can't even know from within Flash what the proper runtime name is, unless we trace it out or open the font properties details tab in Windows Explorer.
    It seems that Flash CC is always using the font "Title" that can be found in the properties of the font file, NOT the font name displayed in Windows Font Preview or in Flash CC. For example, the font name for Times New Roman Bold in Windows Font Preview is just "Times New Roman", but the font title in the properties/details tab is "Times New Roman Bold". If they made the change to allow specific fonts to be selected, that's fine, but it completely breaks HTML support in TextFields if it's not respecting bold and italic tags.
    This may actually be a trend on the web now. If you read this: http://www.smashingmagazine.com/2013/02/14/setting-weights-and-styles-at-font-face-declaration/, it says: "If you've used one of FontSquirrel's amazing @font-face kits, then you're familiar with this approach to setting weights and styles. The CSS provided in every kit uses a unique font-family name for each weight and style, and sets the weight and style in the @font-face declaration to normal. [...] Notice that the font-family names are unique, with each font-family name accessing the appropriate Web font files."
    But there's just no mention of this in any documentation I can find.  What the hell.
    It's also helpful to realize that font and u tags have been deprecated in HTML5, while b and i tags have been repurposed since they still retain semantic meaning apart from style: https://www.w3.org/International/questions/qa-b-and-i-tags

  • Rendering text

    I am rendering a video that contains a substantial amount of text, perhaps accounting for 50% or more of the entire scene content.
    It is agonizingly slow, leading me to believe that rendering text is significantly slower than captured material.
    Is this correct?
    Thank you.

    Which text generator(s) did you use?
    I haven't found that the text generators require noticeably more rendering time than most other transitions or effects.
    You do have a lot of text in your sequence - "... 50% or more of the entire scene content ..." - that's a lot of material to have to generate & render.

  • Renderable text and indexing

    My ultimate goal is to have as complete a "Full Text Index" as possible. To that end, I have a couple of questions. I have both scanned PDFs and PDFs that have been created from Word or PowerPoint and have renderable text.
    For those files with renderable text:
    If a page has renderable text, is all of the text included when the “Full Text Index” is built?
    If a page has renderable text and an image, is the text within the image ignored when the “Full Text Index” is built?
    If I OCR the document, if a page has renderable text and an image, is the image ignored during the OCR process?
    If I OCR the document, if a page has an image and no renderable text, is the text within the image recognized during the OCR process?
    Thanks.

    1. If a page has renderable text, is all of the text included when the “Full Text Index” is built?
    The key question is "Do the renderable texts' fonts map to Unicode?" Fonts that map to Unicode are searchable.
    Such will be harvested by the Catalog index.
    Using Acrobat Pro you can create a preflight to check for this (a rough scripted check is also sketched below, after question 4).
    2. If a page has renderable text and an image, is the text within the image ignored when the “Full Text Index” is built?
    Ok, a scanned image of text has no "text" just pixels that look like text - it is all just an image.
    So, nothing to be harvested by the Catalog index. The page's renderable text will be harvested by the Catalog index.
    3. If I OCR the document, if a page has renderable text and an image, is the image ignored during the OCR process?
    A page of a PDF that has renderable text cannot be OCR'd.
    Provided the page's content is an image containing pixels representing characters, OCR will attempt to recognize these and provide an output.
    If Searchable Image or Searchable Image (Exact) is used the "recognized" output is a hidden/invisible layer (text rendering mode 3 - no stroke, no fill).
    The scanned image remains on the page. Searchable Image 'tweaks' the image. Searchable Image (Exact) does not.
    So, for both you have the scanned image and the hidden layer of OCR output.
    Alternatively, if you use ClearScan, recognized characters (in the image) are replaced with an Acrobat-generated font.
    Anything not recognized is left as a bitmap of the 'character'.
    So, the image is not ignored by OCR as it is what OCR analyses to recognize characters for providing an output.
    A Catalog index will harvest the OCR output of any of the three OCR modes.
    4. If I OCR the document, if a page has an image and no renderable text, is the text within the image recognized during the OCR process?
    OCR will attempt to recognize the pixels that represent characters and when recognized provide an output.
    A Catalog index will harvest the OCR output.
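    Relating back to question 1, a rough check short of a full preflight profile is to pull the words Acrobat itself extracts from a page: if text you can see on the page comes back empty or garbled here, its font most likely does not map to Unicode and the Catalog index will miss it too. A minimal sketch for the Acrobat JavaScript console, meant only as an illustration of the point above:

    // Dump the text Acrobat can extract from one page - roughly what a Catalog index would harvest
    var page = 0;                                   // 0-based index of the page to inspect
    var extracted = [];
    for (var w = 0; w < this.getPageNumWords(page); w++) {
        extracted.push(this.getPageNthWord(page, w, true)); // true strips trailing white space from each word
    }
    console.println(extracted.join(" "));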
    Be well...

  • OCR renderable text error

    Someone was having the problem below with an older version of Acrobat.
    Is there now a solution in Acrobat X for Mac?
    I note that exporting to an image file loses quality and increases file size.
    Thanks
    Well, since this is the digital age, it makes sense that I ought to read the PDFs in digital form (this is a stretch for me, I really like paper), which is facilitated by a tablet since I can actually see the page when it's in the portrait configuration. It also makes sense that I ought to mark up the file in Acrobat, using the native highlighting and searching tools, which is also facilitated by the tablet for obvious reasons.
    Here's the problem. Apparently *every* PDF file, in every digital library, is tagged with headers, or footers, or Bates numbers, or some other tag that halts the OCR recognition of the PDF file. If you google "This page contains renderable text", you'll see that this has been a complaint since Acrobat 6 at least. So you can't just OCR the document and get a nice, mark-up-able document.
    Now, I know what you're thinking. There has to be a workaround, right? Of course, there is. You can manually remove the headers and try again. Oh, now there's a footer; you can take that out too (manually) and try again. Oh, now there's a Bates number, okay, take that out too. There's STILL some renderable text in there somewhere, well, now you can either try and edit out the blocks of renderable text (again, manually, made more entertaining by the fact that you can't just right-click on the page and say "remove renderable text"), or you can export the entire document to a graphics file (say, a TIFF), re-convert it to a PDF file (which turns the entire document into a rasterized image), and THEN run the OCR tool to get an actual mark-up-able document. This process is made more enjoyable by the fact that Acrobat will turn that 300-page dissertation you're reading as part of your research into 300 distinct TIFF files, which you then need to recombine into a PDF file. Multiply this by 100, and you'll see what sort of a barrier to productivity this is for me to get started organizing my existing document collection.
    This is CLOSE TO THE DUMBEST THING I HAVE EVER SEEN. And I've seen a LOT of bad design. Rather than prompting me "This document has renderable text" and giving me "Cancel" as the only option, any feature-driven developer would say, "Gosh, people get really frustrated by this. I know, because I can read the results of a simple google search. We need to change this right away! Here, I'll make it so that you can just click 'Treat existing renderable text as white space' or even prompt the user to rasterize the renderable text and embed it in the document, then OCR the resulting file!"
    The only conceivable reason I can imagine that this hasn't taken place is because your lovable electronic document vendor wants to make it a colossally, enormously painful process for someone to actually do anything to the document they're providing you to use. Thank you, electronic document vendor. You're going to be wasting about 20% of the time that you're saving me by giving me electronic access to this document in the first place.
    Progress is grand. Collide it with self-interest, and progress seems to lose out more often than not.
    Now, if you'll pardon me, I'm going to go get some sleep. Then I'm going to get up in the morning and go to work. Then I'm going to come home, and instead of enjoying some family time with my kids, I'm going to fart around with manual document conversion.

    Elias,
    I completely agree with your anger. I ran into the same problem and I think I have figured out a workaround. I wrote up a blog post about it.
    http://www.ideationizing.com/2011/03/ocr-acrobat-pdf-with-renderable-text.html
    I hope this works for you.

  • OCR Renderable Text and Print to PDF Problems

    I have Adobe Acrobat version 8.1 on my PC laptop and have recently had trouble with OCR at work. When I try to run OCR I get a pop-up box telling me Acrobat cannot perform the function because of renderable text, however if I run OCR on the same file using my desktop PC (which has version 10 installed) I have no trouble. Also, I am no longer able to print items (such as a webpage) to PDF, and instead receive a pop-up box telling me something about the AdobePDF.dll file being missing or not functioning, and an error message when I double click the printer icon at the bottom right corner of my screen. I uninstalled Adobe Acrobat and then reinstalled it yesterday, however it did not fix either problem. Does anyone have any suggestions?
    Thanks!

    What Windows version are you using? I think your problem might be due to a corrupt operating system. Try reinstalling the OS and reinstall Acrobat soon after.
    Good luck!

  • Acrobat could not perform recognition (OCR) on this page because this page contains renderable text

    I have a PDF file which was OCRed by some other software. I am not satisfied with its accuracy and would like to run Adobe Acrobat to OCR it again.
    But Adobe says "Acrobat could not perform recognition (OCR) on this page because this page contains renderable text" for each page of the file.
    What can I do to re-OCR the file? Thanks!

    CtDave,
    I am having a similar problem and tried your suggestion but I still get the same message. Any other thoughts or fixes? The difference in my file is I used to be able to edit the text in the PDF file. For the last few months I am no longer able to edit this file.

  • OCR error: This page contains renderable text

    First time caller, first time user: Acrobat 8.0 Pro. I purchased Acrobat to begin to OCR a lot of PDFs I've made. The first one I've tried to OCR gave me the error message: "Acrobat cannot OCR this page. This page contains renderable text." What does this mean, and is there a workaround to allow the page to be OCR'd?
    I'm not trying to turn it into a Word document, I just want to have searchable text, so I believe I chose Searchable Image, English and 600 dpi, but no dice.
    Roxylee

    I keep coming across this renderable text issue in documents I am processing for the web, and frankly, the TIFF work-around is not adequate. We pay a lot for Adobe Professional, and expect more than such a lame work-around which hinders the output quality. In addition, this is a tedious solution for large documents as well.
    I have found several other posts on the web about this, with some references to Adobe updates fixing this problem. I have tried them all and none work. Very frustrating.

  • Premiere rendering text with question mark

    My Premiere is rendering text with question marks. It looks fine at first but then it changes. Ow! The text file is an After Effects file that I opened up in Premiere.

    That's okay, I found the problem. It was the text in After Effects; the box around the text was too small.

  • Invoke-WebRequest - Unable to get rendered text

    Hi
    This issue exists in PowerShell, but I suspect it will exist no matter what language is used.
    The problem is that the rendered web page has chat text.
    However, no matter what I do in PowerShell, I can't get to this text.
    If I do a "view source" in IE or Chrome the text is missing; if I pipe the ParsedHtml to a file, it is missing.
    So below you can see the original screen, and the F12 debugger says the text lives inside "rooms-view-right-pane".
    Visibly you can see it; programmatically you can't see it.
    >> So what is the magic that is missing, stopping me from capturing the rendered text?
    Thanks in advance
    <div class="rooms-view-right-pane">
    </div>

    Well for what it's worth, you can have what is below. I ain't proud of it - it's pretty dodgy, but in this example it retrieves points for a profile. The number of points does not appear in the source, only in the generated source. 
    It would be easy to keep it checking the page and comparing the new value for the points with the old...
    Like I said, there has to be a better way.
    GenSource.hta
    <html>
    <head>
    <title>GenSource HTA</title>
    <HTA:APPLICATION ID="oHTA"
    APPLICATIONNAME="GeneratedSource"
    SHOWINTASKBAR="no"
    ICON=""
    CONTEXTMENU="no"
    SCROLL="no"
    SCROLLFLAT="no"
    SELECTION="no"
    SINGLEINSTANCE="yes"
    SYSMENU="no">
    <meta http-equiv="x-ua-compatible" content="ie=9">
    <style type="text/css">
    <!--
    html {font:100%;}
    body {text-align:center;font-family:Arial,sans-serif;margin:0;padding:0 0 30px;filter:progid:DXImageTransform.Microsoft.Gradient(GradientType=0, StartColorStr='#0099ff', EndColorStr='#000000');}
    #wClose {width:15px;height:16px;float:right;margin:0 10px 0 0;padding:0;color:#ddd;background:transparent;border:1px solid #ddd;}
    #wClose:hover {color:#f00;}
    -->
    </style>
    <script type="text/jscript">
    window.onload=function(){
        window.resizeTo(500,350);
        window.moveTo((screen.width/2-204),(screen.height/2-200));
        wClose.onclick=function(){close();};
        // Point the iframe at the page whose script-generated content we want
        document.all.frame1.src="https://social.msdn.microsoft.com/profile/greg%20b%20roberts/?ws=usercard-mini";
    };
    // Called by the iframe's onload; polls until the script-generated "points" element is populated
    function frameloaded(){
        var frameEl=document.frames["frame1"].document.getElementById("points");
        if(!frameEl||frameEl.innerHTML=="-"){setTimeout(frameloaded,500);}
        else{
            el.innerHTML=document.frames["frame1"].document.getElementById("displayName").innerHTML;
            el2.innerHTML=frameEl.innerHTML+" points";
        }
    }
    </script>
    </head>
    <body>
    <div id="wClose">X</div>
    <iframe width="500" height="200" id="frame1" onload="frameloaded();">Test Page</iframe>
    <div id="el" style="font-size:20px;color:#fff;width:500px;height:30px;margin-top:20px;"></div>
    <div id="el2" style="font-size:20px;color:#fff;width:500px;height:30px;margin-top:20px;">Loading...</div>
    </body>
    </html>

  • FF4 print to PDF (Acro Pro) renders text as image (FF3 did not)

    When I create a PDF with Acro Pro using FF4 the text is rendered as an image. FF3 did not do this (other browsers render text as text as well). Win 7 Pro 64-bit, FF 4.0

    Oh man, you really made my day!!
    This nasty error has been torturing me for several weeks now.
    I installed and reinstalled all the Adobe software again and again, with no effect; I checked and rechecked all settings against other PCs, since they all made perfect PDFs.
    But not this one!!
    Unchecking the "hardware acceleration box" did the trick.
    Who would have thought of that one!!
    Just look at my happy face now.
    Mozilla should really fix this weird bug.
