Searchable Image vs. Searchable Image (Exact) - Quality of OCR

I work in the legal field, and have always used the "Searchable Image (Exact)" setting when running OCR in-house on document productions.  I'm currently using Acrobat X when I do my own OCR. 
I have a vendor who says they use Acrobat 9 "Searchable Image" for all their document productions.  They say even though the actual image of the document is altered, they get better quality OCR results than they do with "Searchable Image (Exact)."  They say the document is deskewed so text can be read better by the OCR engine. 
My problem is that many documents are being altered.  Architectural drawings in particular are tilting drastically to the right, the edge of the drawing is being clipped off entirely, and dotted lines are being added to the image itself -- apparently where the edge of the page used to be.  This is unacceptable.  They say the entire drawing is being rotated so that the first few lines of text on the drawing are horizontal.  So if there is one small bit of slanted text, the entire drawing is tilted to match that one piece of text. 
Is it true that OCR is that much better using "Searchable Image" vs. "Searchable Image (Exact)"?  I have to select one setting for all docs, and I'm inclined not to alter our documents in any way, even at the expense of OCR quality. 
They also say that they're having trouble OCRing docs that I can OCR without a problem using both Acrobat 8 and Acrobat X.  Does that make any sense?  I switched directly from 8 to X so I'm not familiar with Acrobat 9.

Finding PDFs that have / lack OCR.
You can use Acrobat's Preflight checks for this.
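If you'd rather script a first-pass check than run a Preflight, here is a minimal sketch in Acrobat JavaScript, run from the console with the PDF open. The word-count calls are standard Acrobat JavaScript; the interpretation is my assumption -- a page reporting zero words is almost certainly image-only and lacking OCR, while a page that does report words may still be a mix of native text and un-OCRed image, so treat this as triage, not a verdict.
// First-pass triage: list pages that report no renderable text.
// Run from the Acrobat JavaScript console with the PDF open.
var pagesWithoutText = [];
for (var p = 0; p < this.numPages; p++) {
    if (this.getPageNumWords(p) === 0) {
        pagesWithoutText.push(p + 1); // report 1-based page numbers
    }
}
if (pagesWithoutText.length === 0) {
    console.println("Every page has renderable text.");
} else {
    console.println("Pages lacking text: " + pagesWithoutText.join(", "));
}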
Regarding a PDF that won't take to OCR (and has no renderable text, although text is present).
This can happen. Occasionally I have to deal with a few of these. In all cases the source PDF came out of a non-Adobe application.
Most annoying. As the PDF is all I have, I resort to a "refry" to a tabloid page size with appropriate orientation and at 400 ppi.
OCR "takes". A lesson learned - for every best practice (don't refry PDF) there can be exceptions.
Some observations (of, in my opinion, egregious activity).
--| Reprocessing to letter, portrait
Or is this a contractually specified page size/orientation?
If so, it is driving the production of problematic deliverables.
--| Refry of good PDFs to images to support OCR (for "search") 
Many of your "source" PDFs have renderable text that will support search.
Example:
-- --| PDFs sourced from Word or Excel possess renderable text.
When done properly the fonts / font sub-sets are embedded and map to Unicode. This assures the PDFs' page content is searchable "as-is". Refry of a PDF to obtain an image so one can then OCR "to make searchable" is egregious.
--| PDFs of scanned images at the source paper's page size.
Why refry by printing to a smaller page size? For usability, as a true and accurate copy of the hardcopy, one wants the appropriate page size and orientation. These days most 'discovery' work is done via shake 'n bake of the eDoc, not the hardcopy (*).
(*) If one needs hardcopy then print it. One can create imprints to paper in a plethora of ways.
The imprint can be sized to fit the desired paper size. Of course, going down in page size for a scanned image means what's on paper is not too usable. But doing so with a PDF from CAD that is vector graphics is something else. Yes, a 44 x 34 from MicroStation is "smaller" on a sheet of letter size paper. But the imprint is crisp and usable. If you're a geezer like me you'll want a magnification "glass". But, oh my, that imprint is sweet and is usable.
--|  PDF output of CAD.
Reprocessing into an image for OCR.
Most CAD applications in use can output PDF. Text will present in one or more layers on the output.
Unless the CAD fonts are of some in-house branding that fails to comply with proper "font" practices, all of this will be renderable. With proper font selection the font families used will map to Unicode. Thus "searchable".
If one's client does not want a PDF with layers then use an Acrobat Pro Preflight to flatten them.
Typically the fonts used in the PDF won't be embedded. If needed, use the Acrobat Pro Preflight to embed them.
Note that some CAD output will have vector graphics (CAD file content created with the CAD application) and raster (an image). Typically legacy drawings are scanned and the image brought into the CAD file for the drawing. A designer then redraws/remasters using vector graphics. Often this is done incrementally (it is a time-consuming activity), processing approved drawing revisions as these come in. The output PDF will have the renderable text that is associated with text placed via the CAD application. The image of text from the source raster will not be searchable. One *does not* reprocess such a PDF to an image to OCR for "search".
More (much more) is lost.
Do the trials and review what you get compared to what you had. This will validate my statement.
(I'm confident on this because I've done it -- that "what if" itch I just gotta scratch <grin>.)
A Note: OCR of "mixed" content (say a chart, plot, scanned CAD drawing having text, lines, curves, etc.) rarely yields useful OCR output. Scan a collection of these. OCR. Export / Save As to a text file. Do the compare/contrast.  Or view the "hidden" text (Acrobat lets you do that). This can be a most informative exercise.
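To view that "hidden" text without the export round-trip, a quick sketch along the same lines (again standard Acrobat JavaScript from the console; the page number is a placeholder) dumps whatever the OCR pass produced for one page so you can compare it against the scanned image:
// Dump the (possibly hidden) text of one page, word by word.
// getPageNthWord returns OCR-produced text as well as native text.
var page = 0; // 0-based: first page; change as needed
var words = [];
for (var i = 0; i < this.getPageNumWords(page); i++) {
    words.push(this.getPageNthWord(page, i, false)); // false = keep punctuation
}
console.println(words.join(" "));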
--| Always the same .... no one complains
One could write volumes on this. But then I suspect that's already been done.
~~~~~~~~~~~~~~~~~
Some nattering on my part.
An eDiscovery firm that has no "eDoc wranglers" is in harm's way.
(Actually, the wrangler is something of a joat -- a jack of all trades: competent understanding of the core discipline and competent understanding of the workflows to which the "eDocs" are subjected.
Being a programmer/developer is not a prerequisite. -- You know, like it or not, you've already joined the "wrangler" posse.)
If what you've described resides in content provided to clients, it becomes "when," not "if," a client gets left holding the bag. This may be minor; it may be major. Regardless, client ire will tend to be directed towards the eDiscovery firm, not the firm's vendor(s).
Be well...

Similar Messages

  • Acrobat XI Scanning and OCR Image Exact

    When scanning with Acrobat XI, how come I am missing the OCR feature for Searchable Image (Exact)?
    It only has ClearScan and Searchable Image....

    Don't have the scan profile do OCR.
    OCR after you have the scanner's output image in the PDF file.
    Then OCR Searchable Image (Exact) is available as a choice when you initiate OCR.
    Using Acrobat XI Pro you can build an Action that calls out the use of Searchable Image (Exact).
    OCR a directory of PDFs that hold the scanner output images.
    Close out the Action with a Save As.
    Be well...

  • How to fill up image exactly with other image

    I have two pictures of a pigeon. The background image is the ideal posture of a pigeon. The foreground image was taken by me and is not the ideal posture of a pigeon. I want to transform the foreground image so that it matches the background image exactly. What are common techniques or best practices in Photoshop to do this? I have tried to do this with CTRL + T in Warp mode, but I don't know how to use it without disfiguring the other parts of the pigeon. I have tried Puppet Warp as well, but I don't know how to use it in this case.
    Thanks!

    Well, there's also Liquify under Filters, which allows you to warp pixels using brushes. But I would probably stick to Free Transform Warp and/or Puppet Warp for this. This is a bit of a tricky task and will require some work; the end result depends on your material quality, picture resolution and patience. I know, not much help, but a suggestion to begin with would be to scale the pigeon images so they match each other as closely as possible and then get busy pushing your pixels around. 
    BTW, search YouTube for some tips on how to use Free Transform Warp and Puppet Warp. It's always easier getting visual help from a video than trying to explain this with words...

  • Placing an image exactly into the page size every time, can it be done?

    hi,
    I know a bit about InDesign; however, I was wondering:
    can I place an image exactly into the page size every time so it copies the exact measurements?
    helen

    Depends on the version of InDesign. Starting in CS3, if memory serves, you can put a frame on the master page and pre-set the fitting parameters, or assign an object style that includes the fitting options.
    Anything you place into such a frame would honor the fitting options you set.
    Peter

  • How do I make iPad images high quality in Photoshop?

    I'm a graphic designer and I tried setting the resolution higher with my 1024 x 768 image, but it still looks crappy when viewed on my iPad 2. How do I make the image higher quality when I save it? I use Adobe Photoshop CS4.
    Thanks

    Photos that are synced via iTunes get 'optimised' during the transfer, though I don't know what the optimisation actually does (and it's not possible, as far as I know, to bypass the optimisation when syncing photos). If you want your versions of the photos on the iPad then you could try the camera connection kit, emailing them to yourself (and then saving to Photos), or one of the wifi photos transfer apps - though the photos will only go into the camera roll/saved photos album (at least until iOS 5 which should allow us to move photos between albums)

  • Stil image (photo) quality in rendered project

    I'm having an issue with the quality of photos that I've added to a video. The quality of the still photo exhibits a lot of "jagged" edges along things that should be straight lines. Photos of stone walls look almost like the stones are vibrating. This happens when viewed in the canvas and in the rendered video. I have animated the photos by adjusting the scale and position at various keyframes. The effect was most notable when I used iDVD to create a DVD and viewed the contents on a TV. The photos were brought in from Aperture (Browser -> Import Files).
    Settings:
    Timeline:
    VidRate: 29.97 fps
    Frame Size: 720 x 480
    Compressor: DV/DVCPRO - NTSC
    Pixel Aspect: NTSC - CCIR 601
    Anamorphic : (checked)
    Field Dominance: Lower (even)
    Audio Rate: 48.0 KHz
    Aud Format: 32-bit floating point
    Video:
    VidRate: 29.97 fps
    Frame Size: 720 x 480
    Compressor: DV/DVCPRO - NTSC
    Pixel Aspect: NTSC - CCIR 601
    Anamorphic : (checked)
    Field Dominance: Lower (even)
    Audio Rate: 48.0 KHz
    Aud Format: 16-bit integer
    Photos (originally shot in RAW, just copied from Aperture Library via FCE):
    VidRate: 29.97 fps
    Frame Size: 1944x1294
    Compressor: Photo - JPEG
    I'm guessing some of the problems come from putting a still image into an interlaced video. I have taken the output video and run it through JES Deinterlacer, but that doesn't solve the problem. I've exported the video both with (Export -> QuickTime Conversion) and with the settings suggested in http://support.apple.com/kb/TS1611. Any help in improving the quality of the still images (or any other suggestions) would be greatly appreciated.

    OK, I looked at this export some and I have a question. Why is "Export -> QuickTime Movie" followed by changing values in another tool the right solution?
    So what I did was:
    1. Follow the steps on http://www.fcpbook.com/Video9.html. Specifically the steps to use QuickTime Pro for this (seems with Snow Leopard and FCE, QuickTime 7's pro features are unlocked).
    2. Followed the guidance http://support.apple.com/kb/TS1611 and used "Export-> Quicktime Conversion".
    It seems that in either case, you are calling the same compression codecs and thus ending up with the same thing. The file sizes are only about 1 kB different, and a diff of the data provided by MediaInfo Mac (http://mediainfo.massanti.com/) shows only some minor differences in the metadata in the files.
    So what exactly is different between the two approaches, and why is "QuickTime Movie" the better option? I hope this question doesn't come across as combative in any way -- I'm honestly curious about the differences and learning as much as I can about the different processes.
    Thanks for all of your help,
    Chris
    P.S. It looks like Anamorphicizer doesn't work well with 10.6 -- it throws an error and opens QuickTime X. Without tracing system calls or anything, my immediate guess is that it uses QuickTime behind the scenes and hasn't been updated to account for the existence of QuickTime X.

  • Saving images with quality 12 increases size. Why?

    A minutely modified image saved with a quality of 10 maintained its original size (more or less), but saving with a quality of 12 increases the size.
    I would like to know why?
    I'd appreciate your comments.

    Because that's how .jpg works. Quality settings of less than 10 usually compress your files somewhat, lower numbers resulting in smaller file sizes and more compression, with a corresponding loss of quality. Settings greater than 10 also use the .jpg compression algorithm, but they choose less compression, and so two pixels that may have been compressed together at quality 10 may now be separated at quality 11 or 12, resulting in a larger file. (But that doesn't imply an increase in quality; even at 12 there is still a slight decrease, in that the .jpg-saved version still doesn't exactly represent what you saw on the screen.)
    The amount of modification to the photo doesn't really matter in how much space the .jpg compression takes up.

  • Creating Still Images/High quality JPG's...please help

    We are in desperate need of creating quality still images from captured video. Basically, we have footage in mini DV format that we import into Final Cut HD using a Sony Clamshell deck. What is the best way to make high quality still images?
    We tried it this way: Export QuickTime as Still Image... and it just looks bad, even after de-interlacing it within Photoshop. We need it to look exactly like it does if we simply press pause on the deck or pause within Final Cut.

    When you start with DV material, the highest res you can get for a still image capture is 720x480 (non-square pixels), or the equivalent of a really bad quality cheapo still camera (~640x480 square pixels).
    To avoid the need to deinterlace, (the horrors of which are described below) find sections in your masterpiece that have VERY LITTLE motion, i.e. everyone/thing standing/existing absolutely still with the camera locked down on a tripod. These sections will yield the very best possible still images.
    If you have motion in the frame, the still images will exhibit 'tearing', which comes from the two fields of video being recorded ~1/60 second apart. The second field shows elements displaced from the first field - hence a kind of internal image shifting going on. The only real way to deal with these kinds of images is to deinterlace them - that is - decide which field you are going to keep and throw away the other.
    When you have deinterlaced the image, you have in effect reduced it from a 720x480 image to a 720x240 image. The image pixel count REMAINS 720x480 but with half the vertical information as the remaining lines are doubled or interpolated to build back to 480 lines.
    Confused yet? I hope not...
    Still, if you know you'll want stills from a project, it's better to carry around a small digital still camera, you'll get better results. Otherwise, plan on pretty small prints.
    Good luck.
    x

  • Image resize quality not saved in Save For Web Preset?

    L.S.,
    I have been working on an Action to create 12 separate PNG's for further use in my icon-software. What I do is basically this:
    Start out with a 256x256px image, save it for web four times in steps of 256, 48, 32 and 16px, (32-bit)
    apply some masks / layers, save it for web again four times from 256-16px,  (8-bit)
    set the mode to indexed colour / apply other masks and layers, and save it for web again from 256-16px. (4-bit)
    I let the sizing be done in the Save for Web step.
    The 256-sized pics give me no problem as they are not scaled down.
    The 32-bit pics give me no problem as they are scaled down, and antialias comes along, but they have 8-bit transparency: no problem.
    In the 8-bit and 4-bit versions from 48-16px, I definitely want no antialias in the sized down versions. So I set the resize quality to 'Nearest Neighbor'.
    However, Photoshop seems not to be able to remember different resize qualities in one Action.
    In every Save for Web-action that I do, when I set it to resize, it does so. But the resize quality is not taken from the setting. It seems to be taken from the last used or recorded setting.
    That is pretty annoying, because:
    - When I let the downscaled ones of the first 4, be downscaled 'bicubic sharper'
    - And I let the downscaled ones of the second 4, and third 4, be downscaled 'nearest neighbor'
    Next time I run the Action, all downscaling is done by 'nearest neighbor'. That ruins my downscaled 32-bit icons - they get jaggy.
    If I alter the steps in the action manually and set the first ones to 'bicubic sharper' again, suddenly all downscaled ones are done by 'bicubic sharper'.
    That ruins my downscaled 8-bit and 4-bit icons: they get antialiased to lime green, and that shows...
    I can save Presets in the Save For Web dialog, but as the groupbox already tells me: these presets only apply to the Image quality parameters, not to the sizing parameters. Those seem to be taken from the last run.
    I have one alternative to this: before every Save for Web step, I downsize the image myself, and undo the history step after saving. It's quite some work, and I would much rather see the sizing parameters saved in the Preset!

    That's never been saved as part of the S4W presets. Bloody annoying.

  • How to keep the same image quality after OCR ?

    Hello, I have scanned a page of a book; it has mostly black text over a white background. In order to keep good visual quality and a small file size I chose the GIF format. The GIF is 153.4KB; when I save it as PDF the file is 119KB, and after OCR it is 224.2KB (however, the resulting rich text is only 3.6KB).
    How come the PDF is smaller than the GIF? Is the GIF converted into a JPEG?
    How come after OCR the PDF is twice as big while the added text is only a few KB? Is the image converted again?
    I only want to keep my GIF as it is and OCR it.
    Is it possible?  Even though in the "Convert to PDF" settings it says "There are no settings that can be edited in CompuServe GIF"
    If not, what other software could I use ? I have DEVONthink Pro but it also converts my GIF against my will.

    niuza wrote:
    Why are you talking about DPI? The problem is not with acquisition but with conversion.
    I don't want more detail I want to keep the same quality in the PDF that I had in the source file.
    PjonesCET wrote: you can always use the PDF Optimizer (in the Advanced menu) to reduce the size of the PDF without diminishing the quality of the text.
    Well I'd like to see that, because before optimizing a PDF you must save it and when you save it the image is converted.
    Here is what I get with a 600DPI TIF converted to PDF. Notice the difference.
    Because DPI (dots per inch) density affects the quality of the OCR. The higher the DPI, the better looking and more reliable the OCR; the lower the DPI, the poorer the quality and the less reliable the scan. There are different settings under Create PDF Using Scanner; those settings affect the quality of the OCR.
    Once the document has been OCR'ed, have you tried to save as a Word document or as an RTF document? Then selected all the text and chosen a font (Arial, for example), saved as a Word document, then created a new PDF? It might clean up the look of the text.
    I will leave the answer at this and let someone else try. I don't wish to get anyone upset.

  • Library. Image Preview Quality & weird behaviour.

    Hi. I'll try to explain this the best I can.
    I never noticed this with Lightroom 2.4 ... I don't know if this has always happend but went unnoticed or is just happening after Lightroom 3 was installed.
    You browse the library thumbnails as usual, and the first time you click a thumbnail you get a full screen preview, whose quality I suppose is controlled by the 'standard preview quality' setting.
    Once in full screen, click and you get the 1:1 preview (or any other zoom factor, but I chose 1:1). This one is generated previously or as needed.
    I have the standard preview setting at maximum size, maximum quality.
    When I click on a thumbnail to open the full screen preview I get this preview (just cropped and zoomed part to explain what I'm seeing) :
    Notice the jagged lines everywhere? It's noticeable at normal full screen as somewhat deformed noses, eyes, etc. Here I enlarged it to make it even more evident.
    Well ... then I click to get 1:1 zoom, which prompts Lightroom to generate a 1:1 preview. Once done, I click back to the normal full screen preview. Now, this is the image I see:
    See the change? Now it is softer; there are no jagged lines. No deformed eyes, just a good and clean image.
    Well ... if I close the full screen preview and then open it again, the image I get is not this second, softer version already generated by Lightroom after the 1:1 preview ... no, I get the first picture again, the jagged one.
    Even more, if when I'm in thumbnails mode I click an image to get the full screen preview, and then browse the next images in this full screen mode... every image is jagged like this ... BUT, if I first click to enter the 1:1 preview and click back to the standard preview ... then ALL the remaining images show the softer version, not the jagged one.
    This only happens in library module, not in the develop module.
    Was this behaviour always present ? Is this a bug ? Is it by design ?
    (I hope I've explained it well enough; it's confusing, and English, as you might have noticed by now, is not my best language.)
    Any question, help or comment is appreciated.

    I have known and hated this bug since version 1.0. It's still there in LR3. And I taught myself how to avoid it.
    You run into this bug when you switch from Grid to Loupe and back by double-clicking the mouse. To avoid it, you should either:
    Use the keyboard shortcuts instead of mouse (Enter, Esc and Z);
    or, in Loupe mode, to switch to Grid, double-click the gray area around the photograph instead of actual image area.
    Steps to reproduce the bug:
    Scenario 1:
    1. Go to grid view.
    2. Double-click a thumbnail to go to Loupe Fit.
        The image is smooth.
    3. Double-click on image (not gray area) to go back to Grid.
    4. Double-click a thumbnail to go to Loupe Fit again.
        The image is jagged.
    5. Zoom to 1:1 and back to Fit.
        The image is smooth.
    Scenario 2:
    1. Go to grid view.
    2. Double-click a thumbnail to go to Loupe Fit.
        The image is smooth.
    3. Double-click on gray area surrounding the image (not the actual image area) to go back to Grid.
    4. Double-click a thumbnail to go to Loupe Fit again.
        The image is smooth.
    5. Zoom to 1:1 and back to Fit.
        The image is smooth.

  • Image attachment quality in Mail

    When I drag an image into a new message, and resize it using the "Image Size" option in the lower right corner of the window, the quality is very poor. No anti-aliasing is being performed, so the resized image looks very pixelated.
    To reproduce this yourself, create a new message and drag in a large image. Resize it to small using the Image Size drop-down. You should then see what I'm talking about.
    I first assumed this was a preview of some sort and that Mail would use a better method to resize the image when the email was actually sent. So I tested this by sending an image to myself, and it was still pixelated.
    I've browsed and searched through the FAQs and Discussions and can't find an answer to this problem. I can't find any preferences in Mail or in the New Message window regarding 'image quality' either.
    So, my question is, is there an option to increase the quality of image attachment resizing? I realize I can resize it in an outside application, but that's an extra step I'd rather not have to do...especially since the option is right there in Mail, which is genius.
    Any help or insight is most appreciated. Thanks!
    -Kurt
    p.s. This is Mail 2.1 (752/752.2)
    Powermac G5   Mac OS X (10.4.8)  

    The appearance will vary depending on the picture content, but if you start off with a big picture (3000x2000 pixels) and reduce it to 320x240 you are throwing out a lot of pixels. I would suggest going for the medium (640x480) or even the large (1280x960) size if you want to preserve any picture quality.
    When I am sending a picture I generally use scaling in GraphicConverter first.
    AK

  • Acrobat 9 Pro - Change Image Compression/Quality w/ Javascript?

    Hello,
    I've got a folder-level script that will merge PDFs by using "insertPages" to add a list of PDFs to a new document. The file size is too large, and I've noticed that if I go into the PDF Optimizer manually and change the Image Settings so the compression quality is "low" for both Color images and Grayscale images, it brings the file size down to where I need it to be.
    My question is, can I do this in my script somehow? I don't see anything in the JavaScript Reference or Acrobat SDK about updating images within a document. I'd really like to automate this optimization within my script.
    Has anyone ever done this before?
    Many thanks in advance!    

    Thanks again for your reply!
    Well the output folder could change... and really, it would be most ideal to not even save the single PDFs (although it looks like I might have to). All I need to accomplish is to take a folder of PDFs and merge them into one PDF (but it must be optimized and cropped). I don't need the new, saved versions of the single PDFs. I feel like my "automation" solution to this will end up being more tedious than manually clicking "combine" and merging PDFs that way, and applying the crop settings afterward. Maybe I am making this more difficult than it needs to be? Here is what I'm doing:
    1. I have a folder-level script that executes when Acrobat loads which adds two menu items. The first opens up the Batch Sequences menu so the user doesn't have to find them.
    2. From here, the user will need to run a batch sequence that will optimize and crop each PDF and save them to an output folder.
    3. Now the user will need to run another batch sequence (and choose the output folder from step 2 as the input folder now) that will collect all the file paths of the optimized PDFs.
    4. Now the user can use my other menu item from step 1 that will execute a folder level script and merge the PDFs by inserting them into a new document.
    5. I can prompt the user here to save the new document somewhere.
    It seems like there should be an easier way to do this. Is it not possible to merge the PDFs in a batch sequence and then use the output folder to save the one new PDF (rather than each of the input PDFs)?
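    For what it's worth, here is a minimal folder-level sketch of the merge-by-insertPages part, with placeholder paths; insertPages and saveAs are privileged calls, so it has to run as a trusted function. As far as I know the PDF Optimizer's image settings are not exposed to the JavaScript API at all, which is why the batch sequence detour for compression is hard to avoid.
    // Hypothetical folder-level merge sketch (Acrobat Pro JavaScript).
    var mergePDFs = app.trustedFunction(function (paths, outPath) {
        app.beginPriv();
        var doc = app.newDoc(); // starts as a single blank page
        for (var i = 0; i < paths.length; i++) {
            doc.insertPages({
                nPage: doc.numPages - 1, // append after the current last page
                cPath: paths[i]
            });
        }
        doc.deletePages({ nStart: 0, nEnd: 0 }); // drop the initial blank page
        doc.saveAs({ cPath: outPath });
        app.endPriv();
    });
    // Example call with placeholder device-independent paths:
    // mergePDFs(["/c/temp/a.pdf", "/c/temp/b.pdf"], "/c/temp/merged.pdf");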

  • Saving a Project In Pages, Images Poor Quality in PDF

    I have just purchased iWork 09. I am new to Pages and so far find the software very handy and easy to use to make very attractive documents.
    I have one dilemma though.
    I save a poster (for example) in Pages, but when I go to open it as a PDF in Preview, the images that I have placed into the document are distorted or of poor quality; when I open them in Pages they are fine... Also, when I go to open the saved document in Adobe Reader 9, it won't open. I tried taking frames and shadows etc. out just in case they are not supported, like in MS Word, but still no luck. I have to send flyers for advertisement via e-mail to clients who are more than likely using Windows XP or Vista.
    If someone could help with:
    1. Getting my images to stay good quality
    2. Opening it in Adobe Reader 9
    it would be very much appreciated
    jimbest

    You cannot see which programs have been involved in each compression, but as far as I know, at least some programs store information about both themselves (last edited by) and the compression level in the EXIF/IPTC data.
    The right way to work is to get as much data as possible off the colour capture in high bit and do the exposure correction, getting into a normalised colour space either still in high bit or in scaled-down 8 bit to save space.
    The pixels should be left alone from this phase on. It is not necessary to change a single source pixel in matching to the display, the printer or the press. All of this is done on the fly in memory and not by changing the pixels in the disk-based image document.
    As interchange file format for corrected colour captures stored in the normalised colour space, either use a linearised RGB space or high-bit CIELab. The file format should be lossless TIFF and not lossy JPEG (the way JPEG works internally is lossy by definition).
    Live Picture and Apple Aperture are alike in the way they work. Rather than taking the pixels into memory for correction, which is the way Photoshop worked, they construct a colour-managed proxy for manipulation and then provide for a rendering that is non-destructive.
    Live Picture required ColorSync 2 and Apple Aperture is unable to work without colour management for the same set of reasons. Apple Pages is unable to work without colour management for the same set of reasons, although there are additional reasons for Apple Pages.
    Just my ten cents,
    /hh

  • Encoding still image files - quality problems

    The project is almost over, and of course problems are now rearing their ugly heads.
    The .pct files I'm using for the menus seemed to encode just fine last week, with very little noticeable MPEG2 compression. Now however, without any changes to the files, the menus look terrible with blocks and noise all over the place! What the heck happened?
    Here's a before and after comparison screen grab:
    Before: http://www.moonbase9.com/images/apps/dsp/menu_before.png
    After: http://www.moonbase9.com/images/apps/dsp/menu_after.png
    The settings in DSP for MPEG2 encoding are the same for both shots. All my video material is pre-encoded .m2v files and AC3 audio. The only part I'm allowing DSP to encode is the .pct menus.
    I've also tried encoding the .pct files in Compressor and the same thing happens, no matter what quality setting I use.

    Hi There
    Try deleting the MPEGs that DVDSP created for the menus, as well as your DVDSP prefs, change the encoding preferences by, say, 0.5 Mbps, and let DVDSP encode again to see what happens.
    Cheers
    B
