PDF viewer for large files

Hello,
I want to view a very large PDF file (~1 GB, ~500 pages). The viewers I tested (Okular, Evince, Xpdf) are slow: as I browse through the document, each page takes a noticeable time to load before it is displayed, and when I switch to the next page I have to wait again.
Is there a viewer that loads the rest of the document in the background while I read the first pages, so that after a while I can scroll through the document without waiting?
Last edited by oneway (2009-05-12 19:28:07)

Thanks so far.
acroread renders the pages very fast. That's what I want, but it's proprietary. :/
For the moment it's OK, but I'd like a free alternative to acroread that renders pages as quickly, or that preloads more than one page.
lucke wrote:Perhaps, just perhaps it'd work better if you copied it to tmpfs and read from there?
I've tried it: no improvement.
Edit: tried Sumatra: an improvement compared to Okular etc. Thanks.
Last edited by oneway (2009-05-12 21:50:10)

Similar Messages

  • Need PDF Information for large files

    Hi experts,
    I need some information from you regarding PDF.
    Is there a newer Adobe version with support for large files (over 15 MB, over 30 pages)?
    Is batch processing possible with PDF?
    If not, please suggest possible Adobe replacements.
    Thanks
    Kishore


  • How can I quickly view pdf files like I can do with Windows Picture and Fax viewer for jpg files?

    How can I quickly view PDF files the way Windows Picture and Fax Viewer lets me view JPG files? I need to look at several thousand PDF files, and it takes too long to open each one individually. The only thing I could think of is combining them into large groups and then using the navigation index, but I like the way Windows Picture and Fax Viewer keeps the files separate. Combining PDFs loses the individual file names, and that would be a problem since I do need the individual file names.

    Windows Picture and Fax Viewer is a DLL and is started via a rundll32.exe call and can't be set as an application to handle images in Firefox 3 and later versions.
    Try to set Windows Picture and Fax Viewer as the default viewer in Windows, then it should be listed automatically in the Mozilla Firefox Browse dialog.
    http://www.winhelponline.com/articles/115/1/Windows-Picture-and-Fax-Viewer-as-the-default-viewer-in-Mozilla-Firefox.html

  • PDF viewer for MBP

    I'm looking for a good PDF viewer for my MBP, any recommendations?  Preferably freeware.  The current one I'm using doesn't display PDF files properly in the browser.

    tanvivien wrote:
    Sorry if I didn't make myself clear. I'm actually looking for a plug-in to allow me to view PDF files when using Firefox.  I had one installed earlier but somehow it doesn't allow me to preview PDF contents in Firefox any more.  Anything to recommend?
    Did you review Firefox's pdf plug-ins?  https://addons.mozilla.org/en-US/firefox/search/?q=pdf&cat=all

  • Thumbnails for large files

    OK, so now I've learned that for my large panorama files I won't get a thumbnail in the Organizer, just a blue triangle warning with an exclamation point. I do a lot of golf course work where I stitch photos and print 38 x 11 inches @ 300 dpi.
    So what's the workaround? Is this a problem in the big Photoshop program? Sure, I can copy the file, reduce the resolution, save it with a different name, etc.
    What the heck! You can't tell me the program writers at Adobe can't make this work for large files! Why the limit? By the way, what is the limit?
    Thanks

    By the way...what is the limit?
    http://kb2.adobe.com/cps/402/kb402760.html
    Juergen

  • How to create thumbnail view for html files

    Hi,
    I want to create a thumbnail view for HTML files, not for image files. Can we treat HTML files as images?
    Can anybody help me?

    You can right-click on your Desktop and select New Folder. In Finder, File > New Folder should work too; I'm not in front of my Mac.
    Welcome back, by the way. You might find these websites helpful.
    Switch 101
    Mac 101

  • Faster alternative to cfcontent for large file downloads?

    I am using cfcontent to securely download files so the user
    cannot see the path where the file is stored. With small files this
    is fine, but with large 100 MB+ files it is much, much slower than
    a straight HTML anchor tag. Does anyone have a fast alternative to
    using cfheader/cfcontent for large files?
    Thanks

    You should be able to use Java to handle this, either through
    a custom tag or you might be able to call the Java classes directly
    in Coldfusion. I don't know much about Java, but I found this
    example on the Web and got it to work for uploading files. Here's
    an example of the code. Hope it gives you some direction:
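    The core idea behind such Java examples is chunked streaming: read the file in fixed-size blocks and write each block straight to the response, instead of buffering the whole file in memory. A minimal sketch in plain Java (class name, buffer size, and the byte-array stand-ins are illustrative; in a servlet or CFX tag the OutputStream would come from the response):

```java
import java.io.*;

// Sketch: copy an InputStream to an OutputStream in fixed-size chunks,
// so a 100 MB+ file is never held in memory all at once.
public class FileStreamer {
    static final int CHUNK = 64 * 1024; // 64 KB per write

    public static long stream(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[CHUNK];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {  // read up to CHUNK bytes
            out.write(buf, 0, n);           // forward exactly what was read
            total += n;
        }
        out.flush();
        return total;                       // bytes sent, useful for logging
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[300_000];    // stand-in for a large file on disk
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long sent = stream(new ByteArrayInputStream(data), sink);
        System.out.println("bytes streamed: " + sent);
    }
}
```

    In a real download you would also set the Content-Disposition and Content-Length headers before writing, so the browser shows a proper save dialog and progress bar.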

  • Explorer crashing when generating thumbnails view for large (4GB) video (.TS) files

    I am suffering consistent application crashes in Windows Explorer when it tries to generate thumbnails. 
    The error is reported as COM Surrogate has stopped working with an APPCRASH of dllhost.exe in module msvcrt.dll.
    This only, but always, happens in the following circumstances: a folder set to view tiles or icons (with the thumbnail view option NOT disabled) contains several sub-folders, each of which contains video files. As Explorer scans the
    folder and its sub-folders, generating thumbnails for each, whenever it encounters a video file (.TS) that is 3.99 GB or larger, COM Surrogate crashes. Closing COM Surrogate will then allow Explorer to continue to scan the remaining sub-folders
    until it encounters the next large file.
    If I subsequently open one of the sub-folders where it crashed trying to generate a thumbnail, and refresh the folder, it will successfully generate a thumbnail for the video file. If I then return to the main folder and change the customization of the sub-folder
    from video to document and back to video, explorer then successfully generates the folder icon with the inset video thumbnail.
    I have tried this in safe mode and the problem persists.
    Can anybody please help as I am pulling my hair out.

    Hi Roger.
    Almost all of my media files are TS, and all of my large (4GB) files are TS files, so I cannot tell if other file types have the same problem.  I did try renaming the TS files to be .mpg (they still play as mpg) but this did not help.
    However, I have downloaded and used ShellExView and it has proved very useful. For the benefit of anyone else with a similar problem, I will detail the technique I used with this very useful tool.
    First, using ShellExView, I disabled various groups of extensions to identify which, if any, were relevant. After disabling/enabling each set I performed a system restart to ensure the changes took effect. This was not actually necessary, as restarting
    Windows Explorer (via Task Manager) works equally well (but this loses my custom Taskbar toolbar).
    Also, before re-opening the folders containing the problem files, I deleted the icon cache (I could have done this manually but chose to use CCleaner's delete Thumbnail Cache option as this was quick and easy).  I then viewed the folders containing
    the problem files, watching to see what effect the changes in ShellExView had made.
    Using ShellExView, I progressively disabled by type (e.g. Thumbnail, Icon Handler), then by company (e.g. non-Microsoft), then by product (e.g. MS Office, Windows Search). The result was that I identified the extension that was creating the thumbnails:
    Icaros Thumbnail Provider (installed as part of the K-Lite codec pack).
    Hoping I had found the cause, I removed K-Lite from my system and checked again. No thumbnails (as expected), but COM Surrogate continued to crash (I have subsequently reinstalled K-Lite as I need it to play my media files). Further analysis
    with ShellExView then identified a further problem extension: Microsoft's MF MPEG Property Handler.
    With both extensions (Icaros Thumbnail Provider & MF MPEG Property Handler) disabled, COM Surrogate does not crash.  With either one enabled, but not the other, COM Surrogate crashes.  These two extensions are therefore both individually causing
    the crashes.  This leads me to wonder if they perhaps have something in common (e.g. a dll).
    Although disabling these extensions stops the crashes, I of course also lose what they provide: file details (e.g. media length) and thumbnails. So simply disabling them is not a solution.
    Is it possible to identify what, if anything, they have in common (which may be the true culprit)?
    Alternatively, are there other ways I can continue to obtain file property information and thumbnails without using these extensions?

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger than default file downloads but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows,
    SharePoint or IIS optimizations?  The files will often be downloaded from the Internet, so we will not have control over the download speed.
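    For illustration, the IIS/ASP.NET knobs usually involved in large-file limits look like the following (values are examples only, and SharePoint normally manages these settings itself via Central Administration and the web application's general settings, so treat this as a sketch rather than something to paste in):

```xml
<!-- Example web.config fragment; limits shown allow roughly 2 GB requests -->
<configuration>
  <system.web>
    <!-- maxRequestLength is in KB; executionTimeout is in seconds -->
    <httpRuntime maxRequestLength="2097152" executionTimeout="3600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes -->
        <requestLimits maxAllowedContentLength="2147483648" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```

    Note that these limits mainly govern uploads; download speed is dominated by the client's connection, which we cannot control.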

    SharePoint is capable of sending large files, it is an HTTP stateless system like any other website in that regard. Given your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    I see information like this posted warning against doing it as if large files are going to cause your SharePoint server and SQL to crash. 
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large file in the SharePoint database causes problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed or that the server may run out of memory because downloaded
    files are held in RAM.

  • FTP Sender Adapter with EOIO for Large files

    Hi Experts,
    I am using XI 3.0.
    My scenario is File -> XI -> File. I need to pick up files from an FTP server; there are around 50 files, each about 10 MB in size.
    I have to pick the files from the FTP folder in the same order they were put into it, i.e. FIFO.
    So in the sender FTP communication channel I am specifying:
    QoS = EOIO
    queue name = ACCOUNT
    Will the files be picked up in FIFO order into the queue XBTO_ACCOUNT?
    So my questions are:
    what is the procedure for specifying the parameters so that the files are processed in FIFO order?
    And what would be the best practice to achieve this from a performance point of view?
    Thanks
    Sai.

    Hi spantaleoni,
    I want to process the files using the FTP protocol in FIFO order,
    i.e. the file placed first in the folder should be picked up first, the next one after that, and so on.
    So if I use FTP,
    then to process the files in sequence, i.e. FIFO,
    the processing parameters
    QoS = EOIO
    Queue name = ACCOUNT
    would process the files in FIFO order.
    And for processing large files (10 MB in size), what would be the best polling interval in seconds?
    Thanks,
    Sai

  • Why does the Reduced Size PDF command create larger files?

    I was a happy user of Adobe Acrobat version 7 until Adobe ceased support for that version and, after my last Mac OS 10.6.8 update, I was unable to print and do other activities, so I was forced to upgrade to Adobe's latest Acrobat X, version 10.0.0. I had hoped to be able to produce Acrobat 3D files, but found out later that Adobe no longer includes that; I would have to buy it as a plugin from Tetra 4D for another $399.00, oh well. I then wanted to reduce the file size of some larger PDFs I had edited. It took a while to locate the command, now in File > Save > Reduced Size PDF, and once I ran the command on my file it actually increased its size! Why did this happen? Should I continue to use my disabled, Adobe-unsupported version 7 to get the superior performance I once had? This issue was mentioned in an earlier thread that started out asking where to locate this command, but the resulting larger-file issue remained unanswered. I am currently unhappy with my purchase. Will Adobe offer a free upgrade to fix this?

    I agree with Peter. Use Pages '09, but keep a document migration strategy present. When you create a Pages document in Pages '09, export it to Word .doc too. Export any current content in Pages v5.2.2, and then stop using it. That means no double-clicked documents.
    Pages v5 is not an evolution of Pages '09; it is an incomplete rewrite that uses a different internal document architecture than Pages '09 did. This internal architecture is to blame for the export translation bloat into Word's different document architecture. It does not matter whether you type a single character or multiple pages; you will get about a 500 KB .docx file.
    Opening, and resaving this monster .docx file in MS Word in Office for Mac 2011, LibreOffice, or TextEdit will result in a very small .docx footprint.

  • PDF viewer for Firefox?

    Does anyone know how to view PDF files from the Firefox browser? I downloaded the PDF viewer and it works with Safari, but I usually use Firefox.
    Thanks

    moscowotters, the extension you referred to doesn't work as advertised on the Mac version of Firefox: it brings up the dialog as expected, but if you choose "Open PDF," the PDF still launches Adobe Reader or Preview, rather than opening the PDF in the browser.
    Sadly, PDF viewing in the browser appears to be limited to Safari for the time being.

  • PDF viewer for Firefox 4.0b7

    I know that the Firefox PDF Plugin for Mac OS still doesn't work for the newer versions of 4.0b. I want to know if this problem bothered anyone enough so that they went and found a workaround of some sort till the PDF plugin does come out. Also the Google Docs reader doesn't do me any good so that suggestion would probably be useless.
    Before the update to 4.0b7, I was running a patched version of the PDF Plugin but the patched version broke, as expected.
    Also as a side note, video playing is broken in the new update.

    Nah, I have a PDF printer installed for that. I am trying to view PDFs without downloading the file to my computer. Firefox had a plugin for that for the previous betas and I wanted to check if there are any workarounds or if there is a patched version available somewhere.

  • Reinstate Preview as PDF viewer for Safari

    I am trying to get Preview to work as the default PDF viewer in Safari. I have deleted the Adobe Reader and Silverlight plug-ins from Library/Internet Plug-ins. I selected Preview as the default app for PDFs (all instances) in the info pane of a PDF. I restarted Safari and rebooted the Mac. Now it seems that Safari has no ability to view a PDF inline; it immediately downloads any PDF I click on and opens it with Preview.
    I looked for a way to reinstall Safari, although I'm not sure that is a solution, but did not find a Safari download on the Apple website. Do I need a plug-in or should the functionality be built into the browser? 
    How do I get a PDF file to display as a web page in Safari using Preview?
    Thanks -
    **OS X 10.10 Yosemite on a Macbook Pro Retina 13"

    Hi,
    Right-click or control-click a PDF file on your hard drive, then click: Get Info.
    In the Get Info window, click the pop-up menu where you see: Open with.
    Select Preview. If Preview isn't available in the list, click: Other.
    Navigate to the Preview app, select it, then click the Change All button.
    Carolyn

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter's "Recordset Structure" property (e.g. Row, 5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, a file of 700 MB comes in (say with 20000 records) and the destination file should have 20000 records.
    To ensure no records are missed during processing through XI, QoS EOIO is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before the file is read by the sender file adapter.
    XPATH conditions are evaluated in the receiver determination to either append the records to the main destination file or create a trigger file containing only the trigger record.
    The problem we are facing is that "Recordset Structure" (e.g. Row, 5000) splits in chunks of 5000, and when the remaining records of the main payload number fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample XML file representing the inbound file, with the last record, with Duns = "9999", as the trigger record that will be used to mark the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, set the "Recordset structure" to "Row,5" for the sample inbound XML file above.
    I have two XPATH expressions in the receiver determination to take the last recordset, with Duns = "9999", and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record, with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
              <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPATH condition in XML Spy and it works fine. My doubts are about the "Recordset structure" property set to "Row,5".
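    For reference, receiver-determination conditions of roughly this shape (the ns prefix and the embedded double quotes follow the sample file above; the exact expressions may differ) illustrate the behavior: the condition is evaluated against the whole split message, not against individual rows, so any chunk that contains the trigger record is routed entirely to the trigger channel.

```
Trigger channel:      /ns:File/Data/Row/Duns = '"9999"'
Destination channel:  not(/ns:File/Data/Row/Duns = '"9999"')
```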
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba

    Hi Debnilay,
    We do have a 64-bit architecture and still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve
