How to download a web page intact as one file, which can be easily done with Safari

On Safari, all I do with a complex web page is 'Save As' (whatever name I wish, or its existing description). That saves as a complete web page in a single file. Firefox saves it as a file PLUS a folder of elements, which takes up twice my desktop real estate and makes going back too complex. Chrome has a sort of fix which now only works occasionally. I am probably overlooking some kind of add-on, but have yet to find it. Any thoughts?

Safari saves web pages in the ''web archive'' format. Basically, it rolls every element of the web page into a single file. This provides the convenience of having everything in one file, but it does not necessarily save space. Note that this format is not very portable: other browsers such as Internet Explorer cannot open a web archive file.
Since Firefox saves the web page and its associated elements separately, the result can be opened in any other browser. To save web pages in a single-file archive format from Firefox, you can try this add-on: [https://addons.mozilla.org/en-US/firefox/addon/mozilla-archive-format/ Mozilla Archive Format].

Similar Messages

  • How do I re-sequence pages in a pdf file, which is in booklet format?

    I have a pdf file which is an A5 booklet in duplex for-printing order, with two A5 pages printed on one A4 area.  I want to re-sequence these pages to simple ascending page number order, for reading online.  How do I do it?
    The first A4 page of my pdf file consists of A5 page 1 on left and A5 page 48 on right, followed by the second A4 page consisting of A5 page 47 on left and A5 page 2 on right, and so on.  [I can print this out OK as a booklet, except that the second A4 page, etc. is upside down.]
    Do I need to take the crude, grunt approach and cut and paste each A5 page, one at a time, into a new document? I am using Acrobat XI Pro.

    You can reorder pages by dragging them to new positions in the Pages pane on the left.

  • In Internet Explorer I was able to right click and load the page as a PDF file. Can I do this with Firefox?

    I have searched everywhere for this. When I go to "Tools" then "Options" then "Applications" I see that Adobe Acrobat is enabled however I do not see anything on Firefox that allows me to use it to convert a web page over to a PDF.

    It does not ''convert a web page over to a PDF''.
    If you have a link to a PDF, just open it as you would any other web page or link. It will open as a web page displaying the PDF document that it linked to.
    * see [[Using the Adobe Reader plugin with Firefox]]
    * click this link to test --> http://plugindoc.mozdev.org/testpages/pdf.html

  • How to read a web page as an ASCII file

    I need to open a URL and read its contents as if it were a simple local ASCII file. This gives me just garbage, such as [B@fd2e84b (which changes every time; it seems to be a memory address, not the contents).
    <pre>
    import java.io.*;
    import java.net.*;
    try {
        URL j_urlobj = new URL("http://www.google.com");
        URLConnection j_urlcon = j_urlobj.openConnection();
        BufferedInputStream j_bis =
            (BufferedInputStream) j_urlcon.getInputStream();
        byte[] j_data = new byte[4096];
        int j_size;
        j_size = j_bis.read(j_data, 0, 4096);
        while (j_size != -1) {
            out.print(j_data);
            j_size = j_bis.read(j_data, 0, 4096);
        }
        j_bis.close();
    } catch (Exception e) { out.print(e.toString()); }
    </pre>
    What is wrong?? Thank you!
    Stefano

    I tried your code and it works with a little change...
    The problem is that you read bytes, so if you want something readable, convert them to a String (using only the bytes actually read):
    String s = new String(j_data, 0, j_size);
    out.println(s);
    Here is the complete code:
    try {
        URL j_urlobj = new URL("http://www.google.com");
        URLConnection j_urlcon = j_urlobj.openConnection();
        BufferedInputStream j_bis =
            new BufferedInputStream(j_urlcon.getInputStream());
        byte[] j_data = new byte[4096];
        int j_size;
        j_size = j_bis.read(j_data, 0, 4096);
        while (j_size != -1) {
            out.print(new String(j_data, 0, j_size));
            j_size = j_bis.read(j_data, 0, 4096);
        }
        j_bis.close();
    } catch (Exception e) { out.print(e.toString()); }
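    A slightly more robust sketch (the class and method names here are illustrative, not from the thread) wraps the byte stream in an InputStreamReader with an explicit charset, so multi-byte characters are never split across buffer boundaries:

    ```java
    import java.io.BufferedReader;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class PageReader {
        // Reads an entire stream into a String, decoding with the given charset.
        static String readAll(InputStream in, Charset cs) throws IOException {
            StringBuilder sb = new StringBuilder();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in, cs))) {
                char[] buf = new char[4096];
                int n;
                while ((n = r.read(buf)) != -1) {
                    sb.append(buf, 0, n);
                }
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // For a live page you would pass
            // new URL("http://www.google.com").openConnection().getInputStream() instead.
            InputStream demo = new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
            System.out.println(readAll(demo, StandardCharsets.UTF_8)); // prints: hello
        }
    }
    ```

    In practice you would take the charset from the Content-Type header of the response rather than assuming UTF-8.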

  • How to prevent duplicate web pages from loading

    <blockquote>Locking duplicate thread.<br>
    Please continue here: [[/questions/930219]]</blockquote>
    how to prevent duplicate web pages from loading

    <s>Hi berternie, can you describe this in more detail?
    Are you saying the identical page loads in two different tabs? When does that happen -- when you click a link? or when you use a bookmark?
    Or do you have multiple tabs open every time you see your home page (i.e., when you start up, open a new window, or click the home icon)?</s>
    I see, you have more info in this thread: https://support.mozilla.org/en-US/questions/930219

  • Where to Download WPC [ Web Page Composer ] and How to install it ?

    Hi Experts,
    I need to download the Web Page Composer and install it for our use in my company. Can anyone help me with where to get it and how to install it?
    thanks
    Suresh

    Hi,
    Check the SAP Note Number: [1080110 |https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_ep_km/~form/handler]
    Also some links that may help you:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d07b5354-c058-2a10-a98d-a23f775808a6
    There are also lots of documents available on SDN, so just use SDN search.
    Regards,
    Praveen Gudapati

  • Download a web page, how to ?

    Can anyone help me with code for downloading a web page given the URL address? I can download the page, but the problem is that it doesn't download the associated images, JavaScript files, etc., nor does it create an associated folder as one might expect when saving a page with a browser.
    Below is the code snippet -
    URL url = new URL(address);
    out = new BufferedOutputStream(
        new FileOutputStream(localFileName));
    conn = url.openConnection();
    in = conn.getInputStream();
    byte[] buffer = new byte[1024];
    int numRead;
    long numWritten = 0;
    while ((numRead = in.read(buffer)) != -1) {
        out.write(buffer, 0, numRead);
        numWritten += numRead;
    }
    System.out.println(localFileName + "\t" + numWritten);

    javaflex wrote:
    I don't think a web crawler would work. A web crawler simply takes every link or URL on the given address and digs into it. Would it work for JavaScript? Given a URL like xyz.com/a.html:
    1. the code above would download the plain HTML;
    2. parse the HTML to find JavaScript files and images (anything else I need to look at?);
    3. download those;
    4. put everything in one folder (but the question is, do I then need to rewrite the pointers in the downloaded HTML to point at the other contents on disk?).
    This is a naive approach - anything better? Thanks.

    More advanced web crawlers parse the JavaScript source files (or embedded JS sources inside HTML files) and try to execute the scripts in order to find new links. So the answer is: yes, some crawlers do. I know for a fact that Heritrix can do this quite well, but it is a rather "large" crawler and can take a while to get to work with. But it really is one of the best (if not the best) open source Java web crawlers around.

  • I recently downloaded a web page but then i deleted history which deleted the web page. there is still a short cut to it but says file does not exist. can i retrieve this as web page no longer exixts

    as the question states i downloaded a web page but before i put it on a memory stick i changed the options on my firefox and the file is no longer there.
    there is a shortcut in the 'recently changed' folder in windows explorer but when i click on it it says the file has moved or no longer exists.
    Is there anyway to retrieve this as the web page no longer exists

    Try:
    *Extended Copy Menu (fix version): https://addons.mozilla.org/firefox/addon/extended-copy-menu-fix-vers/

  • How to open a web page in JFrame.

    Please let me know how to open a web page in the Java Frame.

    Basic HTML can be rendered in Swing with a JEditorPane (and most Swing components also accept HTML in their text labels).
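    A minimal sketch (assuming Swing's JEditorPane, which only supports a fairly old subset of HTML/CSS; the class name here is illustrative):

    ```java
    import javax.swing.JEditorPane;
    import javax.swing.JFrame;
    import javax.swing.JScrollPane;

    public class HtmlFrame {
        // Builds a non-editable pane that renders the given HTML string.
        static JEditorPane createHtmlPane(String html) {
            JEditorPane pane = new JEditorPane("text/html", html);
            pane.setEditable(false);
            return pane;
        }

        public static void main(String[] args) {
            JEditorPane pane = createHtmlPane("<h1>Hello</h1><p>Rendered by JEditorPane.</p>");
            // To load a live page instead, call pane.setPage(url) inside a try/catch.

            JFrame frame = new JFrame("Web page in a JFrame");
            frame.getContentPane().add(new JScrollPane(pane));
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(640, 480);
            frame.setVisible(true);
        }
    }
    ```

    For modern pages you would need an embedded browser component such as JavaFX's WebView rather than JEditorPane.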

  • How to include non web pages to the "Create PDF from Web Page" feature?

    In Acrobat Pro (v. 10), when I use the "Create PDF from Web Page" feature, it works great for html pages, but it skips non-html links (doc, pdf, ppt, xls, etc). I need Acrobat Pro to convert those files and put them in the order as well. I don't see an option for this in settings. Is there ANY way I can do this? This is for an archiving purpose and I have 10,000 plus files to convert. Please help.

    This is a question i'm trying to answer too. My issue is that I have a PDF file which itself contains links to both DOC and PDF files. The end result is that I need one consolidated PDF containing all the linked files (in order).
    I can run the "create from web page" on this PDF file, and it'll download them, but not convert them. It just adds them as "jumbled" text to the end of the document. I need it to download, convert, and then append them.
    So, as isunshine3 asked above, any way to have Adobe convert the files that it finds linked when running the "create from web page"?
    Thanks
    Matt

  • When downloading a Web page, Firefox includes ALL information on site, plus the Web page. I am forced to eliminate Firefox and return to Safari for saving individual page.

    For further info, I have Safari as my Mac provider for the internet, and although I click on it, Firefox takes over when I wish to download a Web page. It saves two files--a folder, in which all kinds of massive-like data will appear, and a file with some of the information I wish to save from a page or two.
    I've tried to select only the page I wish to save in several ways. However, nothing has worked until I take Firefox out of "Applications" and put it in the trash basket.
    Ideally, I'd like Safari to be my default provider, and I'd like Firefox to be my secondary provider.
    Please advise.
    Arline Ostolaza
    [email protected]

    Did you try saving the page as "Web Page, HTML only" instead of "Web Page, complete", if that is what you want?

  • Can we download a web page using SOA?

    Hi,
    We have a requirement to download a web page, whether in HTML or XML format, using SOA middleware. Is this possible? Has anyone tried this before? Any suggestions regarding this will be of great help.
    Thanks,
    SOA Team

    Hello Iyanu_IO,
    Thank you for answering my question "can I download a web page as a PDF".
    I downloaded and installed the add-on and it works really well.
    Regards
    Paul_Abrahams

  • Adobe AIR: how to make links between web pages

    Hello!
    I am trying to use Adobe AIR with Dreamweaver. The tutorial shows how to package a web page, but I cannot integrate multiple pages. Is the HTML code <a href="page2.html">test link</a> incorrect? Should I add something to the XML file?
    Thanks for help.

    It's just a basic test page:
    <html>
    <head>
    <title>AIRHelloWorld</title>
    <script>
    function init() {
        runtime.trace("init function called");
    }
    </script>
    </head>
    <body onload="init()">
    <div align="center">Hello World</div>
    <div><a href="page2.html">test de lien</a></div>
    </body>
    </html>

  • How tall is a web page?

    So we're doing pages 900+ pixels wide these days, but I'm wondering how much vertical content we can use before the user has to start scrolling - what he/she first sees when the page loads. Is there a rule of thumb (or mouse finger) for this?

    It's true that you need to make an educated guess about the people who will be viewing the site.
    For instance, if you know for a (relative) fact that they all use 12 inch monitors and IE6, that gives you an idea of what to shoot for.
    If you have a sense that they all have wraparound monitors (surely someone will invent them) that are 36 inches on the diagonal and the latest, greatest Browsers, that tells you something else.
    You can get general statistics about monitor usage from w3schools: http://www.w3schools.com/browsers/browsers_display.asp Allowing for a vertical scrollbar and top-of-screen material (toolbars, etc.), you can roughly estimate actual viewport sizes.
    If you want to keep people on your page, and not scrolling to kingdom come (how tall is a web page? how high is the sky?), there are strategies:
    * Make the header stable and the container below it scrollable within a container, so it looks like it's all on the screen
    * Make the page short
    * Make the page liquid, so it doesn't hang off the page on small monitors or cower in the corner of larger monitors
    * Use Spry Tabbed Panels, Spry Accordions, or other Spry content techniques
    You can imitate Browser/Monitor Viewport sizes by doing Window > Cascade and adjusting the size of the floating window manually (drag the right bottom corner). The size of the Viewport will show at the right side of the Tag Selector at the bottom of the Document Window (1038 x 628, for instance).
    Beth

  • Why is the MacBook Air slower than the iPad 2 at downloading a web page? Even worse, sometimes the iPad 2 can load web pages that the MBA can't.

    Why is the MacBook Air slower than the iPad 2 at downloading a web page?
    Even worse, why can the iPad 2 sometimes load web pages when the MBA can't?
