Web-Crawler Compliant Output from RoboHelp 11

We are using RoboHelp 9 to generate our online document library in WebHelp format, and we are trying to make it searchable by third-party search engines (Google CSE or Swiftype). However, we found that the web crawlers can't retrieve any pages from our site, and this seems to be related to the frames used in the WebHelp HTML. I've seen comments from people who think that non-frame-based HTML5 would solve our problems, so I have been evaluating both Flare 10 and RoboHelp 11 and generating output in Responsive HTML5 to see if this works.
So far we've had no luck with this. We've even tried generating a sitemap.xml, which in theory solves all our problems, but the web crawlers don't pick up anything from that either.
Has anybody been down this route and got any ideas?

Search engines will only pick up all content automatically when you create Pure HTML WebHelp or Section 508 compliant output. For all other outputs, including Responsive HTML5, you need to provide a sitemap.
Just creating a sitemap is not enough: you also need to register the sitemap with the search engine, as it won't pick up the sitemap automatically. All major search engines have detailed instructions on how to submit sitemaps.
I have a commercial script to generate sitemaps for help output (Sitemap generator | WvanWeelden.eu), and I submit those sitemaps to the search engines.
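In case it helps, a sitemap is just an XML file listing the topic URLs you want indexed. A minimal hand-written sketch (the domain and topic file names below are placeholders, not from any real project):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/help/introduction.htm</loc>
    <lastmod>2014-01-15</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/help/getting_started.htm</loc>
    <lastmod>2014-01-15</lastmod>
  </url>
</urlset>

Each <loc> entry points directly at an individual topic page, so the crawler can reach the content without going through the frameset start page.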
Kind regards,
Willam

Similar Messages

  • Pagination issues in Word output from RoboHelp HTML 8

    Hi All,
    We're generating a Word document from our online help (RoboHelp HTML 8) with MS Word 2003 installed. The page numbers inserted automatically are wrong in that they skip numbers. For example, scrolling down through the document, the page number after 95 is 97. Sections of the doc are paginated correctly, but then we come to a few pages where it skips a number when assigning page numbers. Some of our users have MS Word 2003 and others have 2007, but the pagination issue occurs in both versions of Word. (BTW, I'm developing WebHelp Pro on Windows XP.)
    Today when I clicked in the Word TOC and pressed F9 to regenerate, it corrected some of the pagination problems but not all (go figure).
    Is there a patch I need to download or anything?
    Thank you!
    Mendonite

    Do the unnumbered pages have any content? This rings a vague bell; I recall it being a Word issue related to blank pages.
    Have you actually printed the document?
    See www.grainge.org for RoboHelp and Authoring tips
    Follow me @petergrainge

  • Not able to view the webhelp output in Robohelp 9

    Hello,
    I am on a project where we have to create WebHelp output from RoboHelp 9. Whenever I generate the WebHelp output I am not able to view the contents. The WebHelp opens in a browser window, but there are no topics present in the contents. The other outputs like .chm and .dox are fine, but there is some problem with the WebHelp which I am unable to figure out.
    Regards,
    Ruso

    What browser are you using?
    If Chrome, try something else. Chrome works fine with help on the server but not locally. There is more in Snippets on my site about the Chrome issue.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • HP Network Printer periodically prints a page from a web crawler

    I support an HP Color LaserJet CP2025dn which occasionally prints a page that says:
    GET http://www.baidu.com/ HTTP/1.1
    Host: www.baidu.com
    Accept: */*
    Pragma: no-cache
    User-Agent:
    Sometimes the GET HTTP/1.1 statement is from  http://www.sciencedirect.com.
    I'm guessing that this is caused by some sort of web crawler that hits port 9100 at this printer's network address. Is there any setting on the HP Color LaserJet CP2025dn that I might make to stop these pages from printing?
    [ I know that blocking this port at a router may solve the issue, but I'm looking for a solution that fixes the issue at the printer, thank you. ]
    Thanks for your help!
    -Ken

    Waynely,
    Simple Explanation:
    Internet users, likely Chinese, scan addresses (computers, or other devices that are networked) on the Internet that accept certain connections. They do this to probe for vulnerabilities to exploit, or to route traffic through machines as a “proxy.” Being a proxy means loading and sending information at the request of another. This can be used to bypass local security, such as China’s severe Internet censorship. It can also be used to hide traces back to the source, such as when a criminal doesn’t want to be located when hacking.
    The printers are set up such that they can communicate outside the local network. This allows printing from other networks, but it also means the printers can be located by Internet users doing scans. Printers appear to these scans as a possible proxy. The printed pages are the commands sent as a test of possible proxies. The command means “tell this device to load this website and report back.” The printer interprets the request as a print job and prints out the given command.
    This needs a fix, but it is not critical. There is a risk of wasted supplies, and the pages are a nuisance. There is a smaller risk of a printer vulnerability being exploited for more serious use, but that would be a complex and targeted attack rather than a broad scan.
    Possible Solutions:
    1) Block this traffic with a firewall (see the sketch below)
    2) Block this traffic with printer access controls
    3) Update the firmware on the printer to block this
    4) Change the network configuration to not be accessible from the Internet.
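    As a rough sketch of option 1, assuming a Linux-based router or firewall sits between the printer and the Internet (the subnet and printer address below are placeholders):
    # Drop raw-print (JetDirect, port 9100) traffic to the printer unless it comes from the local subnet
    iptables -A FORWARD -p tcp --dport 9100 ! -s 192.168.1.0/24 -d 192.168.1.50 -j DROP
    For option 2, the equivalent on the printer itself is usually an allowed-IP range in the embedded web server's access control settings, which achieves the same result without a separate firewall.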

  • Create PDF output from Web Layout Report

    Is there a way I can create a PDF output from a Web Layout Report (NOT from a Paper Layout)? The reason I ask is because editing the .jsp Web Layout is very easy and flexible, while editing the Paper Layout is very difficult. Thanks.
    - Todd

    Hi Todd,
    Please refer to this link:
    paper layout & web layout
    As for your second statement, it is a matter of opinion and I beg to differ that "... Web Layout is very easy and flexible while editing on the Paper Layout is very difficult."
    Best Regards,
    John

  • Generating 508 compliant output for access via SharePoint

    In order to make our WebHelp application accessible via a Sharepoint folder, our team lead was able to move all output files (from the ISSL/WebHelp folder) to a single folder then move the files from the resource, whdata, whgdata and whxdata folders to the same [new] location so that all files associated with the RoboHelp WebHelp project are in the same area.  Next, he edited the whproj.xml file and changed the datapath from "whxdata" to " " (null). This works fine for primary layout output that isn't 508 compliant, so I duplicated the same steps after generating 508 compliant output to see if I'd get the same results.  After moving all files to a single folder and uploading to a test server (not Sharepoint), the table of contents page [frame] "can't be found".  I know that the toc tag in the whtoc.xml file has a root attribute that points to "whtdata0.xml".  This file is in the same folder as everything else.  So why does it not show the TOC?
    Thanks,
    Developer2000

    Perhaps the best way to answer this is to ask what your response would be if your support people came to you saying a customer didn't like the structure of the folders in your application, so they had rearranged them, and now it no longer works. What should they do?
    Perhaps best not to post the reply here.
    The folders and files that RoboHelp generates have many inter-dependencies, and it could be that some are in files you cannot edit. Who knows? It is not something Adobe would document, and I have not seen anyone post anything that would help you.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Is anyone else having issues with interactive PDF output from InDesign CC (2014)?

    I am currently trying to produce a document with a level of interactivity I have achieved many times before.
    The interactivity is just a web-style menu: showing the menu reveals button set one, clicking on those shows a second set of buttons, and clicking one of those goes to the destination page and hides all menu buttons once more.
    When output from ID CC 2014, the result is loads of missing buttons and no working menu.
    It's not a corrupt ID file. I tried a brand new empty file, recreated the basic structure with simple one-state buttons, and achieved similarly useless results.
    If I back-save using IDML, open in CS5 and export from there, it works beautifully, so the problem is not, as many recent ones have been, to do with Acrobat/browsers.
    If I back-save and open in InDesign CC (not 2014) and export from there, it doesn't work, so the problem is shared there!
    Anybody any suggestions? Experienced similar and found a cure?
    SOLVED BY A COLLEAGUE:
    You can no longer use full stops/decimal points in button names. The menu was hierarchically labelled as 1, (1.1, 1.2), (1.1.1, 1.1.2), etc.
    Renamed to 1_1, 1_1_2, etc., and all functionality has returned. Why you could before and now you can't, who knows!
    Hope this helps you if you stumble across here with the same issue.

    Thank you for reporting this.
    I would consider this a bug in InDesign CC v9.x and v10.x and would report it:
    Adobe - Feature Request/Bug Report Form
    (Cannot test it right now.)
    Uwe

  • Script Errors in TOC from RoboHelp 9

    Hello, all. I can usually find the answer to my problems by searching existing threads, but I can't seem to find a solution this time. I recently upgraded from RoboHelp 7 to 9.0.2.271. I have published WebHelp locally and then checked files in to a Team Foundation Server to be deployed to a testing environment. Locally and on the QA site, the output looks fine for me and the TOC works as expected. However, other users on Internet Explorer 7 and 8 (I have IE8) get script errors when trying to click on pages in the TOC.
    I zipped my local files and dropped them on a shared drive and asked one of the users to copy them to his machine and open the start page with his browser. He still gets the script errors, whereas another user can open them locally with no problems. This makes me think the problem is not in the RoboHelp files themselves, but I am not sure.
    To further complicate matters, in the staging environment, a different server from QA, the TOC and Index are empty! This is obviously a separate issue, but I am stumped. This issue occurs in IE 6, 7 and 8. It does NOT occur in IE 9 or Firefox.
    Any ideas?

    This issue was fixed with a reply from another discussion:
    Re: RH8 > WebHelp > TOC won't render in IE8 via HTTPS - Fix/workaround?
    If the TOC/Index/Search is not working in IE7/8/9 via HTTPS, please try the following steps:
    Go to https://acrobat.com/#d=WqbdTq-2R79ToU08-zfBEw
    Download the IESearchIssue.zip file and unzip it.
    It will create a folder IESearchIssue, which has two subfolders:
    RH8.0.2
    RH9.0
    If you are using RoboHelp 8.0.2, go to the RH8.0.2 folder; if you are using RoboHelp 9.0, go to the RH9.0 folder. Copy the new whutils.js from that folder.
    Go to the <RoboHelp Install Folder>\RoboHTML\WebHelp5Ext\template_stock folder and rename the file whutils.js to whutils.js.bak.
    Paste the copied new whutils.js into the same folder.
    Regenerate the WebHelp output for the required project.
    Host it on your web server and check whether it works.

  • Compiler output question

    I've been working on a Java assignment that requires me to write a command-line program which will read data from a text file, store the data in a vector, and then save the vector to another file in serialized form. I have written this program using the Kawa compiler software.
    I have what I believe to be a working program; it compiles without reporting any errors, but when I try to run the program, I get the following message in the compiler output window:
    'Exception in thread "main"'
    Can someone tell me what this means?
    Thanks.

    Thanks for the suggestions, but I found a solution for it after posting the question. Another problem has come up now though.
    As I stated, the program is supposed to read the data from a text file, store it in a vector, and output it to another file in serialized form.
    The input file consists of many records of computers and peripherals. Each record uses four lines of the input file, and is displayed like this:
    Description (String)
    Price (float)
    VAT Price (float)
    Quantity (integer)
    Now I think the output file is supposed to look the same as the input file, but the one my program generates doesn't. My program code is as follows:
    import java.util.*;
    import java.io.*;
    class TextToVector {
         public static void main(String[] args) throws IOException {
              try {
                   Vector v = new Vector();     //creates the vector
                   String inFile = "C:/My Documents/V3/Data/stocklist.txt";     //name, type & location of the source file
                   v = readRecord(inFile, v);     //reads the records from the source file into the vector
                   FileOutputStream outFileStream = new FileOutputStream("C:/My Documents/V3/outfile.dat");     //where to create the output file & what to name it
                   //the paths used in these lines can be modified to read/send a file from/to
                   //wherever the user wants; the user must also set these identical paths in
                   //the "Interpreter Options" for this to work correctly
                   ObjectOutputStream objOutStream = new ObjectOutputStream(outFileStream);
                   objOutStream.writeObject(v);     //writes the vector (and the StockItems it holds) to the file
                   objOutStream.flush();            //purges the objOutStream
                   objOutStream.close();            //closes the objOutStream and the underlying outFileStream
              } catch (IOException e) {
                   System.out.println("Exception Writing File ");
                   System.exit(1);
              }
         }

         public static Vector readRecord(String inFile, Vector v) throws IOException {
              String inString;
              String D = " ";
              float P = 0.0f;
              float V = 0.0f;
              int Q = 0;
              int count = 0;
              //BufferedReader used to read each line of the source file
              BufferedReader buffReader = new BufferedReader(new FileReader(inFile));
              try {
                   while ((inString = buffReader.readLine()) != null) {
                        count++;
                        if (count == 1) {
                             D = inString;                                   //Description
                        } else if (count == 2) {
                             P = Float.valueOf(inString).floatValue();       //Price
                        } else if (count == 3) {
                             V = Float.valueOf(inString).floatValue();       //VAT Price
                        } else if (count == 4) {
                             Q = Integer.valueOf(inString).intValue();       //Quantity
                             //a complete four-line record has been read; store it and start the next one
                             StockItem record = new StockItem(D, P, V, Q);
                             v.add(record);
                             count = 0;
                        }
                   }
                   buffReader.close();
                   return v;
              } catch (IOException e) {
                   System.out.println(e.toString());
                   return v;
              }
         }
    }

    class StockItem implements Serializable {
         private String D;
         private float P;
         private float V;
         private int Q;

         StockItem() {
              D = " ";
              P = 0.0f;
              V = 0.0f;
              Q = 0;
         }

         StockItem(String Description, float Price, float VATPrice, int Quantity) {
              D = Description;
              P = Price;
              V = VATPrice;
              Q = Quantity;
         }

         public String getDescription() { return D; }
         public float getPriceValue() { return P; }
         public float getVATPriceValue() { return V; }
         public int getQuantityValue() { return Q; }
    }
    Can anyone see anything wrong with this? I'd appreciate any suggestions. Thanks.
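    As a quick check on what actually ends up in outfile.dat, here is a minimal sketch of reading the serialized vector back (it assumes the same path and the StockItem class shown above; it is only a verification aid, not part of the assignment):
    import java.io.*;
    import java.util.*;
    class VectorCheck {
         public static void main(String[] args) throws Exception {
              //path assumed to match the output file written by TextToVector above
              FileInputStream inFileStream = new FileInputStream("C:/My Documents/V3/outfile.dat");
              ObjectInputStream objInStream = new ObjectInputStream(inFileStream);
              Vector v = (Vector) objInStream.readObject();     //reads the whole vector back
              objInStream.close();
              for (int i = 0; i < v.size(); i++) {
                   StockItem item = (StockItem) v.get(i);       //each record read from the text file
                   System.out.println(item.getDescription() + " " + item.getPriceValue()
                        + " " + item.getVATPriceValue() + " " + item.getQuantityValue());
              }
         }
    }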

  • SES web crawler not able to connect to a particular URL, fetching fails

    Using Linux 64-bit SES 11.1.2.0
    With crawling depth 5
    A web crawler has been created on "http://www.advancedinnovationsinc.com"
    Document fetching fails at depth 2.
    I have enabled DEBUG logging, from which I found that all URLs at depth 2 or more are getting an "HTTP/1.1 400 Bad Request" error.
    So is this an SES product issue or a usage problem?
    Below is a snapshot of the generated log.
    01:43:16:054 INFO filter_0 urlString:http://www.advancedinnovationsinc.com/../index.html
    01:43:16:054 INFO filter_0 hostname :www.advancedinnovationsinc.com
    01:43:16:054 INFO filter_0 filepath :/../index.html
    01:43:16:054 INFO filter_0 Port :-1
    01:43:16:054 INFO filter_0 useSSL :false
    01:43:16:054 INFO filter_0 useProxy :true
    01:43:16:054 INFO filter_0 ==== DEBUG==== ------ New readWebURL ENDS ----------
    01:43:16:304 INFO filter_0 ==== DEBUG==== m_header:[Ljava.lang.String;@122b7db1
    01:43:16:304 INFO filter_0 ==== DEBUG==== statusLine:HTTP/1.1 400 Bad Request
    01:43:16:304 INFO filter_0 ==== DEBUG==== start:9
    01:43:16:304 INFO filter_0 ==== DEBUG==== statuscode:400
    01:43:16:304 INFO filter_0 ==== DEBUG==== URLAcess.java:2232
    01:43:16:321 INFO filter_0 EQG-30009: http://www.advancedinnovationsinc.com/../index.html: Bad request
    01:43:16:321 INFO filter_0 Documents to process = 7
    01:43:16:329 INFO filter_0 ==== DEBUG==== m_currentURL:http://www.advancedinnovationsinc.com/../create.html
    01:43:16:329 INFO filter_0 ==== DEBUG==== m_urlString:http://www.advancedinnovationsinc.com/../create.html
    01:43:16:329 DEBUG filter_0 Processing http://www.advancedinnovationsinc.com/../create.html
    01:43:16:329 INFO filter_0 ==== DEBUG==== ------ New readWebURL STARTS ----------
    01:43:16:329 INFO filter_0 urlString:http://www.advancedinnovationsinc.com/../create.html
    01:43:16:329 INFO filter_0 hostname :www.advancedinnovationsinc.com
    01:43:16:330 INFO filter_0 filepath :/../create.html
    01:43:16:330 INFO filter_0 Port :-1
    01:43:16:330 INFO filter_0 useSSL :false
    01:43:16:330 INFO filter_0 useProxy :true
    01:43:16:330 INFO filter_0 ==== DEBUG==== ------ New readWebURL ENDS ----------
    01:43:16:736 INFO filter_0 ==== DEBUG==== m_header:[Ljava.lang.String;@122b7db1
    01:43:16:736 INFO filter_0 ==== DEBUG==== statusLine:HTTP/1.1 400 Bad Request
    01:43:16:736 INFO filter_0 ==== DEBUG==== start:9
    01:43:16:736 INFO filter_0 ==== DEBUG==== statuscode:400
    01:43:16:736 INFO filter_0 ==== DEBUG==== URLAcess.java:2232
    01:43:16:738 INFO filter_0 EQG-30009: http://www.advancedinnovationsinc.com/../create.html: Bad request

  • Producing a WORD document from RoboHelp

    Hi,
    I am trying to produce a Word document from RoboHelp using the Single Source Layout. The Output View says, "Waiting for Project Documentation VBA macros to be registered." I then receive an error message stating that Word is not responding.
    How do I resolve this issue? Thank you in advance for your feedback.
    okaye

    Hi there
    Sorry, but my crystal ball is broken again so I can't tell what version or flavor of RoboHelp you are using. Nor am I able to discern what version of Word is on your system.
    Can you help us out a tad and tell us this info?
    Thanks... Rick
    Helpful and Handy Links
    RoboHelp Wish Form/Bug Reporting Form
    Begin learning RoboHelp HTML 7 or 8 moments from now - $24.95!
    Adobe Certified RoboHelp HTML Training
    SorcererStone Blog
    RoboHelp eBooks

  • Do you know if it's possible to generate Word 2003 files from RoboHelp if we're using Office 2007 and RoboHelp 2007?

    Do you know if it's possible to generate Word 2003 files from RoboHelp if we're using Office 2007 and RoboHelp 2007? We are thinking of upgrading, but have customers that would still require Word 2003 formats because they won't have 2007 installed.
    Thanks!

    Hi NewtoRobohelp
    Unfortunately I don't have Office 2007 in front of me to test with. But I'm thinking that as long as Office 2007 still produces the same formats as 2003, you could do it this way.
    From RoboHelp 7 and using Office 2007, generate printed output. Open said printed output in Office 2007 and perform a Save As. Save the document as an RTF (Rich Text Format) document.
    RTF is more universal, so I'm thinking this may be a possible way around it. There is also the possibility that Word 2007 offers the ability, when saving a file, to save in an older format.
    Cheers... Rick

  • CS3 Save for Web bugs - No answer from Adobe in all forum posts

    I just upgraded to Illustrator 13.0.02 and the problem is the same: Slice names and output settings are not remembered/saved like all previous AI versions.
    I don't understand why this post was closed: CS3 Save for Web Problems
    http://www.adobeforums.com/webx?128@@.3bc41aeb.
    and this one: "Save for Web" names of frames vanish
    http://www.adobeforums.com/webx/.3bc4cd31/2
    It is the same as: AI CS3 - Save for Web & Devises image name problem
    http://www.adobeforums.com/webx?128@@.3c057eab
    Concerning slice name: It looks like they now have to be saved via drop down menus in order for Illustrator to remember the slice names for export again: "Object - Slice - Slice Options, and then in Save for Web, set the Output Settings for Saving Files"
    In my opinion this is incredibly poor UI design. In prior versions of Illustrator, I would save the names in the Save for Web dialog box by simply double clicking the slice frame, and it would remember the names of my slices for export again.
    Clicking through drop down menus just to name a slice is inefficient compared to just double clicking a slice frame to name the slice.

    This is a bit of an old thread, but I too have recently discovered this problem in working with AI CS3.
    I contacted Adobe support with the question. I asked them why it was not possible to select and optimize individual slices in the Save for Web and Devices dialog in CS3, and then maintain those settings after saving the slices or clicking "Done"... even though that very feature was available and working in CS2.
    Adobe's answer was, quite simply, that they have ceased any development on CS3, including bug fixes, and that anyone who wants the problem fixed will have to buy CS4 in order to "fix" the problem.
    In short, they are quite aware of the problem, but would rather have us pay for a new product in order to have it fixed, than to pay a programmer to spend a few hours or a few days in tracking down the problem and getting it sorted out. This is their short-term solution to a long-term problem.
    There is a workaround to the slice naming, as you have found - name the slices from the Object - Slice - Slice Options menu. It's a royal PITA, I know, but it does maintain the slice name settings.
    However, there is no real workaround to save the optimization and output settings (such as color tables and JPEG/GIF/PNG settings) for each slice. It's a completely broken feature, or in Adobe's own words, "a problem". A big fat bug. Let us not mince words here - it is technical and corporate incompetency. Technical incompetency can be excused - publishing a new build will fix the problem. But corporate incompetency, which tells the programmers that they don't need to fix the problem for "marketing" reasons, is totally inexcusable.
    It doesn't cost Adobe anything to just shelve a problem... at least, not now. But I refuse to buy Illustrator CS4 as a result, because I don't want to give in to their ineptitude and lack of attention to the customer in this case. Which costs them more now, to pay the programmer to fix the error and then publish a new build on their web server... or to tell the customer that the problem won't be fixed and to buy the newest version? You do the math. Read 'em and weep.
    Makes you want to migrate to Fireworks for web comp design, doesn't it. At least Illustrator has an excuse - it's an all-purpose vector graphics application, not specifically a web comp design app. If this were Fireworks, on the other hand, I think that there would be oodles of furious programmers screaming colorful obscenities at Adobe's front door.
    I really like Illustrator for what it does, but I'm not using CS3 for any more web comps after this.
    Jeff Chapman

  • Creating a Web Crawler

    Hi! I have a question. I've already created a web crawler and I used a web data source for it. How can I check what data source my crawler is connected to? I forgot what data source my crawler is using. Thanks.

    If you're able to, the quickest way may be to look in the database at the DATASOURCEID column in the appropriate row in the PTCRAWLERS table, or "select ds.NAME from PTDATASOURCES ds, PTCRAWLERS cr where cr.DATASOURCEID = ds.OBJECTID and cr.OBJECTID = <yourcrawlerid>"

  • Poor Output from AME (Premiere 4)

    I love Premiere, but I hate the Adobe Media Encoder.  I think part of the problem may be that I am producing custom video for desktop and the web as opposed to media for DVD, Blu Ray, etc.
    I am producing a series of short 16:9  Quicktime videos at a custom size from high-res progressive source files rendered in 3DS Max.
    I have found that the Media encoder settings seem to be very restrictive with regard to bitrate settings, and that the best available quality when encoding to H.264, for example, is just plain crappy, and nowhere near as good as I can produce if I use the Sorenson Squeeze trial.
    I think some codecs are just poorly implemented. Both of the Sorenson codecs cannot be configured, and bitrate options are limited. The resulting video is horrible compared to the same result from the Sorenson encoder.
    The Animation codec for desktop QuickTime is just plain broken. Even at the best quality settings, the output is full of noise and artifacts and is markedly inferior to the output from the video encoder in Max or even in the cheap little Camtasia screen recorder.
    In order to get clean video in QuickTime format, I have tried every combination of codecs I can think of. I can't set the bitrate high enough with H.264 or most lossy codecs, and the animation and graphics codecs are clearly broken. I seem to have to export huge files directly to component or raw output in order to get high enough quality video... and then, of course, I have to find another tool to transcode that to something I can actually distribute.
    What am I missing? I really want to use H.264. Is there a way to get at the deeper encoding options so I can set up the VBR options and set a higher bitrate than the little slider on the options panel seems to allow?
    What are other people doing?  Are most people using a third party compression or encoding utility like Squeeze?  If so, it sucks, because I didn't count on having to spend an additional 900 bucks on top of the cost of Adobe Master Collection.

    ThreeDify,
    Thanks for the info. I will try this out. I'm getting very disgusted with AME. Yeah, there are lots of presets... but the output is terrible. Kinda like what you find with having to use Squeeze. Am I missing something? There are posts all over the place that AME doesn't get it done, so to speak. Where is Adobe addressing these issues?
    Does After Effects use a different encoder? Why can I produce 'high quality' with AE and not with AME??? Yeah, it takes longer to encode in AE, but the final output is good. Hmmm.
    My goal here is to render to 1280x720 and make it for presentation with my 'high power' computer, then upload it to YouTube HD. When I try to convert it for YouTube it looks like crap.
    Question: What viewer do you use for .f4v files? I'm using Adobe Media Player, and the .f4v files I have already made don't look good at all.
    I view with QuickTime and Windows Media Player and get two different levels of quality with both. Same problem on the internet: IE8 doesn't view YouTube videos as well as Firefox... Yes, the dilemma goes on, and on, and on.........
    I have got to get this to work..
    I'm going to watch this thread for awhile.  Hopefully others will share their successes.
    Thanks.
    Dave
