Search Engine Crawls

Our web site is loaded with PDF files. When the major search engines crawl a site for indexing and ranking, do these types of files also get analyzed for keywords?
Thank you.

> Title is one of the best ways to get Google to see your keywords.
I have found this to be the case, as well.
Murray --- ICQ 71997575
Adobe Community Expert
(If you *MUST* email me, don't LAUGH when you do so!)
==================
http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
http://www.dwfaq.com - DW FAQs, Tutorials & Resources
==================
"Dooza" <[email protected]> wrote in message
news:fqmhi5$2d2$[email protected]..
> RDUBILL wrote:
>> Thanks - looks like you're kind of rolling the dice
with PDF's....
>>
>> What about dynamic pages?
>>
>> How would a 'crawler' know about our part
descriptions stored in and SQL
>> database?
>>
>> This database contains all of our descriptions that
customers would
>> search on?
>
> I built a store with Dreamweaver that is dynamic, with
over 10,000 items,
> and many levels of categorisation, Google actually loves
it. Many of our
> products get in the top 5 on Google. Have a look at the
browse by category
> page on here
http://www.aclighting.com/shop/.
When I created a google
> sitemap there were over 50,000 dynamic pages seen. This
is growing daily,
> and we get loads of traffic from Google because of it.
>
> The important thing is the page title. As you go deeper
into the
> categories each level is displayed in the title. This
all helps. Title is
> one of the best ways to get Google to see your keywords.
>
> Steve
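
For illustration, here is a minimal sketch of the kind of title-building Steve describes, written in Java (the class name, category names, and site name are all hypothetical; the actual site may well use a different server technology):

    import java.util.List;

    // Hypothetical helper that builds a <title> containing each category level,
    // deepest level first, so crawlers see the full keyword path for the page.
    public class PageTitleBuilder {

        private static final String SITE_NAME = "Example Store"; // placeholder, not the real site name

        // Given the category path for the current page, e.g.
        // ["Lighting", "Moving Heads", "Spot Fixtures"], returns
        // "Spot Fixtures - Moving Heads - Lighting | Example Store".
        public static String build(List<String> categoryPath) {
            StringBuilder title = new StringBuilder();
            for (int i = categoryPath.size() - 1; i >= 0; i--) {
                title.append(categoryPath.get(i));
                if (i > 0) {
                    title.append(" - ");
                }
            }
            return title.append(" | ").append(SITE_NAME).toString();
        }

        public static void main(String[] args) {
            // Prints: Spot Fixtures - Moving Heads - Lighting | Example Store
            System.out.println(build(List.of("Lighting", "Moving Heads", "Spot Fixtures")));
        }
    }

The point is simply that every level of the hierarchy contributes its keywords to the title, with the most specific category first.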

Similar Messages

  • Can search engines crawl Spry Accordions?

    My assumption when I decided to implement an accordion was that search engines would be able to crawl the content, due to the text being in the main document's code and not in a framed page.  With that in mind, I created multiple pages utilizing the Accordion to encompass all or most of the text content on the page.
    So the question is, was I right, and can search engines crawl Accordion content or did I do something terrible in search engine optimization?

    Yes you can, thanks Arnout.
    You can download the Lynx browser (here for instance: http://www.vordweb.co.uk/standards/download_lynx.htm), but the DOS interface is unwieldy, to say the least.  As for the Firefox plugin - I tried the Yellowpipe LynxRight version (https://addons.mozilla.org/en-US/firefox/addon/1944).  It works fine (right-click yields that page in text form in a popup window), but again a bit unwieldy compared to Delorie: very small text, embedded links not clickable.
    I find that the Delorie implementation is the most useful - easily read, embedded page links clickable, etc.  Delorie's BIG tradeoff is that it's site-specific: YOUR site, or a client's (since you have to place that delorie.html file in the root folder).  So it depends on your purposes.

  • How can I optimize my search engine? I use GoDaddy. I contacted them and they gave me this conclusion: search engines are not pulling up your other pages on your website because it is created with JavaScript. The content on the pages (photo gallery.....

    Conclusion from GoDaddy:
    Search engines are not pulling up your other pages on your website because it is created with JavaScript.
    The content on the pages (photo gallery, My Recipes/Blog, History, Friends) needs to be formatted as HTML, PHP or CSS. No search engine (Google, Yahoo, Bing) can crawl/analyze webpages built with JavaScript.
    See if Apple allows iWeb to create the pages in HTML format and ask how to make those changes.
    I spent more than 1 hour on the phone with them but we didn't find a solution...
    Please, I need help, because like this nobody can reach my website...
    www.chefdiego.com
    thank you

    The part of iWeb pages that gives the search engines problems is the navbar.  That's JavaScript based.  One way to get around that is to create another blank page in the site that's not included in the navbar.
    Add a text-based navbar on that page with just the other pages in the site.
    Also, you can use that page for creating the sitemap for Google at Create your Google Sitemap Online - XML Sitemaps Generator.
    Have you let Google know your site exists? This website describes how to get it recognized: Get your iWeb website found in search engines like Google Yahoo MSN and Bing.
    OT

  • Can't submit keywords to GoDaddy's new Search Engine Vis. Service

    So, I bought (for $3.00 per month) GoDaddy's Search Engine Visibility service. It allowed me to make and submit a sitemap, something GoDaddy would not let me do with the Rage software I bought. So far so good - but GoDaddy shows you a 10-point checklist of how you are doing. It says I have not submitted keywords. I tried typing in keywords on the GoDaddy interface. That did not work. I called GoDaddy. They said the keywords must be defined, outlined, selected - whatever - in the program that built the website. I said iWeb, but I did not find any function in iWeb that would let me choose keywords. They said iWeb can't do that, and that iWeb is not for business use. It is more for people who want to share photos with grandma and such.
    So, how do I submit my keywords to GoDaddy? I would have thought that once I put up the sitemap, Google/Bing, etc. would be able to crawl the site and automatically select keywords.
    Someone said I need to just grab the HTML code from my iWeb site, buy Dreamweaver and paste the HTML code there and then Dreamweaver would allow me to define keywords and submit them.
    Any ideas?

    After creating the sitemap.xml, Sitemap Automator uploads it to the root folder on the server - if you enter the correct info! (A minimal example of such a file is sketched at the end of this reply.)
    Before it can add it to Google, Yahoo, Bing and ASK.com, you have to notify each one of your site's existence and, in the case of Google, Yahoo and Bing, open accounts with them. Google and Yahoo require an HTML file and Bing an XML one.
    There is no keyword screen in iWeb. You decide what keywords people will use to search for any page of your site and use them in the text of that page. The search engines will find them. SEKeyword is just a tool to help you find relevant keywords, and your Google Dashboard will give you a list of the ones they find relevant once they have indexed your site.
    The steps required to submit your website to search engines are listed HERE.
    "I may receive some form of compensation, financial or otherwise, from my recommendation or link."

  • TREX search engine error

    Hi,
    I am new to KM. I installed the TREX server on a separate host and am accessing it from our portal. I created the Web Repository and created a Crawler and an Index. When I check the TREX monitor and Crawler monitor, everything is running fine. But when I search for content through the search engine I get the following error:
    Search Failure
    com.sapportals.wcm.WcmException: Index does not exist; indexId = "test_index" (Errorcode 7)
    An unexpected severe error occurred during the search call. If the situation persists, inform your system administrator.
    Can anyone suggest where exactly the problem is? I need a solution.
    Regards
    Usman

    Hi Usman,
    You probably deleted an index on the TREX side without deleting the index in KM Index Administration.
    This is simply not supported, because KM has no chance to register it.
    You have two choices:
    Either 'Reindex' this index in the index administration.
    The other option is to delete it.
    Regards Matthias Röbig-Landau

  • Flash Search Engine SDK

    Hi Folks,
    I've got some questions concerning the recently released Flash Search Engine SDK.
    Well, what I know so far is that it allows spiders from Google or Yahoo to read out the text used in .swf files.
    But how does it really work? For example: if you avoid using Flash on a website, the normal way to get an acceptable page rank is to use things like the title, keywords which are repeatedly used in continuous text, h1/h2 tags, file names, link names, ...
    But which of those things apply to .swf files as well?
    Are link names recognized as related to their target and by their content?
    Do keywords defined in the head apply to the .swf files?
    Do the spiders see dynamically generated text inside the .swf files (text you read out of external files, XML files, ...)?
    I hope you can give me some useful information on how exactly it works, because I couldn't find any useful confirmed information on the internet so far.
    Thanks beforehand.

    The spiders see ALL text on all pages. They actually run a specialized version of the player, and they go through and catch any text. They don't catch text that is invisible, off the display list, or off the stage, and they only catch text that is still text (so no broken-apart text). They do see the dynamically generated text, but the specialized player does not allow the loading of files, so if it is XML-generated, they don't catch it. A caveat there: the Google spider will mark the XML file that you try to load, and will crawl and cache it separately.
    Also, the text does not have size or color information, so weighting information visually does not work.
    Google has not released anything (even to the Adobe devs working on this) about how the information is getting weighted.
    Here is a link to Jim Corbett's MAX session this year, which covers all of this information and more in detail:
    http://tv.adobe.com/#vi+f15384v1000

  • How can I change my search engine provider

    How do I change my search engine provider? It will not change to Bing.com.
    == This happened ==
    Every time Firefox opened
    == crawler.com took over my search engine

    While having this problem trying to change my search engine, I got a new one: a shortcut on my desktop that doesn't do anything, and I cannot remove it. Hitting delete says it cannot find it. Hitting properties and going to Security says the information is unavailable or can't be displayed, and the Web Document tab does not show. Any help on this?

  • Need advice from experts: APEX & Search Engines

    Dear Experts,
    I need some advice. I intend to develop my site in APEX, but I am not sure whether a site made in APEX will be crawled and indexed well by the search engines, so that it appears when somebody is searching for the related information.
    Can anyone please enlighten me?
    Regards,
    Ashish Agarwal
    http://www.asagarwal.com

    Patrick,
    Don't worry, I didn't take it personally ;) Just wanted to point out that using the site directive can give different results (or perhaps the same results in a different way) to a plain old 'type in a name' search.
    We do use Session 0 on the Apex Evangelists site (for example in the initial link to the home page); however, when you navigate to the login page a new session id is generated (which is why that has been indexed multiple times by Google). Also, some other pages (for example the EuroTraining page) have been indexed multiple times due to external links to our site referencing a link which already has a session id, rather than using the alias for it.
    You're right, you can spend a huge amount of time tweaking and cajoling things to increase your page rank in Google (and other search engines of course); in fact I know companies that specifically employ people whose sole job is SEO. You're right, there are still things we can do to make the indexing a bit better. However, for our purposes (at that time), the fact that the login page etc. are indexed multiple times (for various reasons) doesn't 'hurt' us too much; we are saving those changes for the next release of the site ;)
    John.

  • Create a Search Engine

    Hi,
    I would like to know how to create a search engine in the report section. For example, if a list of names is displayed, I want the user to be able to find a name without going down the list. In other words, just type a name (last, first); once the name is found, it will be displayed on the screen automatically.
    Thanks, Patty

    Vikas,
    You're correct - that whitepaper is specific to the HTML DB 1.5 documentation. The online help file names changed in 1.6, so it's out of date.
    What you did to generate this file is fine, at least for purposes of this example. You'll find that many titles are duplicated, and ultimately, it's the text in most of the <h1> tags that you want. There is a file called doc_map.xml, which you could consider transforming to whatever you need.
    Your request to "zoom in" is ultimately a request to crawl the documents. Oracle Text does not automagically crawl documents, but as you probably guessed, you could always build something to do that (or use Oracle UltraSearch/Enterprise Search). There's some demonstration code on the Text site on OTN (http://www.oracle.com/technology/sample_code/products/text/index.html), but I've personally never examined it.
    Joel

  • Search engine optimization servlet

    I am trying to write a servlet which will look at URLs and convert the & and ? characters into other characters, like "/" and ",".
    This is because search engines don't like to crawl pages with ? in the URL.
    Using servlets I know I can translate a URL which has been SEO-optimized into the "real thing", but is there any way to do the opposite? Encode URLs to be SEO-optimized without going in and manually changing the hrefs in the JSPs?

    A Servlet Filter sounds like what you need here, encoding the URL on the response - something like the sketch below.
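
    A minimal sketch of that suggestion, assuming the javax.servlet API (the filter name is made up, and the '?' to '/' and '&' to ',' substitution simply mirrors what the original post asks for; a real scheme would also need something in front of the container to map the pretty URLs back to real query strings on the way in):

    import java.io.IOException;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    // Hypothetical filter: any URL a JSP passes through response.encodeURL()
    // comes back with '?' replaced by '/' and '&' replaced by ',', so
    // /item?cat=12&id=42 would be emitted as /item/cat=12,id=42.
    public class SeoUrlFilter implements Filter {

        public void init(FilterConfig config) {
            // no configuration needed for this sketch
        }

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse httpResp = (HttpServletResponse) resp;
            chain.doFilter(req, new HttpServletResponseWrapper(httpResp) {
                public String encodeURL(String url) {
                    return super.encodeURL(rewrite(url));
                }

                public String encodeRedirectURL(String url) {
                    return super.encodeRedirectURL(rewrite(url));
                }

                private String rewrite(String url) {
                    // Illustrative substitution only; see the note above about mapping back.
                    return url.replace('?', '/').replace('&', ',');
                }
            });
        }

        public void destroy() {
            // nothing to clean up
        }
    }

    Note that this only rewrites URLs that the JSPs pass through response.encodeURL(); plain hard-coded hrefs would still need a one-time change to call it.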

  • Flash search engine setup

    What is the best way to set up a site completely done in Flash so that search engines will index it? Preferably a way that will not be costly to my clients.

    The spiders see ALL text on all pages. They actually run a specialized version of the player, and they go through and catch any text. They don't catch text that is invisible, off the display list, or off the stage, and they only catch text that is still text (so no broken-apart text). They do see the dynamically generated text, but the specialized player does not allow the loading of files, so if it is XML-generated, they don't catch it. A caveat there: the Google spider will mark the XML file that you try to load, and will crawl and cache it separately.
    Also, the text does not have size or color information, so weighting information visually does not work.
    Google has not released anything (even to the Adobe devs working on this) about how the information is getting weighted.
    Here is a link to Jim Corbett's MAX session this year, which covers all of this information and more in detail:
    http://tv.adobe.com/#vi+f15384v1000

  • APEX applications and search engines

    I have added a Google Analytics script to my APEX application home page, but when I validated it on the Google site it said my script had not been found. The same piece of code works fine when included in a "standard" index.html page. Also, people can post messages on my site, but when I google their email addresses I can find them on other websites but not on mine. What could be the cause of this?
    P.S.
    My website name can be found in search engines, so it was indexed.
    Message was edited by:
    ArtS

    Hi ArtS,
    I think my post is similar to yours:
    Oracle APEX site unable to be crawled by Google crawlers / SEO?
    Regards,
    Gareth
    Blog: http://garethroberts.blogspot.com/

  • How do I add multiple scripts from search engines to my meta tag properties?

    I have copied the Google script for website verification and analytics, etc. and pasted it into my meta tag properties dialog box. There is no problem as far as Google verifying the page. However, I would like to copy Bing's search engine script into my meta tag in addition to Google's script. How do I go about doing this? Do I hit return on my keyboard under the end of Google's script, then paste in the Bing script?
    The last part of the Google script ends in this:
    </script>
    (paste new script from Bing here?)
    Will these cancel each other out and cause problems?
    Can someone walk me through this process, because Bing's search engine will not verify my site through two of the three other methods.
    Ben

    Adding a script after the closure of the previous script is the way to go, i.e. right after the </script> tag.
    So it should look something like below:
    <script>
    Google's script
    </script>
    <script>
    Bing's script
    </script>
    I cannot comment on one interfering with the other, since it really depends on what exact code is in the scripts. The Google and Bing help resources will be able to help more with this.
    Thanks,
    Vikas

  • How do I delete a toolbar that was added by a plugin? Plug-in's been uninstalled, but the toolbar is still there and any new tabs open on their search engine (even though I have Tab Mix set to open tabs on a duplicate page). Help?

    Running Firefox on Windows XP3. I installed a plugin from Savevid.com, which they now require. (I've used that site for at least a year with no problem ever.) Everything was fine but it added a toolbar to my browser, and it seems to have reset Firefox somehow so that any new tab opens on "Searchqu", some low-rent search engine. I uninstalled the plug-in altogether, but the toolbar is still there, and I can't make the reset on the search engine go away. I use Tab Mix Plus (also never had this problem before with it), but no matter what I do, the tabs will not open to anything other than searchqu. (Tab Mix has the option to open to a duplicate tab - what it was using BEFORE all this - but changing that no longer has any effect.)
    Anyone know how to deal with a problem like this?

    Start Firefox in <u>[[Safe Mode|Safe Mode]]</u> to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or if hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
    *Do NOT click the Reset button on the Safe Mode start window or otherwise make changes.
    *https://support.mozilla.org/kb/Safe+Mode
    *https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes
    You can also check for problems with the localstore.rdf file.
    *http://kb.mozillazine.org/Corrupt_localstore.rdf

  • Error message trying to install new search engines

    I am running Firefox 19.0.2 installed yesterday on a new desktop PC at the office. (Old PC failed to boot on Monday.) I have completed most of my customization without difficulty. However, when I try to install new search engines, it doesn't seem to matter which one, I get a pop-up error message:
    "Sorry, you need a Mozilla-based browser (such as Firefox) to install a search plug-in."
    I had no problems installing new search engines under Firefox 16 in the third quarter of last year, when I started this job. Why am I unable to install search engines in FF 19?

    Hello lafritz65, '''Try Firefox Safe Mode''' to see if the problem goes away. Safe Mode is a troubleshooting mode, which disables most add-ons.
    ''(If you're not using it, switch to the Default theme.)''
    * You can open Firefox 4.0+ in Safe Mode by holding the '''Shift''' key when you open the Firefox desktop or Start menu shortcut.
    * Or open the Help menu and click on the '''Restart with Add-ons Disabled...''' menu item while Firefox is running.
    ''Once you get the pop-up, just select "'Start in Safe Mode"''
    '''''If the issue is not present in Firefox Safe Mode''''', your problem is probably caused by an extension, and you need to figure out which one. Please follow the [[Troubleshooting extensions and themes]] article for that.
    ''To exit the Firefox Safe Mode, just close Firefox and wait a few seconds before opening Firefox for normal use again.''
    ''When you figure out what's causing your issues, please let us know. It might help other users who have the same problem.''
    thank you
