No robots.txt?

Hello,
just a short question: Why does Muse not create a robots.txt?
A couple of months ago I had a client who didn't show up in any search results even though the site had been online for more than a year.
We investigated and found out that the client had no robots.txt on his server. Google mentions (sorry, I cannot find the source right now) that it will not index a page if there is no robots file.
I think it is important to know this. It would be cool if there were a feature in the export dialog (a checkbox "create robots.txt") and maybe a settings panel (follow, nofollow, excluded directories...).
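For example, a default export could simply write the standard allow-everything file (just a sketch of what I mean, not something Muse produces today):
User-agent: *
Disallow:
Directories to exclude would then just become additional Disallow lines.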
Regards
Andreas

Here's one example of the text Google is posting:
http://webcache.googleusercontent.com/search?rlz=1T4GGLR_enUS261US323&hl=en&q=cache:SSb_hvtcb_EJ:http://www.inmamaskitchen.com/RECIPES/RECIPES/poultry/chicken_cuban.html+cuban+chicken+with+okra&ct=clnk   Robots.txt File   May 31, 2011
http://webcache.googleusercontent.com/search?q=cache:yJThMXEy-ZIJ:www.inmamaskitchen.com/Nutrition/   Robots.txt File   May 31, 2011
Then there are things relating to Facebook????
http://www.facebook.com/plugins/like.php?channel_url=http%3A%2F%2Fwww.inmamaskitchen.com%2FNutrition%2FBlueberries.html%3Ffb_xd_fragment%23%3F%3D%26cb%3Df2bfa6d78d5ebc8%26relation%3Dparent.parent%26transport%3Dfragment&href=http%3A%2F%2Fwww.facebook.com%2Fritzcrackers%3Fsk%3Dapp_205395202823189&layout=standard&locale=en_US&node_type=1&sdk=joey&send=false&show_faces=false&width=225
THANK YOU!

Similar Messages

  • Robots.txt and duplicate content - I need help

    Hello guys, I'm new to BC and I have two questions.
    1. My start page is available as xxxx.de, xxxx.de/index.html and xxx.de/index.aspx. How can I fix this duplicate content?
    2. Where do I have to upload the robots.txt?
    THX

    As long as you do not link to the other versions inconsistently, you do not need to worry about your start page.
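    If you do want to make it explicit, one generic approach (not BC-specific, just a sketch using your xxxx.de placeholder) is a canonical link in the page head so search engines fold the index.html and index.aspx variants into one URL:
    <link rel="canonical" href="http://xxxx.de/" />
    As for your second question, the robots.txt simply goes into the root of the site, next to index.html, so it is reachable as xxxx.de/robots.txt.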

  • Use of robots.txt to disallow system/secure domain names?

    I've got a client whose system and secure domains are ranking very high on Google. My SEO advisor has mentioned that a key way to eliminate these URLs from Google is to disallow that content through robots.txt. Given BC's unique handling of system and secure domains, I'm not sure this is even possible, as any disallow rules I've seen or used before have targeted directories rather than absolute URLs, nor have I seen any mention of this possibility around. Any help or advice would be great!

    Hi Mike
    Under Site Manager > Pages, when accessing a specific page, you can open the SEO Metadata section and tick “Hide this page for search engines”
    Aside from this, using the robots.txt file is indeed an efficient way of instructing search engine robots which pages are not to be indexed.
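    If the system and secure domains can serve their own robots.txt at the domain root (an assumption - it depends on how those domains are set up in BC), the file that blocks all crawling for that host is just:
    User-agent: *
    Disallow: /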

  • Robots.txt and Host Named Site Collections (SEO)

    When attempting to exclude ALL SharePoint sites from external indexing, with multiple web apps and multiple host named site collections, should I add the robots.txt file to the root of each web app as well as each HNSC? I assume so, but thought I would check with the gurus...
    - Rick

    I think one for each site collection, as each site collection has a different name and is treated as a separate web site.
    "The location of robots.txt is very important. It must be in the main directory because otherwise user agents (search engines) will not be able to find it. Search engines look first in the main directory (i.e. http://www.sitename.com/robots.txt) and if they don't find it there, they simply assume that this site does not have a robots.txt file."
    http://www.slideshare.net/ahmedmadany/block-searchenginesfromindexingyourshare-pointsite
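    In other words, because each host named site collection answers on its own host name, a crawler will request the file once per host, for example (hypothetical names):
    http://portal.contoso.com/robots.txt
    http://projects.contoso.com/robots.txt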
    Thanks - WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Robots.txt -- how do I do this?

    I'm not using iWeb, unfortunately, but I wanted to protect part of a site I've set up. How do I set up a hidden directory under my domain name? I need it to be invisible except to people who have been notified of its existence. I was told, "In order to make it invisible you would need to not have any links associated with it on your site, make sure you have altered a robots.txt file in your /var/www/html directory so bots cannot spider it. A way to avoid spiders crawling certain directories is to place a robots.txt file in your web root directory that has parameters on which files or folders you do not want indexed."
    But, how do I get/find/alter this robots.txt file? I unfortunately don't know how to do this sort (hardly any sort) of programming. Thank you so much.

    Muse does not generate a robots.txt file.
    If your site has one, it's been generated by your hosting provider, or some other admin on your website. If you'd like google or other 'robots' to crawl your site, you'll need to edit this file or delete it.
    Also note that you can set your page description in Muse using the page properties dialog, but it won't show up immediately in google search results - you have to wait until google crawls your site to update their index, which might take several days. You can request google to crawl it sooner though:
    https://support.google.com/webmasters/answer/1352276?hl=en
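    If you do end up editing the file by hand, keeping crawlers out of a single folder only takes two lines in a robots.txt placed at the web root (a minimal sketch - /private/ is a made-up name for your hidden directory):
    User-agent: *
    Disallow: /private/
    Keep in mind this only asks well-behaved crawlers to stay away; it does not make the directory invisible to anyone who knows the URL.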

  • Error 404 - /_vti_bin/owssvr.dll  and robots.txt

    Hi
    My webstats tell me that I have had various Error 404s and
    this is because of files being "required but not found":
    specifically /_vti_bin/owssvr.dll and robots.txt.
    Can someone tell me what these are?
    Also, there are various other status code pages coming up
    such as
    302 Moved temporarily (redirect) - 6 hits (27.2%) - 2.79 KB
    401 Unauthorized - 5 hits (22.7%) - 9.32 KB
    403 Forbidden - 3 hits (13.6%) - 5.06 KB
    206 Partial Content
    Why are these arising and how can I rid myself of them?
    Many thanks : )

    Example of an HttpModule that subscribes to PreRequestHandlerExecute and returns early when the request targets owssvr.dll:
    class MyHttpModule : IHttpModule, IRequiresSessionState {
        public void Init(HttpApplication context) {
            context.PreRequestHandlerExecute += new EventHandler(context_PreRequestHandlerExecute);
        }
        void context_PreRequestHandlerExecute(object sender, EventArgs e) {
            HttpApplication app = (HttpApplication)sender; // sender is the HttpApplication
            if (app.Context.Request.Url.AbsolutePath.ToLower().Contains("owssvr.dll"))
                return; // skip further processing for owssvr.dll requests
        }
        public void Dispose() { }
    }

  • Question about robots.txt

    This isn't something I've usually bothered with, as I always thought you didn't really need one unless you wanted to disallow access to pages / folders on a site.
    However, a client has been reading up on SEO and mentioned that some analytics thing (possibly Google) was reporting that "one came back that the robot.txt file was invalid or missing. I understand this can stop the search engines linking in to the site".
    So I had a rummage, and uploaded what I thought was a standard enough robots.txt file :
    # robots.txt
    User-agent: *
    Disallow:
    Disallow: /cgi-bin/
    But apparently this is reporting:
    The following block of code contains some errors. You specified both a generic path ("/" or empty disallow) and specific paths for this block of code; this could be misinterpreted. Please remove all the reported errors and check this robots.txt file again.
    Line 1
    # robots.txt
    Line 2
    User-agent: *
    Line 3
    Disallow:
    You specified both a generic path ("/" or empty disallow) and specific paths for this block of code; this could be misinterpreted.
    Line 4
    Disallow: /cgi-bin/
    You specified both a generic path ("/" or empty disallow) and specific paths for this block of code; this could be misinterpreted.
    If anyone could set me straight on what a standard / default robots.txt file should look like, that would be much appreciated.
    Thanks.

    Remove the blank disallow line so it looks like this:
    User-agent: *
    Disallow: /cgi-bin/
    E. Michael Brandt
    www.divahtml.com
    www.divahtml.com/products/scripts_dreamweaver_extensions.php
    Standards-compliant scripts and Dreamweaver Extensions
    www.valleywebdesigns.com/vwd_Vdw.asp
    JustSo PictureWindow
    JustSo PhotoAlbum, et alia

  • [solved]Wget: ignore "disallow wget" +comply to the rest of robots.txt

    Hello!
    I need to wget a few (maybe 20 -.- ) html files that are linked on one html page (same domain) recursively, but the robots.txt there disallows wget. Now I could just ignore the robots.txt... but then my wget would also ignore the info on forbidden links to dynamic sites which are forbidden in the very same robots.txt for good reasons. And I don't want my wget pressing random buttons on that site. Which is what the robots.txt is for. But I can't use the robots.txt with wget.
    Any hints on how to do this (with wget)?
    Last edited by whoops (2014-02-23 17:52:31)

    HalosGhost wrote: Have you tried using it? Or, is there a specific reason you must use wget?
    Only stubbornness.
    Stupid website -.- what do they even think they achieve by disallowing wget? I should just use the ignore option and let wget "click" on every single button in their php interface. But nooo, instead I waste time trying to figure out a way to exclude those GUI links from being followed even though wget would be perfectly set up to comply with that automatically if it weren't for that one entry to "ban" it. *grml*
    Will definitely try curl next time though - thanks for the suggestion!
    And now, I present...
    THE ULTIMATE SOLUTION**:
    sudo sed -i 's/wget/wgot/' /usr/bin/wget
    YAY.
    ./solved!
    ** stubborn version.
    Last edited by whoops (2014-02-23 17:51:19)
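    For the record, another way to square that circle (just a sketch, assuming a GNU wget new enough to have --reject-regex; example.com and the patterns are made up) is to turn robots processing off and re-impose the parts you care about with accept/reject rules:
    wget -e robots=off -r -l 1 --no-parent -A '*.html' --reject-regex '(\?|/cgi-bin/)' http://example.com/start.html
    That way wget never follows the dynamic links even though it ignores the robots.txt itself.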

  • Web Repository Manager and robots.txt

    Hello,
    I would like to search an intranet site and therefore set up a crawler according to the guide "How to set up a Web Repository and Crawl It for Indexing".
    Everything works fine.
    Now this web site uses a robots.txt as follows:
    User-agent: googlebot
    Disallow: /folder_a/folder_b/
    User-agent: *
    Disallow: /
    So obviously, only google is allowed to crawl (parts of) that web site.
    My question: If I'd like to add the TRex crawler to the robots.txt what's the name of the "User-agent" I have to specify here?
    Maybe the name I defined in the SystemConfiguration > ... > Global Services > Crawler Parameters > Index Management Crawler?
    Thanks in advance,
    Stefan

    Hi Stefan,
    I'm sorry, but this is hard-coded. I found it in the class com.sapportals.wcm.repository.manager.web.cache.WebCache:
    private HttpRequest createRequest(IResourceContext context, IUriReference ref)
    {
        HttpRequest request = new HttpRequest(ref);
        String userAgent = "SAP-KM/WebRepository 1.2"; // hard-coded default user agent
        if (sessionWatcher != null)
        {
            String ua = sessionWatcher.getUserAgent();
            if (ua != null)
                userAgent = ua;
        }
        request.setHeader("User-Agent", userAgent);
        Locale locale = context.getLocale();
        if (locale != null)
            request.setHeader("Accept-Language", locale.getLanguage());
        return request;
    }
    So recompile the component or change the filter... I would prefer to change the robots.txt.
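    If you change the robots.txt instead, something like this should work, assuming the KM crawler is matched by the product token of its User-Agent header (an assumption - check how your crawler identifies itself):
    User-agent: googlebot
    Disallow: /folder_a/folder_b/
    User-agent: SAP-KM
    Disallow:
    User-agent: *
    Disallow: /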
    hope this helps,
    Axel

  • Robots.txt question?

    I am kind of new to web hosting, but learning.
    I am hosting with Just Host and I have a couple of sites (addons). I am trying to publish my main site now and there is a whole bunch of stuff in the site root folder that I have no idea about. I don't want to delete anything, and I am probably not going to, lol. But should I block a lot of the stuff in there in my robots.txt file?
    Here is some of the stuff in there:
    .htaccess
    404.shtml
    cgi-bin
    css
    img
    index.php
    justhost.swf
    sifr-addons.js
    sIFR-print.cs
    sIFR-screen.css
    sifr.js
    Should I just disallow all of this stuff in my robots.txt? Any recommendations would be appreciated. Thanks

    Seaside333 wrote:
    public_html for the main site, the other addons are public_html/othersitesname.com
    is this good?
    thanks for quick response
    You probably don't need the following files unless you're using text image-replacement techniques: sifr-addons.js, sIFR-print.cs, sIFR-screen.css, sifr.js.
    Good to keep .htaccess (you can insert special instructions in this file), 404.shtml (if a page can't be found on your remote server it goes to this page), and cgi-bin (some processing scripts are placed in this folder).
    Probably you will have your own 'css' folder. The 'img' folder is not needed. 'index.php' is the homepage of the site and what the browser looks for initially; you can replace it with your own homepage.
    You don't need justhost.swf.
    Download the files/folders to your local machine and keep them in case you need them.
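    As for the robots.txt itself: if you only want to keep crawlers out of the script folder, a minimal file at the root of the site would be (just a sketch):
    User-agent: *
    Disallow: /cgi-bin/
    Everything else stays crawlable.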

  • Is ROBOT.TXT supported

    The Robots Exclusion Protocol uses the robots.txt configuration file to give instructions to Web crawlers (robots) about how to index your pages.
    The functionality is also available through specific META tags in HTML documents.
    Is robots.txt supported with WLS?
    Bernard DEVILLE


  • Disallow URLs by robots.txt but still Appear In Google Search Results.

    URLs disallowed by robots.txt still appear in Google search results.

    Can you expand on your problem? Are you being indexed despite not wanting to be indexed?
    You are almost certainly in the wrong forum as this relates to SharePoint search, not how Google indexes your content.
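    One common cause, if that is what is happening: robots.txt only blocks crawling, not indexing, so a disallowed URL that other pages link to can still show up as a bare link in the results. To keep a page out of Google entirely, the usual approach is to allow it to be crawled and mark it with a noindex robots meta tag instead, for example:
    <meta name="robots" content="noindex">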

  • Robots.txt - default setup

    Hey!
    Since I'm using iWeb for creating my websites, I know that I have to set up robots.txt for SEO.
    I have made several sites: one for a restaurant, one about photography, one personal, etc.
    There is nothing I want to "hide" from Google robots on those websites.
    So my question is:
    When we create a website and publish it, is there at least a default setup for robots.txt?
    For example:
    Website is parked in folder: public_html/mywebsitefolder
    Inside mywebsitefolder folder i have:
    /nameofthewebsite
    /cgi-bin
    /index.html
    The structure is the same for all websites created with iWeb, so what should we put in robots.txt by default?
    Of course, in case you don't want to hide any of the pages or content.
    Azz.

    If you don't want to stop the bots crawling any folder - don't bother with one at all.
    The robots.txt should go in the root folder since the crawler looks for....
    http://www.domain-name.com/robots.txt
    If your site files are in a sub folder the robots.txt would be like...
    User-agent: *
    Disallow: /mywebsitefolder/folder-name
    Disallow: /mywebsitefolder/file.file-extension
    To allow all access...
    User-agent: *
    Disallow:
    I suppose you may want to use robots.txt if you want to allow/disallow one particular bot.

  • Robots.txt

    Hi,
    Has anyone created a robots.txt file for an external Plumtree portal?
    The company I work for is currently using PT 4.5 SP2 and I'm just wondering which directories I should disallow to prevent spiders etc. from crawling certain parts of the web site. This will help improve search results on search engines.
    See http://support.microsoft.com/default.aspx?scid=kb;en-us;217103

    The robots.txt file lives at the root level of the server where your web pages are. What is the URL of your website?

  • Placement of robots.txt file

    Hi all,
    I want to disallow search robots from indexing certain directories on a MacOS X Server.
    Where would I put the robots.txt file?
    According to the web robots pages at http://www.robotstxt.org/wc/exclusion-admin.html it needs to go in the "top-level of your URL space", which depends on the server and software configuration.
    Quote: "So, you need to provide the "/robots.txt" in the top-level of your URL space. How to do this depends on your particular server software and configuration."
    Quote: "For most servers it means creating a file in your top-level server directory. On a UNIX machine this might be /usr/local/etc/httpd/htdocs/robots.txt".
    On a MacOS X Server would the robots.txt go into the "Library" or "WebServer" directory or somewhere else?
    Thanxx
    monica
    G5   Mac OS X (10.4.8)  

    The default document root for Apache is /Library/WebServer/Documents so your robots.txt file should be at /Library/WebServer/Documents/robots.txt
