Build a robots.txt file to control how search engine crawlers access your site. Configure user-agents, crawl rules, and sitemap location.
What is a robots.txt file?
A robots.txt file tells search engine crawlers which pages on your site they can and cannot access.
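For example, a minimal robots.txt groups rules under a user-agent; the /admin/ path below is a hypothetical placeholder:

```
# Rules for all crawlers
User-agent: *
# Block a hypothetical admin area
Disallow: /admin/
# Everything else remains crawlable
Allow: /
```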
Can a robots.txt mistake hurt my site's SEO?
Yes. Incorrectly blocking important pages can prevent them from being indexed. Always test your rules before deploying.
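To illustrate the risk, a single misplaced rule can block an entire site; the sketch below tells every crawler to skip every page:

```
# WARNING: this blocks the whole site for all crawlers
User-agent: *
Disallow: /
```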
Can I include my sitemap in robots.txt?
Yes. Our generator includes a field for your sitemap URL, which is an SEO best practice.
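A sketch of how the directive appears in the generated file; the sitemap.xml path is a hypothetical example:

```
User-agent: *
Disallow:

# Sitemap takes an absolute URL and applies to all crawlers
Sitemap: https://yourdomain.com/sitemap.xml
```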
Where should I place my robots.txt file?
The robots.txt file must live in the root directory of your website so that it is accessible at yourdomain.com/robots.txt. Search engine crawlers look for it only at that exact location, so a copy in a subdirectory has no effect.
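One way to confirm correct placement is to request the URL directly; a quick check with curl, where yourdomain.com stands in for your site:

```
# Should return HTTP 200 along with the response headers
curl -I https://yourdomain.com/robots.txt
```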
Does Google honor the Crawl-delay directive?
No. Googlebot ignores the Crawl-delay directive in robots.txt, although Bing, Yandex, and some other crawlers do honor it. To control Google's crawl rate, use the crawl rate settings in Google Search Console instead.
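For crawlers that do honor it, the value is typically interpreted as a number of seconds between requests; a sketch assuming you want to slow down Bingbot:

```
# Googlebot ignores Crawl-delay, so none is given here
User-agent: Googlebot
Disallow:

# Ask Bingbot to wait 10 seconds between requests
User-agent: Bingbot
Crawl-delay: 10
```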