Generate a robots.txt file for your website. Control which pages search engines can and cannot crawl.
Let all search engines crawl everything
Block all search engines from everything
Allow all but block admin and private areas
Standard robots.txt for WordPress sites
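For reference, the four presets above correspond to robots.txt files along these lines. These are sketches: the directory names in the third example are placeholders for your own admin and private paths, and the WordPress version reflects WordPress's widely used default rules.

```
# Let all search engines crawl everything
User-agent: *
Disallow:

# Block all search engines from everything
User-agent: *
Disallow: /

# Allow all but block admin and private areas
User-agent: *
Disallow: /admin/
Disallow: /private/

# Standard robots.txt for WordPress sites
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```

An empty `Disallow:` means nothing is blocked, while `Disallow: /` blocks every path; the WordPress preset keeps admin-ajax.php reachable because front-end features rely on it.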
The robots.txt file is a plain text file placed in the root of your website that tells search engine crawlers which pages they may and may not visit. It is part of the Robots Exclusion Protocol, a convention that well-behaved bots check and follow before crawling your site.
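To see how a compliant crawler interprets these rules, here is a minimal sketch using Python's standard-library `urllib.robotparser`. The rules string and URLs are hypothetical examples, not from any real site.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: allow everything except the /admin/ and /private/ areas.
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved crawler asks can_fetch() before requesting each URL.
print(parser.can_fetch("*", "https://example.com/blog/post"))    # True
print(parser.can_fetch("*", "https://example.com/admin/login"))  # False
```

This is exactly the check polite bots perform: fetch robots.txt once, then consult it for every URL before crawling.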
A properly configured robots.txt file helps search engines crawl your site efficiently by steering them away from unimportant pages like admin panels, login pages, and duplicate content. This conserves your site's crawl budget and helps important pages get indexed faster.
robots.txt is a suggestion, not a security measure. Malicious bots may ignore it entirely. Do not rely on robots.txt to hide sensitive content; use authentication instead. Also note that blocking a page in robots.txt does not remove it from search results if other sites link to it.
The robots.txt file must be placed in the root directory of your website at example.com/robots.txt. It will not work in subdirectories. Most hosting control panels let you create or edit it directly.
Does blocking a page in robots.txt remove it from search results? No. Blocking crawling prevents Google from reading the page content, but if other sites link to the page, it may still appear in search results as a bare URL with no description. To remove a page from results entirely, use the noindex meta tag or the URL removal tool in Google Search Console.
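As an illustration, the noindex directive can be delivered either as a robots meta tag in the page's HTML or as an equivalent X-Robots-Tag response header. Note that the page must stay crawlable (not blocked in robots.txt) for search engines to see either one.

```html
<!-- In the page's <head>: ask search engines not to index this page -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, the header form serves the same purpose: `X-Robots-Tag: noindex`.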