Want to verify that your robots.txt rules are correct? Enter your website URL to fetch its robots.txt, or paste the rules directly. We will test the crawl status of your URLs in real time.
You can enter a website's homepage URL and we will attempt to fetch its robots.txt, or you can paste your rules directly into the code editor on the right.
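For reference, here is a minimal Python sketch of the fetch step, assuming the site serves robots.txt at the standard root path (the homepage URL is a placeholder):

```python
# A minimal sketch of the fetch step. "https://example.com" is a
# placeholder; swap in your own homepage URL.
from urllib.parse import urljoin
from urllib.request import urlopen

homepage = "https://example.com"               # placeholder homepage URL
robots_url = urljoin(homepage, "/robots.txt")  # robots.txt lives at the site root

with urlopen(robots_url, timeout=10) as resp:
    rules = resp.read().decode("utf-8", errors="replace")

print(rules)
```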
You can choose a specific search engine User-Agent (such as Googlebot or Bingbot) to simulate that crawler's behavior.
In the input box below, enter the full URL you want to test.
The tool will check in real time whether the URL is blocked by the current robots.txt rules and will highlight the line that caused the block; a scripted version of this check is sketched below.
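Here is a sketch of the same allow/deny check using Python's standard-library parser (the rules, user agent, and test URL are placeholders). Note that RobotFileParser only returns a boolean; pinpointing the exact blocking line, as this tool does, requires matching the rules yourself.

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules; in practice, use the robots.txt text fetched earlier.
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

user_agent = "Googlebot"                       # crawler to simulate
test_url = "https://example.com/private/page"  # placeholder URL to test

print("Allowed" if parser.can_fetch(user_agent, test_url) else "Blocked")
```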
robots.txt is a text file at the root of your site that tells search engine crawlers which pages they can and cannot access.
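A minimal robots.txt might look like this (the paths and sitemap URL are illustrative):

```
User-agent: *
Disallow: /admin/
Allow: /admin/public/

Sitemap: https://example.com/sitemap.xml
```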
Being 'Allowed' is only a prerequisite. Whether a page is actually indexed also depends on page quality, internal linking, and whether it carries a 'noindex' tag.
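For illustration, a 'noindex' directive is typically set as a meta tag in the page's HTML head (it can also be sent as an X-Robots-Tag HTTP header). Keep in mind that a crawler can only see this tag on pages it is allowed to fetch, so a page blocked by robots.txt cannot reliably be de-indexed this way.

```html
<!-- Ask crawlers not to index this page (place inside <head>) -->
<meta name="robots" content="noindex">
```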
Crawlers first look for the rule group that names their own User-Agent. If no such group exists, they fall back to the generic group (User-agent: *); if that is also missing, all URLs are treated as allowed.
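For example, with the illustrative rules below, Googlebot follows only its own group, so /private/ is not blocked for it, while every other crawler follows the * group:

```
User-agent: Googlebot
Disallow: /no-google/

User-agent: *
Disallow: /private/
```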