
Robots.txt Generator

Build robots.txt visually with one-click AI bot blocking. Block GPTBot, ClaudeBot, Google-Extended, and more AI crawlers. Add user-agents, paths, sitemap, and crawl-delay.

Free · No Signup · No Server Uploads · Zero Tracking

AI Bot Blocking

Each of the following crawlers can be toggled individually or blocked all at once:

- GPTBot (OpenAI): ChatGPT, GPT training
- ClaudeBot (Anthropic): Claude AI training
- Google-Extended: Gemini, AI training
- CCBot (Common Crawl): used by many AI models
- Bytespider (ByteDance): TikTok, AI training
- PerplexityBot: Perplexity AI
- Applebot-Extended: Apple AI features
- FacebookBot (Meta): Meta AI training
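With every toggle enabled (Block All), the generated file disallows each of these user-agents. It might look like:

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: FacebookBot
Disallow: /
```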

Crawl Rules

Add User-agent rules with Allow or Disallow directives for specific paths. The preview updates live as you edit; the default is:

User-agent: *
Allow: /

How to Use Robots.txt Generator

  1. Block AI bots

     Toggle the AI bots you want to block in the highlighted section. Click Block All to block all known AI crawlers at once.

  2. Configure crawl rules

     Set up User-agent rules with Allow or Disallow directives for specific paths. Use the dropdown to select common user agents.

  3. Add sitemap and options

     Enter your sitemap URL and optionally set a crawl-delay value for rate limiting.

  4. Copy your robots.txt

     Review the live preview and click Copy to get your complete robots.txt file content.
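Following the four steps above, a finished file (with one AI bot blocked, a path rule, a sitemap, and a crawl-delay; all values here are illustrative) might look like:

```
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
Crawl-delay: 10

Sitemap: https://yourdomain.com/sitemap.xml
```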

Frequently Asked Questions

What is robots.txt?

Robots.txt is a text file placed at the root of your website (example.com/robots.txt) that tells search engine crawlers which pages they can and cannot access. It's part of the Robots Exclusion Protocol.

Do AI bots respect robots.txt?

Generally, yes. Major AI companies state that their bots honor robots.txt directives. By adding Disallow rules for GPTBot, ClaudeBot, and others, you can signal that your site should not be crawled for AI training data. Keep in mind that robots.txt is advisory, not enforced: a crawler that chooses to ignore it can still fetch your pages.

Which AI crawlers can I block?

Common AI crawlers include GPTBot (OpenAI), ClaudeBot (Anthropic), Google-Extended (Google Gemini), CCBot (Common Crawl), Bytespider (ByteDance), PerplexityBot, Applebot-Extended, and FacebookBot. Our tool lets you block any or all with one click.

Can I block only specific pages or directories?

Yes. Instead of blocking all paths with 'Disallow: /', you can block specific directories like 'Disallow: /private/' while allowing the rest of your site.
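For example, a group that blocks a single directory while leaving the rest of the site crawlable (the directory name is illustrative):

```
User-agent: *
Disallow: /private/
Allow: /
```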

Where do I put the robots.txt file?

Place the robots.txt file at the root of your domain: https://yourdomain.com/robots.txt. It must be accessible at that exact URL for crawlers to find it.
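Before deploying, you can sanity-check your rules with Python's standard-library urllib.robotparser. A minimal sketch, using hypothetical rules and URLs (substitute your own):

```python
from urllib import robotparser

# Hypothetical rules as generated by the tool
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# GPTBot is blocked everywhere
print(rp.can_fetch("GPTBot", "https://example.com/page"))          # False
# Other crawlers are blocked only under /private/
print(rp.can_fetch("Googlebot", "https://example.com/page"))       # True
print(rp.can_fetch("Googlebot", "https://example.com/private/x"))  # False
```

This catches common mistakes, such as a Disallow path that doesn't actually match the URLs you meant to cover, before crawlers ever see the file.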