Bot policy

The agents that read your site, in the open.

Polite, identifiable, and only fetching pages on request. If you're a site owner who found one of our user agents in your logs, this page explains exactly what it is and how to turn it off.

Torch Compass bot policy

01. Who sends it

A customer of Torch Compass asked us to audit their site.

Torch Compass agents only run when a paying customer adds a site to their account and asks us to audit it. We never scan the open web. We never run speculative crawls looking for sites to pitch. We never resell the pages we fetch.

If you’re seeing one of our user agents in your logs, it’s because somebody with access to your site (or someone who told us they had access) requested the audit. If that wasn’t you and you’d like us to stop, the block instructions are further down this page, and the contact section covers the rest.

02. What it fetches

HTML, robots.txt, sitemap.xml, llms.txt, humans.txt.

The auditor fetches HTML pages on your origin, plus the four standard discovery files if they exist: robots.txt, sitemap.xml, llms.txt, and humans.txt. It walks links two clicks deep from the entry point, caps the total at 200 pages per crawl, and stays on the same domain. Subdomains are treated as separate sites and require a separate request.
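
To make that scope concrete, here is a minimal sketch of the traversal rules in Python. It is illustrative rather than the auditor's actual code; fetch_page and extract_links are hypothetical stand-ins for the real HTTP client and link parser.

# Illustrative only, not the auditor's code. fetch_page and extract_links
# are stand-ins for the real HTTP client and HTML link parser.
from collections import deque
from urllib.parse import urljoin, urlparse

MAX_DEPTH = 2    # two clicks from the entry point
MAX_PAGES = 200  # hard cap per crawl

def crawl_scope(entry_url, fetch_page, extract_links):
    host = urlparse(entry_url).netloc
    seen = {entry_url}
    queue = deque([(entry_url, 0)])
    while queue:
        url, depth = queue.popleft()
        html = fetch_page(url)
        if depth == MAX_DEPTH:
            continue  # pages two clicks out are fetched, but their links are not followed
        for link in extract_links(html):
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc != host:
                continue  # subdomains are separate sites: the host must match exactly
            if absolute in seen or len(seen) >= MAX_PAGES:
                continue  # already queued, or the 200-page cap is reached
            seen.add(absolute)
            queue.append((absolute, depth + 1))
    return seen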

We honor Crawl-delay directives and back off aggressively on non-2xx responses. No headless browser, no JavaScript execution, no form submissions, no authenticated requests. Each agent reads what a search engine or an LLM crawler would read, and nothing else.
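
The politeness rules amount to something like the sketch below. Again, this is not the production fetcher, and the retry count and backoff multiplier are illustrative; a Crawl-delay in your robots.txt sets the pause between requests.

# A sketch of the politeness rules described above, not our production
# fetcher. The retry count and multiplier are illustrative.
import time
import urllib.error
import urllib.request

UA = "SEO-Auditor/0.1 (+https://torchcompass.com/bot)"

def polite_fetch(url, crawl_delay=1.0, max_retries=3):
    delay = crawl_delay  # honor Crawl-delay from robots.txt
    for attempt in range(max_retries):
        time.sleep(delay)
        req = urllib.request.Request(url, headers={"User-Agent": UA})
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()  # urlopen only returns on a 2xx final response
        except urllib.error.HTTPError:
            delay *= 4  # non-2xx: back off aggressively before retrying
    return None  # give up; the failure is recorded in the crawl log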

03. User agent convention

Each agent advertises its own functional name.

Every request carries the functional name of the agent that triggered it, with a link to this page in the comment slot so it’s traceable. Today there is one live crawler:

SEO-Auditor/0.1 (+https://torchcompass.com/bot)

As the roster expands, each new agent will advertise under its own functional name without a brand prefix. That means future crawlers such as Content-Strategist and Growth-Analyst will show up with their own product tokens. You can allow or block each one independently. If you’re writing a detection rule, match on the product token (the part before the slash) rather than the full string. That way your rule keeps working after a version bump.
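
As an example, a detection rule in Python that keys on the product token looks like this. The helper name is ours for illustration; the anchored pattern is the part that matters.

# A sketch of a log-side detection rule. is_torch_compass is an
# illustrative name; the anchored product-token pattern is the point.
import re

# Match the product token before the slash, so version bumps don't break the rule.
SEO_AUDITOR = re.compile(r"^SEO-Auditor/")

def is_torch_compass(user_agent: str) -> bool:
    return bool(SEO_AUDITOR.match(user_agent))

assert is_torch_compass("SEO-Auditor/0.1 (+https://torchcompass.com/bot)")
assert is_torch_compass("SEO-Auditor/0.2 (+https://torchcompass.com/bot)")  # survives a version bump
assert not is_torch_compass("Mozilla/5.0 (compatible; Googlebot/2.1)")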

04. How to block us

A two-line robots.txt rule per agent, and we stop.

Add these two lines to your robots.txt and the SEO Auditor will stop on its next request:

User-agent: SEO-Auditor
Disallow: /
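
If you want to confirm the rule does what you expect before our next visit, Python's standard-library robots parser gives a quick check. The example.com URL below is a placeholder for your own pages.

# Quick check with the standard library; no network access needed.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: SEO-Auditor",
    "Disallow: /",
])

# The parser matches on the product token, so the version suffix is ignored.
print(rp.can_fetch("SEO-Auditor/0.1", "https://example.com/any/page"))   # False: blocked
print(rp.can_fetch("SomeOtherBot/1.0", "https://example.com/any/page"))  # True: unaffected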

Each agent is a separate User-agent rule. Blocking the auditor does not block a future Content-Strategist crawl, because the intent behind the two is different. If you want one rule covering every agent on the roster, current and planned, match the naming convention with wildcards:

User-agent: *-Auditor
User-agent: *-Strategist
User-agent: *-Analyst
Disallow: /
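
One caveat on those patterns: the robots.txt standard does not define wildcards inside a User-agent line, so stock parsers and other crawlers will ignore them; only our agents honor this form. The sketch below shows the intended semantics, assuming shell-style glob matching against the product token.

# Sketch of how the wildcard User-agent patterns are meant to match.
# Shell-style globbing against the product token is an assumption of
# this illustration, not a documented parser contract.
from fnmatch import fnmatch

PATTERNS = ["*-Auditor", "*-Strategist", "*-Analyst"]

def blocked(product_token: str) -> bool:
    return any(fnmatch(product_token, pattern) for pattern in PATTERNS)

assert blocked("SEO-Auditor")
assert blocked("Content-Strategist")
assert blocked("Growth-Analyst")
assert not blocked("Googlebot")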

We respect robots.txt. Always. If our crawler sees a Disallow rule that matches the requested path, it records the block in the crawl log and moves on. A customer who pointed us at a blocked site will see the block in their audit results, so you don’t need to explain the block to them; the crawl log does it for you.

05. Contact

A human on the team reads the inbox.

If you have a concern about any of our crawlers, a question about what we fetched, or a request we haven’t covered here, write to us at bot@torchcompass.com. Include the timestamp from your logs and the hostname, and we’ll trace the request back to the customer account that triggered it.