agents.txt
Capability declaration for AI agents on the web
Declare what AI agents can do on a website. Publish a machine-readable capability surface so agents can discover available actions, auth requirements, rate limits, and per-agent policy before making requests.
- Formats
- /.well-known/agents.txt
- /.well-known/agents.json
- Specification
- Version 1.0 in the repository.
- Protocol support
- REST, MCP, A2A, GraphQL, WebSocket
- Repository contents
- Spec, packages, tests, examples, generator, and registration documents.
Overview
A declaration can describe site identity, capability blocks, methods, protocols, authentication requirements, parameters, access rules, and agent-specific policy. The JSON companion carries the same information in a structured format for direct consumption by clients.
- Capabilities declare endpoint, protocol, method, auth, rate limit, and parameters.
- Agent blocks override policy for `*` or named agents.
- Discovery is separate from enforcement. Servers still need independent checks.
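To make the structured form concrete, here is a sketch of what a client-side type for the JSON companion could look like. The field names below are assumptions inferred from the keys of the text format; the authoritative shape is defined by the spec, not by this sketch.

```typescript
// Hypothetical shape of /.well-known/agents.json, inferred from the
// text-format keys (Spec-Version, Site-Name, Capability, ...).
// Field names are illustrative assumptions, not the spec's schema.
interface Capability {
  name: string;
  endpoint: string;
  method: string;
  protocol: "REST" | "MCP" | "A2A" | "GraphQL" | "WebSocket";
  auth: string;
  rateLimit?: string;
}

interface AgentsDeclaration {
  specVersion: string;
  siteName: string;
  siteUrl: string;
  capabilities: Capability[];
}

// Minimal example mirroring the minimal text declaration below.
const declaration: AgentsDeclaration = {
  specVersion: "1.0",
  siteName: "My Store",
  siteUrl: "https://example.com",
  capabilities: [
    {
      name: "product-search",
      endpoint: "https://example.com/api/search",
      method: "GET",
      protocol: "REST",
      auth: "none",
      rateLimit: "60/minute",
    },
  ],
};
```

A client would fetch this document once, cache it per the server's cache headers, and read the capability list before issuing any requests.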
Minimal declaration
A static file is enough. Packages are optional.
# agents.txt
Spec-Version: 1.0
Site-Name: My Store
Site-URL: https://example.com
Capability: product-search
Endpoint: https://example.com/api/search
Method: GET
Protocol: REST
Auth: none
Rate-Limit: 60/minute
What it can declare
Site identity, capability blocks, methods, protocols, auth type, auth endpoint, auth docs, parameters, scopes, access rules, and per-agent policies.
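Since the text format is line-oriented `Key: Value` pairs, a few lines of code are enough to read it. The sketch below is an illustrative parser, not the reference implementation from `@agents-txt/core`; it assumes each `Capability:` line opens a new block and that keys preceding the first capability describe the site.

```typescript
// Minimal illustrative parser for the agents.txt text format.
// Assumption: "Key: Value" lines, "#" comments, and "Capability:"
// starting a new capability block. Not the reference parser.
type Block = Record<string, string>;

function parseAgentsTxt(text: string): { site: Block; capabilities: Block[] } {
  const site: Block = {};
  const capabilities: Block[] = [];
  let current: Block | null = null;

  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (!line || line.startsWith("#")) continue; // skip blanks and comments
    const idx = line.indexOf(":");
    if (idx === -1) continue; // ignore malformed lines
    const key = line.slice(0, idx).trim();
    const value = line.slice(idx + 1).trim();
    if (key === "Capability") {
      current = { Capability: value }; // open a new capability block
      capabilities.push(current);
    } else if (current) {
      current[key] = value; // key belongs to the open capability
    } else {
      site[key] = value; // key precedes all capabilities: site identity
    }
  }
  return { site, capabilities };
}
```

Feeding the minimal declaration above through this function yields one site block and one `product-search` capability block.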
What should also be served
The JSON companion, correct content types, discovery-friendly CORS, and cache headers.
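If you are not using the Express middleware, serving the two files with sensible headers takes only the Node standard library. The sketch below is one way to do it; the `max-age=300` value mirrors what the Express package uses, and the wide-open CORS policy is reasonable because discovery is read-only public metadata.

```typescript
// Illustrative static server for the two discovery files, using only
// the Node standard library. Header values are a sketch, not mandated
// by the spec; max-age=300 mirrors the Express package's default.
import http from "node:http";
import { readFileSync } from "node:fs";

function discoveryHeaders(contentType: string): Record<string, string> {
  return {
    "Content-Type": contentType,
    "Cache-Control": "public, max-age=300",
    // Discovery files are public, read-only metadata, so permissive
    // CORS lets browser-based agents fetch them cross-origin.
    "Access-Control-Allow-Origin": "*",
  };
}

const server = http.createServer((req, res) => {
  if (req.url === "/.well-known/agents.txt") {
    res.writeHead(200, discoveryHeaders("text/plain; charset=utf-8"));
    res.end(readFileSync("agents.txt"));
  } else if (req.url === "/.well-known/agents.json") {
    res.writeHead(200, discoveryHeaders("application/json; charset=utf-8"));
    res.end(readFileSync("agents.json"));
  } else {
    res.writeHead(404).end();
  }
});
// server.listen(8080);
```

In practice you would put this behind TLS and let a CDN honor the `Cache-Control` header.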
What it is not
It is not an enforcement layer and not a protocol by itself. It is the discovery layer above those concerns.
Relationship to other files
agents.txt has a narrow role: capability discovery for websites. It complements adjacent standards rather than replacing them.
| Standard | Purpose | Relationship |
|---|---|---|
| robots.txt | Deny crawling | Tells automated systems which paths not to access. |
| llms.txt | Guide reading | Publishes content for models to read. |
| agents.txt | Declare actions | Website-level capability discovery for agents. |
Packages
- @agents-txt/core
- Parser, generator, validator, schema, and HTTP discovery client for the text and JSON formats.
- @agents-txt/express
- Express middleware that serves both files with cache-control, CORS, security headers, and an optional in-process rate limiter.
- @agents-txt/mcp
- MCP bridge that discovers a compliant site and turns declared REST capabilities into MCP tools.
Examples
The repository includes static templates for ecommerce, blogs, and SaaS, a basic Express app, an MCP bridge, a live demo, and a browser-based generator.
Entry points
Use middleware if you want the files served for you, or publish static files directly if that is all you need.
npm install @agents-txt/express
npx @agents-txt/mcp https://example.com
Practical notes
Use HTTPS in production. Do not put API keys, tokens, or other secrets in the declaration.
Serve cache headers and discovery-friendly CORS. The Express package uses `max-age=300`.
The spec includes platform declarations and bidirectional agent declarations via `Declaration-Type`, `Operates-On`, and `Agent-Declaration`.
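The field names `Declaration-Type`, `Operates-On`, and `Agent-Declaration` come from the spec, but their exact semantics are defined there, not here. The fragment below only sketches where such fields could sit in a declaration; every value in it is invented for illustration.

```
# Hypothetical platform declaration. Field names from the spec;
# all values below are illustrative assumptions.
Spec-Version: 1.0
Declaration-Type: platform
Site-Name: Example Platform
Site-URL: https://example.com
Operates-On: https://shops.example.com
Agent-Declaration: https://example.com/.well-known/agents.txt
```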
Declaration is not enforcement
Servers still need independent authz and rate-limit enforcement. The file describes the contract; it is not the trust boundary.
Identity is still weak by default
Per-agent policy commonly relies on `User-Agent`. That is useful, but it is not strong identity on its own.
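To show why this is advisory rather than authenticated, here is a sketch of how a server might resolve a per-agent policy from `User-Agent`, preferring a named block and falling back to `*`. The policy shape is invented for illustration; any client can send any `User-Agent` string, which is exactly the weakness the text describes.

```typescript
// Illustrative per-agent policy resolution: prefer a named agent block
// whose name appears in the User-Agent string, else fall back to "*".
// The policy shape is an assumption for this sketch. Note that
// User-Agent is self-reported, so this is routing, not identity.
interface AgentPolicy {
  agent: string; // "*" or an agent name expected in User-Agent
  allow: boolean;
  rateLimit?: string;
}

function resolvePolicy(
  policies: AgentPolicy[],
  userAgent: string,
): AgentPolicy | undefined {
  const ua = userAgent.toLowerCase();
  const named = policies.find(
    (p) => p.agent !== "*" && ua.includes(p.agent.toLowerCase()),
  );
  return named ?? policies.find((p) => p.agent === "*");
}
```

A request claiming `ExampleBot/2.1` would match an `ExampleBot` block if one exists; anything else gets the `*` policy. Real enforcement needs credentials, not string matching.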
Conformance is still maturing
The repo includes parser, validator, examples, and tests, but interoperability still depends on multiple clients implementing the same rules consistently.