# Serge — Full Documentation

> Agent readiness platform. Scans APIs and scores how ready they are for AI agents.
> For a quick overview, see [llms.txt](https://serge.ai/llms.txt).

## Overview

Serge is the agent readiness platform. It scans any domain and measures how visible, understandable, and integrable the product is for AI agents. The result is a score out of 100, broken down across 6 layers with 35 individual checks.

AI agents are becoming a primary distribution channel for SaaS products. If an agent cannot discover your product, understand your API, or authenticate programmatically, you miss integration opportunities. Serge measures this gap and tells you exactly what to fix.

## The Agent Readiness Stack

The Agent Readiness Stack is a 6-layer framework that defines what "agent ready" means. Every platform is scored across all 6 layers.

### Layer 1: Discovery — Can agents find you?

Can an AI agent discover that your product exists and what it does?

Checks:

- llms.txt presence and quality — Does a /llms.txt file exist with sufficient content describing the product and its API?
- Agent card — Does /.well-known/agent.json exist with name, description, and capabilities?
- JSON-LD structured data — Does the homepage contain Organization, Product, or WebApplication schema?
- Sitemap with API references — Does /sitemap.xml exist and reference developer/API documentation pages?
- robots.txt AI-friendly — Does robots.txt allow AI agent crawlers (GPTBot, ClaudeBot, etc.)?
- Developer hub discoverability — Is there a /docs, /developers, or /api page accessible from the homepage?

### Layer 2: Schema — Can agents understand your API?

Once found, can an agent understand the API contract?

Checks:

- OpenAPI spec exists — Is there a valid OpenAPI/Swagger specification at standard paths?
- Schema completeness — What percentage of endpoints have typed request/response schemas?
- Error schema documentation — Are 4xx/5xx error responses documented with schemas?
- Rate limit documentation — Are rate limits documented in the API spec?
- Pagination patterns — Are pagination patterns (cursor, offset, page) documented?
- Examples and descriptions — Do endpoints have descriptions and example values?

### Layer 3: Protocol — Can agents interact?

Does the product support modern agent interaction protocols?

Checks:

- MCP server available — Is a Model Context Protocol server registered in directories?
- A2A card — Does the agent card include capabilities and supported protocols?
- SDKs available — Are SDKs available in major programming languages?
- Webhooks documented — Are webhook events documented in the API spec?
- API versioning — Is there a clear API versioning strategy?
- Capability advertisement — How many channels advertise capabilities (llms.txt, OpenAPI, agent card, JSON-LD, MCP)?

### Layer 4: Auth — Can agents authenticate?

Can an agent authenticate and operate with appropriate permissions?

Checks:

- OAuth 2.0 M2M flow — Is the client credentials flow supported for machine-to-machine auth?
- API key management — Can API keys be created, rotated, and revoked programmatically?
- Scoped permissions — Are tokens scoped with least-privilege access?
- Token refresh — Are token refresh mechanics documented?
- Service account support — Can service accounts operate without human intervention?

### Layer 5: Behavior — Is the API agent-friendly?

Does the API behave in ways agents can handle?

Checks:

- Idempotent operations — Are write operations idempotent?
- Structured errors — Do errors follow RFC 7807 or a consistent schema?
- Rate limit headers — Are X-RateLimit-* headers present in responses?
- Retry-After headers — Do 429/503 responses include Retry-After?
- Pagination consistency — Is pagination consistent across all list endpoints?
- Deterministic responses — Are responses deterministic for the same input?

### Layer 6: Pricing — Is there agent-friendly pricing?

Can an agent evaluate and commit to pricing?
Checks:

- Machine-readable pricing — Is pricing available in a structured, parseable format?
- Usage-based tiers — Are usage tiers clearly defined with limits?
- API-specific pricing — Is there a dedicated API pricing page?
- Free tier available — Is there a free tier or sandbox for evaluation?
- Cost estimation — Are there endpoints or documentation for estimating API usage costs?

## Scoring methodology

Each check produces a status: pass (100%), warn (50%), or fail (0%). A layer's score is the equally weighted average of its checks. The overall score is the equally weighted average of all 6 layer scores.

Score interpretation:

- 85-100: Agent ready — well-positioned for discovery and integration
- 65-84: Well-positioned — most checks passing with room to improve
- 45-64: Making progress — agents can interact for basic tasks
- 25-44: Room to grow — agents can discover you but may hit gaps
- 0-24: Just getting started — most agent-readable signals are missing

## API reference

### POST /api/scan

Initiate a domain scan. Returns a Server-Sent Events stream with real-time scan progress.

Request:

```json
{
  "domain": "stripe.com"
}
```

The domain must be a valid hostname (e.g., "stripe.com", "api.example.com"). No protocol prefix.

Response: Server-Sent Events stream with the following event types:

- `status` — Scan status update. Data includes the current phase description.
- `crawl` — Crawl progress update. Data includes the URL being crawled and its status.
- `layer` — A layer scan completed. Data includes layer number, name, score, and individual check results.
- `complete` — Scan finished. Data includes scan ID, domain, overall score, and per-layer scores.
- `error` — Scan failed. Data includes the error message.
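The event types above arrive in the standard SSE wire format: `event:` and `data:` fields separated by blank lines. A minimal parsing sketch — the helper name and the sample payload fields are illustrative assumptions, not part of the API contract:

```javascript
// Parse a raw SSE chunk into { event, data } records.
// Events are separated by blank lines, per the SSE format.
function parseSSE(chunk) {
  return chunk
    .split('\n\n')
    .filter((block) => block.trim().length > 0)
    .map((block) => {
      const record = { event: 'message', data: '' }
      for (const line of block.split('\n')) {
        if (line.startsWith('event:')) record.event = line.slice(6).trim()
        else if (line.startsWith('data:')) record.data += line.slice(5).trim()
      }
      return record
    })
}

// Example: a hypothetical `complete` event as the scanner might emit it.
const raw = 'event: complete\ndata: {"domain":"stripe.com","overallScore":72}\n\n'
const [evt] = parseSSE(raw)
console.log(evt.event)                          // "complete"
console.log(JSON.parse(evt.data).overallScore)  // 72
```

A production consumer should also buffer partial chunks, since a network read can end mid-event; this sketch assumes each chunk contains whole events.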
Authentication (optional):

- Pass a secret key via the `Authorization: Bearer sk_serge_...` header
- Anonymous requests: 10 scans per hour per IP address
- Authenticated requests: 60 scans per hour per workspace
- Create secret keys at https://serge.ai/member/api-keys (free account required)
- 5 scans per hour per domain (shared across tiers)
- 429 responses include a Retry-After header

### GET /api/scan/{id}

Retrieve results for a completed scan.

Path parameters:

- id (required): UUID of the scan

Response:

```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "domain": "stripe.com",
  "overallScore": 72,
  "layerScores": {
    "discovery": 83,
    "schema": 90,
    "protocol": 65,
    "auth": 60,
    "behavior": 55,
    "pricing": 40
  },
  "checkResults": [
    {
      "layer": 1,
      "key": "llms_txt",
      "status": "pass",
      "message": "llms.txt found with 15 lines and API documentation references",
      "fix": null,
      "evidence": { "lineCount": 15, "hasApiMentions": true }
    }
  ],
  "createdAt": "2026-03-24T10:00:00Z",
  "lastSeenAt": "2026-03-24T10:00:00Z",
  "seenCount": 1
}
```

Cache: Responses are cached for 1 hour (`Cache-Control: public, max-age=3600`).
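Given the response shape above, an integration can turn `checkResults` into a remediation list. A minimal sketch — the sample check key and fix text are invented for illustration; only the field names come from the documented shape:

```javascript
// List checks that did not pass, keeping the suggested fix for each.
// `scan` mirrors the documented GET /api/scan/{id} response shape.
function failingChecks(scan) {
  return scan.checkResults
    .filter((check) => check.status !== 'pass')
    .map((check) => ({ layer: check.layer, key: check.key, fix: check.fix }))
}

// Abbreviated sample; the pricing check key and fix string are assumptions.
const scan = {
  overallScore: 72,
  checkResults: [
    { layer: 1, key: 'llms_txt', status: 'pass', fix: null },
    { layer: 6, key: 'machine_readable_pricing', status: 'fail', fix: 'Publish structured pricing' },
  ],
}

console.log(failingChecks(scan))
// returns only the layer-6 failure with its fix
```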
Error responses:

- 400: Invalid scan ID format
- 404: Scan not found

## Example usage

### Scan a domain with curl (anonymous)

```bash
curl -X POST https://serge.ai/api/scan \
  -H "Content-Type: application/json" \
  -d '{"domain": "stripe.com"}'
```

### Scan with authentication (60 scans/hr)

```bash
curl -X POST https://serge.ai/api/scan \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk_serge_your_key_here" \
  -d '{"domain": "stripe.com"}'
```

### Retrieve scan results

```bash
curl https://serge.ai/api/scan/550e8400-e29b-41d4-a716-446655440000
```

### JavaScript example

```javascript
// Initiate a scan via SSE (add the Authorization header for 60 scans/hr)
const response = await fetch('https://serge.ai/api/scan', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer sk_serge_your_key_here',
  },
  body: JSON.stringify({ domain: 'stripe.com' }),
})

const reader = response.body.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  // `stream: true` keeps multi-byte characters intact across chunk boundaries
  const text = decoder.decode(value, { stream: true })
  // Parse SSE events
  console.log(text)
}
```

## Webhook events

Serge supports webhook notifications for monitored domains:

- `score.changed` — Fires when a monitored domain's agent readiness score changes. Includes the old score, new score, and domain.

Webhook payloads are JSON with the following structure:

```json
{
  "event": "score.changed",
  "domain": "example.com",
  "oldScore": 31,
  "newScore": 45,
  "timestamp": "2026-03-24T06:00:00Z"
}
```

## SergeBot (crawler)

SergeBot is the crawler that powers the Serge scanner. It runs only when a user initiates a scan — it is not an autonomous crawler.
User-Agent: `SergeBot/1.0 (+https://serge.ai/bot; agent-readiness-scanner)`

What it does:

- Checks for agent-readiness files: llms.txt, openapi.json, agent.json, sitemap.xml
- Parses HTML for structured data (JSON-LD, meta descriptions)
- Queries public registries (MCP Registry, PulseMCP, npm) — no crawling involved

What it does NOT do:

- Scrape, index, or store page content
- Follow links or crawl entire sites
- Train AI models on your content
- Access authenticated or private pages

Rate limits: ~20 requests per domain per scan, 6 concurrent, 8-second timeout per request. Respects robots.txt.

How to allow:

```
User-agent: SergeBot
Allow: /
```

How to block:

```
User-agent: SergeBot
Disallow: /
```

Full documentation: https://serge.ai/bot
Contact: bot@serge.ai

## Serge MCP (Claude Desktop extension)

Serge MCP is an MCP server for Claude Desktop that benchmarks websites for AI agent accessibility. When Claude browses a site through Serge MCP, every action is captured: how long pages take to load, whether buttons and form fields are discoverable, where navigation breaks down, and why. After a session, Serge generates a detailed HTML report with a step-by-step timeline, screenshots, and actionable findings.

No API keys, no separate accounts, no billing — Serge MCP works on your existing Claude subscription.

### Tools

- `serge_start_session` — Begins a benchmarking session. Requires a target domain and a task description.
- `serge_navigate` — Opens a URL in the browser. Returns the page title and accessibility tree.
- `serge_read_page` — Returns the current page's accessibility tree.
- `serge_click` — Clicks an element identified by its accessibility role and name.
- `serge_type` — Types text into a form field identified by its accessibility role and name.
- `serge_scroll` — Scrolls the page up or down.
- `serge_screenshot` — Takes a screenshot of the current page and returns it to Claude as an image.
- `serge_end_session` — Ends the session and generates the report.
### Installation

Add Serge MCP to your Claude Desktop config:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

```json
{
  "mcpServers": {
    "serge": {
      "command": "npx",
      "args": ["-y", "@serge-ai/mcp-server"]
    }
  }
}
```

Restart Claude Desktop. On first use, Serge MCP automatically installs the browser it needs.

### Example usage

Open Claude Desktop and type:

```
Use Serge to find the cheapest wireless headphones on digitec.ch
```

### Common findings

- Missing ARIA labels — elements exist visually but lack semantic markup for AI discovery
- Bot detection blocked — the CDN served a challenge page instead of actual content
- Element not found — the element doesn't exist in the accessibility tree (JS-rendered, iframe, non-standard component)
- Slow page load — the page took more than 3 seconds to reach a usable state
- Iframe content inaccessible — forms/content in iframes are invisible to the accessibility tree
- Duplicate accessible names — multiple elements share the same role and name

### Requirements

Node.js 18 or later. Claude Desktop with an active Claude Pro or Max subscription. macOS or Windows.

Full documentation: https://serge.ai/docs/mcp-server
GitHub: https://github.com/SuperstellarLLC/serge-mcp-server

## Company

Serge is built by Superstellar LLC.

- Website: https://serge.ai
- API documentation: https://serge.ai/docs
- MCP server documentation: https://serge.ai/docs/mcp-server
- OpenAPI specification: https://serge.ai/openapi.json
- Bot documentation: https://serge.ai/bot
- Contact: api@serge.ai