Add to your MCP config and ask your agent to scrape anything:
```json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}
```

Then ask your agent:
Scrape https://techcrunch.com and summarize the top stories
Where to put this config:
| Client | Config location |
|---|---|
| Claude Code | ~/.claude.json → mcpServers (or a project-local .mcp.json) |
| Cursor | Settings → MCP → Add Server |
| Windsurf | ~/.codeium/windsurf/mcp_config.json → mcpServers |
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json |
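If you prefer scripting the install, here is a minimal Python sketch that merges the server entry into one of the JSON config files above (the Claude Desktop path is used as the example; `add_anybrowse` is an illustrative helper, not part of any client):

```python
import json
from pathlib import Path

ANYBROWSE = {"type": "streamable-http", "url": "https://anybrowse.dev/mcp"}

def add_anybrowse(config: dict) -> dict:
    """Merge the anybrowse server into an MCP config dict, keeping other servers."""
    config.setdefault("mcpServers", {})["anybrowse"] = ANYBROWSE
    return config

# Example: patch the Claude Desktop config in place (path from the table above)
path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
if path.exists():
    config = json.loads(path.read_text())
    path.write_text(json.dumps(add_anybrowse(config), indent=2))
```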
```bash
curl -X POST https://anybrowse.dev/scrape \
  -H "Content-Type: application/json" \
  -d '{"url": "https://techcrunch.com"}'
```

This returns the full page as clean Markdown:

```markdown
# TechCrunch

## Top Stories

Latest news from the world of technology...
```
```python
import requests

r = requests.post("https://anybrowse.dev/scrape",
                  json={"url": "https://techcrunch.com"})
print(r.json()["markdown"])
```

The REST API returns `{"markdown": "..."}`; MCP tools return plain Markdown strings directly.
When you hit the free limit, you get a 402 with an upgrade link in the response. Rate limits reset daily at 00:00 UTC.
Claude Code
```json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}
```

Or use a project-local `.mcp.json` file in your repo root with the same structure.
Cursor
Go to Settings → MCP → Add Server and paste:
```json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}
```

Windsurf
Go to Settings → Model Context Protocol or edit directly:
```json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}
```

Claude Desktop
```json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}
```

Free tier works out of the box. For Pro, add `"apiKey": "ab_your_key"` to the server config.
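For example, a Pro config differs only by the key field (the key value below is a placeholder):

```json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp",
      "apiKey": "ab_your_key"
    }
  }
}
```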
scrape

Convert any URL to clean, LLM-ready Markdown. 84% scrape success rate; works on JavaScript-heavy sites, Cloudflare-protected pages, and government sites. Uses real Chrome with stealth mode, automatic CAPTCHA solving, and proxy fallback.
| Param | Type | Description |
|---|---|---|
| `url` (required) | string | URL to scrape (http/https) |
| `slow` (optional) | boolean | Wait extra time for JS-heavy sites (default: false) |
| `waitFor` (optional) | number | Custom JS wait in ms (default: 5000) |
```bash
curl -X POST https://anybrowse.dev/scrape \
  -H "Content-Type: application/json" \
  -d '{"url": "https://news.ycombinator.com"}'
# Returns: plain Markdown text of the page
```

crawl

Recursively crawl a site and return multiple pages as Markdown. Ideal for docs sites and product catalogs.
| Param | Type | Description |
|---|---|---|
| `url` (required) | string | Starting URL |
| `maxPages` (optional) | number | Max pages to crawl (default: 10, max: 100) |
| `maxDepth` (optional) | number | Link depth from start URL (default: 2) |
| `sameDomain` (optional) | boolean | Only follow same-domain links (default: true) |
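A minimal Python sketch that calls /crawl and stitches the returned pages into one Markdown document (the response shape is shown just below; helper names are illustrative):

```python
import json
import urllib.request

def combine_pages(result: dict) -> str:
    """Join a /crawl result's pages into one Markdown document,
    one section per crawled page, each preceded by its source URL."""
    sections = [f"<!-- {p['url']} -->\n{p['markdown']}" for p in result["pages"]]
    return "\n\n".join(sections)

def crawl(url: str, max_pages: int = 10, max_depth: int = 2) -> dict:
    body = json.dumps({"url": url, "maxPages": max_pages, "maxDepth": max_depth})
    req = urllib.request.Request(
        "https://anybrowse.dev/crawl",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# docs = combine_pages(crawl("https://docs.example.com", max_pages=20))
```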
```json
{
  "pages": [
    { "url": "https://docs.example.com", "title": "Introduction", "markdown": "# Introduction..." }
  ],
  "crawled": 5,
  "skipped": 1
}
```

batch_scrape

Scrape up to 10 URLs in parallel in a single call. More efficient than looping `/scrape`.
| Param | Type | Description |
|---|---|---|
| `urls` (required) | string[] | Array of URLs (max 10) |
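A Python sketch for /batch that separates successes from failures (field names follow the response example below; the helpers are illustrative):

```python
import json
import urllib.request

def split_results(response: dict) -> tuple[dict, list]:
    """Split a /batch response into {url: markdown} for successes
    and a list of failed URLs."""
    ok, failed = {}, []
    for r in response["results"]:
        if r.get("success"):
            ok[r["url"]] = r["markdown"]
        else:
            failed.append(r["url"])
    return ok, failed

def batch_scrape(urls: list) -> dict:
    req = urllib.request.Request(
        "https://anybrowse.dev/batch",
        data=json.dumps({"urls": urls}).encode(),  # max 10 URLs per call
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# ok, failed = split_results(batch_scrape(["https://techcrunch.com", "https://wired.com"]))
```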
```bash
curl -X POST https://anybrowse.dev/batch \
  -H "Content-Type: application/json" \
  -d '{"urls": ["https://techcrunch.com", "https://theverge.com", "https://wired.com"]}'
# Response: { "results": [{ "url": "...", "success": true, "markdown": "...", "title": "..." }] }
```

extract

Extract structured JSON from any page using a field schema. Returns typed data alongside the raw Markdown.
| Param | Type | Description |
|---|---|---|
| `url` (required) | string | URL to extract from |
| `schema` (required) | object | Field names mapped to types: string, number, boolean, array |
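Since the schema also tells you what types to expect back, here is a Python sketch that calls /extract and sanity-checks the returned data (helper names are illustrative):

```python
import json
import urllib.request

def matches_schema(data: dict, schema: dict) -> bool:
    """Check extracted data against a field schema.

    Type names ("string", "number", "boolean", "array") are the four
    supported kinds from the table above.
    """
    checks = {
        "string": lambda v: isinstance(v, str),
        "number": lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
        "boolean": lambda v: isinstance(v, bool),
        "array": lambda v: isinstance(v, list),
    }
    return all(field in data and checks[kind](data[field])
               for field, kind in schema.items())

def extract(url: str, schema: dict) -> dict:
    req = urllib.request.Request(
        "https://anybrowse.dev/extract",
        data=json.dumps({"url": url, "schema": schema}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# result = extract("https://shop.example.com/product/1",
#                  {"price": "number", "in_stock": "boolean"})
# assert matches_schema(result["data"], {"price": "number", "in_stock": "boolean"})
```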
```bash
curl -X POST https://anybrowse.dev/extract \
  -H "Content-Type: application/json" \
  -d '{"url": "https://shop.example.com/product/1",
       "schema": {"price": "number", "title": "string", "in_stock": "boolean"}}'
# Response: { "data": { "price": 29.99, "title": "Widget Pro", "in_stock": true }, "markdown": "..." }
```

search

Real-time web search via the Brave Search API; fast and reliable. Returns results as structured JSON. Use it before scraping to find the right pages.
| Param | Type | Description |
|---|---|---|
| `query` (required) | string | Search query |
| `num` (optional) | number | Number of results (default: 10) |
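A sketch of the search-then-scrape pattern in Python, using only the standard library (helper names are illustrative):

```python
import json
import urllib.request

API = "https://anybrowse.dev"

def post(path: str, payload: dict) -> str:
    """POST JSON to an anybrowse endpoint and return the raw response body."""
    req = urllib.request.Request(
        API + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def top_urls(search_response: dict, n: int = 3) -> list:
    """Pick the first n result URLs from a /serp/search response."""
    return [r["url"] for r in search_response["results"][:n]]

# results = json.loads(post("/serp/search", {"query": "best MCP servers 2025", "num": 5}))
# for url in top_urls(results):
#     print(post("/scrape", {"url": url})[:200])
```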
```bash
curl -X POST https://anybrowse.dev/serp/search \
  -H "Content-Type: application/json" \
  -d '{"query": "best MCP servers 2025", "num": 5}'
# Response: { "results": [{ "title": "...", "url": "...", "snippet": "..." }] }
```

| Status | Error | Fix |
|---|---|---|
| 402 | Daily limit reached | Upgrade to Pro at /checkout or use x402 pay-per-use |
| 402 | Payment verification failed | Check wallet balance; use the x402-fetch library (see below) |
| 200 | Empty or minimal content | Site may require more JS time. Add "slow": true to request |
| 451 | BLOCKED | Site blocked scraping after proxy fallback. Try a different URL or contact us |
| 429 | TOO_MANY_REQUESTS | Check the Retry-After header and wait before retrying |
| 504 | TIMEOUT | Page took over 30s to load. Try with a shorter timeout or skip |
| 500 | INTERNAL_ERROR | Retry once; if persistent the URL may be permanently blocked |
| 400 | BAD_REQUEST | Check that url starts with http:// or https:// |
```json
{
  "error": "Human-readable message",
  "reason": "REASON_CODE",
  "upgrade": "https://anybrowse.dev/checkout"
}
```

Pay $0.002/scrape in USDC on Base. No subscription, no account needed. The x402-fetch library handles payment automatically.
```bash
npm install x402-fetch
```

```javascript
import { x402Fetch } from 'x402-fetch';

const client = x402Fetch({
  wallet: { privateKey: process.env.WALLET_PRIVATE_KEY, network: 'base' },
  facilitator: 'coinbase'
});

const res = await client('https://anybrowse.dev/scrape', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ url: 'https://techcrunch.com' })
});
const markdown = await res.text();
```

For testing, use `network: 'base-sepolia'` and get test USDC from the Circle faucet.
| Endpoint | Free | Pro | x402 |
|---|---|---|---|
| /scrape | 10/day | 10,000/month | Unlimited (pay per call) |
| /crawl | 10/day | 10,000/month | Unlimited |
| /batch | 5/day | Unlimited | Unlimited |
| /serp/search | 10/day | 10,000/month | Unlimited |
| /mcp (connect) | Free | Free | Free |
Limits reset at midnight UTC. Rate limit responses include a Retry-After header in seconds.