anybrowse
First scrape in 60 seconds
No signup. No API key. Pick your setup:

Add to your MCP config and ask your agent to scrape anything:

MCP config (all clients)
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}

Then ask your agent:

Natural language prompt
Scrape https://techcrunch.com and summarize the top stories

Where to put this config:

Client           Config location
Claude Code      ~/.claude/claude_desktop_config.json → mcpServers
Cursor           Settings → MCP → Add Server
Windsurf         ~/.codeium/windsurf/mcp_config.json → mcpServers
Claude Desktop   ~/Library/Application Support/Claude/claude_desktop_config.json
curl
curl -X POST https://anybrowse.dev/scrape \
  -H "Content-Type: application/json" \
  -d '{"url": "https://techcrunch.com"}'
Response
# TechCrunch
## Top Stories

Latest news from the world of technology...
(returns full page as clean Markdown)
Python
import requests

r = requests.post("https://anybrowse.dev/scrape",
                  json={"url": "https://techcrunch.com"})
print(r.json()["markdown"])
Returns JSON for REST, plain text for MCP

REST API returns {"markdown": "..."}. MCP tools return plain markdown strings directly.

WHAT HAPPENS NEXT
Free
$0
10 scrapes/day. No API key. No signup. Resets at midnight UTC.
Pay-per-use
$0.002
Per scrape via x402 micropayments. USDC on Base. No subscription.

When you hit the free limit, you get a 402 with an upgrade link in the response. Rate limits reset daily at 00:00 UTC.

MCP SETUP

Claude Code

~/.claude/claude_desktop_config.json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}

Or use a project-local .mcp.json file in your repo root with the same structure.

Cursor

Go to Settings → MCP → Add Server and paste:

Cursor MCP config
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}

Windsurf

Go to Settings → Model Context Protocol or edit directly:

~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}

Claude Desktop

~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp"
    }
  }
}
No key needed

Free tier works out of the box. For Pro, add "apiKey": "ab_your_key" to the server config.
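
For example, a Pro server entry would look like this (with ab_your_key replaced by your real key):

```json
{
  "mcpServers": {
    "anybrowse": {
      "type": "streamable-http",
      "url": "https://anybrowse.dev/mcp",
      "apiKey": "ab_your_key"
    }
  }
}
```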

MCP TOOLS & API REFERENCE
POST /scrape (MCP tool: scrape)

Convert any URL to clean, LLM-ready Markdown. 84% scrape success rate — works on JavaScript-heavy sites, Cloudflare-protected pages, and government sites. Uses real Chrome with stealth mode, automatic CAPTCHA solving, and proxy fallback.

Param                Type      Description
url (required)       string    URL to scrape (http/https)
slow (optional)      boolean   Wait extra time for JS-heavy sites (default: false)
waitFor (optional)   number    Custom JS wait in ms (default: 5000)
Example
curl -X POST https://anybrowse.dev/scrape \
  -H "Content-Type: application/json" \
  -d '{"url": "https://news.ycombinator.com"}'

# Returns: plain Markdown text of the page
POST /crawl (MCP tool: crawl)

Recursively crawl a site and return multiple pages as Markdown. Ideal for docs sites and product catalogs.

Param                  Type      Description
url (required)         string    Starting URL
maxPages (optional)    number    Max pages to crawl (default: 10, max: 100)
maxDepth (optional)    number    Link depth from start URL (default: 2)
sameDomain (optional)  boolean   Only follow same-domain links (default: true)
Response (200 OK)
{
  "pages": [{ "url": "https://docs.example.com", "title": "Introduction", "markdown": "# Introduction..." }],
  "crawled": 5, "skipped": 1
}
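
A Python sketch of consuming that response shape, merging crawled pages into one Markdown document (merge_pages and its URL-comment separator are my own convention, not part of the API):

```python
import requests

def crawl_to_markdown(start_url: str, max_pages: int = 10, max_depth: int = 2) -> str:
    """Crawl a site via POST /crawl and merge the pages into one Markdown doc."""
    r = requests.post("https://anybrowse.dev/crawl",
                      json={"url": start_url, "maxPages": max_pages,
                            "maxDepth": max_depth},
                      timeout=300)
    r.raise_for_status()
    return merge_pages(r.json())

def merge_pages(response: dict) -> str:
    """Join each crawled page's markdown, prefixed with its source URL."""
    sections = [f"<!-- {p['url']} -->\n{p['markdown']}" for p in response["pages"]]
    return "\n\n---\n\n".join(sections)
```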
POST /batch (MCP tool: batch_scrape)

Scrape up to 10 URLs in parallel in a single call. More efficient than looping /scrape.

Param            Type      Description
urls (required)  string[]  Array of URLs (max 10)
Example
curl -X POST https://anybrowse.dev/batch \
  -H "Content-Type: application/json" \
  -d '{"urls": ["https://techcrunch.com", "https://theverge.com", "https://wired.com"]}'

# Response: { "results": [{ "url": "...", "success": true, "markdown": "...", "title": "..." }] }
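
Since /batch caps at 10 URLs per call, a caller with a longer list has to chunk it. An illustrative Python sketch (chunk and batch_scrape are hypothetical helper names, not an official SDK):

```python
import requests

def chunk(urls: list[str], size: int = 10) -> list[list[str]]:
    """Split a URL list into /batch-sized groups (the endpoint caps at 10)."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]

def batch_scrape(urls: list[str]) -> dict[str, str]:
    """Scrape many URLs, 10 at a time, returning {url: markdown} for successes."""
    out = {}
    for group in chunk(urls):
        r = requests.post("https://anybrowse.dev/batch",
                          json={"urls": group}, timeout=120)
        r.raise_for_status()
        for result in r.json()["results"]:
            if result["success"]:
                out[result["url"]] = result["markdown"]
    return out
```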
POST /extract (MCP tool: extract)

Extract structured JSON from any page using a field schema. Returns typed data alongside raw Markdown.

Param              Type    Description
url (required)     string  URL to extract from
schema (required)  object  Field names mapped to types: string, number, boolean, array
Example
curl -X POST https://anybrowse.dev/extract \
  -H "Content-Type: application/json" \
  -d '{"url": "https://shop.example.com/product/1",
       "schema": {"price": "number", "title": "string", "in_stock": "boolean"}}'

# Response: { "data": { "price": 29.99, "title": "Widget Pro", "in_stock": true }, "markdown": "..." }
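
An illustrative Python wrapper that rejects unsupported schema types client-side before calling /extract (the helper names and the client-side check are mine, not part of the API):

```python
import requests

# The four field types the /extract schema documents.
ALLOWED_TYPES = {"string", "number", "boolean", "array"}

def validate_schema(schema: dict) -> dict:
    """Raise early on field types the /extract endpoint does not document."""
    bad = {k: v for k, v in schema.items() if v not in ALLOWED_TYPES}
    if bad:
        raise ValueError(f"unsupported schema types: {bad}")
    return schema

def extract(url: str, schema: dict) -> dict:
    """POST /extract and return the typed data object."""
    r = requests.post("https://anybrowse.dev/extract",
                      json={"url": url, "schema": validate_schema(schema)},
                      timeout=60)
    r.raise_for_status()
    return r.json()["data"]
```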
ERROR REFERENCE
Status  Error                        Fix
402     Daily limit reached          Upgrade to Pro at /checkout or use x402 pay-per-use
402     Payment verification failed  Check wallet balance; use the x402-fetch library (see below)
200     Empty or minimal content     Site may need more JS time; add "slow": true to the request
451     BLOCKED                      Site blocked scraping after proxy fallback; try a different URL or contact us
429     TOO_MANY_REQUESTS            Check the Retry-After header and wait before retrying
504     TIMEOUT                      Page took over 30s to load; retry later or skip the URL
500     INTERNAL_ERROR               Retry once; if persistent, the URL may be permanently blocked
400     BAD_REQUEST                  Check that url starts with http:// or https://
Error response shape
{ "error": "Human-readable message", "reason": "REASON_CODE", "upgrade": "https://anybrowse.dev/checkout" }
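
The table above can be applied mechanically in a client. A minimal, illustrative dispatcher (the function name and returned action strings are my own convention, built on the documented status codes, Retry-After header, and error response shape):

```python
def next_action(status: int, headers: dict, body: dict) -> str:
    """Map an anybrowse error response to a recovery action per the error table."""
    if status == 402:
        # Body carries an upgrade link when the daily free limit is hit.
        return f"upgrade: {body.get('upgrade', 'https://anybrowse.dev/checkout')}"
    if status == 429:
        # Retry-After is given in seconds.
        return f"retry after {int(headers.get('Retry-After', '60'))}s"
    if status in (500, 504):
        return "retry once, then skip"
    if status == 451:
        return "blocked: try a different URL"
    return f"give up: {body.get('error', 'unknown error')}"
```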
x402 MICROPAYMENTS

Pay $0.002/scrape in USDC on Base. No subscription, no account needed. The x402-fetch library handles payment automatically.

Install
npm install x402-fetch
JavaScript / TypeScript
import { x402Fetch } from 'x402-fetch';

const client = x402Fetch({
  wallet: { privateKey: process.env.WALLET_PRIVATE_KEY, network: 'base' },
  facilitator: 'coinbase'
});

const res = await client('https://anybrowse.dev/scrape', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ url: 'https://techcrunch.com' })
});
const markdown = await res.text();
Test for free on Base Sepolia

Use network: 'base-sepolia' and get test USDC from the Circle faucet.

RATE LIMITS
Endpoint         Free     Pro           x402
/scrape          10/day   10,000/month  Unlimited (pay per call)
/crawl           10/day   10,000/month  Unlimited
/batch           5/day    Unlimited     Unlimited
/serp/search     10/day   10,000/month  Unlimited
/mcp (connect)   Free     Free          Free

Limits reset at midnight UTC. Rate limit responses include a Retry-After header in seconds.