Integrations
Plug anybrowse into any AI stack. URL to Markdown in one call: no browser, no config, no headache.
Use anybrowse as LangChain tools: drop them into any agent, chain, or RAG pipeline. Handles JS-heavy pages, paywalls, and anti-bot measures automatically.
pip install anybrowse[langchain]
from anybrowse.langchain import get_tools
from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model="gpt-4")
tools = get_tools()  # free tier; or get_tools(api_key="your-key")
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS)
result = agent.run("Research the latest AI agent frameworks and summarize the top 3")
from anybrowse.langchain import AnybrowseLoader
# Use as a document loader
loader = AnybrowseLoader("https://techcrunch.com")
docs = loader.load()
print(docs[0].page_content[:500])
Available tools: `anybrowse_scrape`, `anybrowse_search`
Load any URL as a LlamaIndex Document, ready for indexing, querying, and RAG workflows.
pip install anybrowse[llamaindex]
from anybrowse.llamaindex import AnybrowseReader
reader = AnybrowseReader() # or AnybrowseReader(api_key="your-key")
docs = reader.load_data(["https://techcrunch.com"])
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex.from_documents(docs)
query_engine = index.as_query_engine()
response = query_engine.query("What are the key announcements this week?")
print(response)
Use anybrowse as a CrewAI tool for web-enabled agents.
pip install anybrowse crewai crewai-tools
from crewai_tools import tool
from anybrowse import AnybrowseClient
client = AnybrowseClient() # or AnybrowseClient(api_key="your-key")
@tool("Web Scraper")
def scrape_url(url: str) -> str:
    """Scrape any URL and return clean Markdown content for analysis."""
    return client.scrape(url).markdown

@tool("Web Search")
def search_web(query: str) -> str:
    """Search the web and return structured Markdown results."""
    results = client.search(query)
    return "\n\n".join([f"## {r.title}\n{r.url}" for r in results])
# Use in a crew
from crewai import Agent, Task, Crew
researcher = Agent(
    role="Research Analyst",
    goal="Find and synthesize information from the web",
    backstory="Expert at web research and competitive intelligence",
    tools=[scrape_url, search_web]
)
task = Task(
    description="Research the latest AI agent frameworks and summarize the top 3",
    expected_output="A short Markdown summary with sources",  # illustrative
    agent=researcher
)
crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
Add anybrowse as a function tool for AutoGen agents.
pip install anybrowse pyautogen
from anybrowse import AnybrowseClient
import autogen
client = AnybrowseClient()
def scrape_url(url: str) -> str:
    """Scrape any URL and return Markdown."""
    return client.scrape(url).markdown

def search_web(query: str) -> str:
    """Search the web. Returns Markdown with results."""
    results = client.search(query)
    return "\n\n".join([f"{r.title}: {r.url}" for r in results])
# Configure and use in an agent
config = autogen.config_list_from_json("OAI_CONFIG_LIST")
assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config})
user = autogen.UserProxyAgent(
    "user",
    human_input_mode="NEVER",
    function_map={"scrape_url": scrape_url, "search_web": search_web}
)
user.initiate_chat(assistant, message="Research the latest AI agent frameworks")
Use the HTTP Request node to call anybrowse from any n8n workflow.
- Add an HTTP Request node to your workflow
- Set Method to POST and URL to `https://anybrowse.dev/scrape`
- Set Body Type to JSON
- Set the request body: `{ "url": "{{ $json.url }}" }`
- Use `{{ $json.markdown }}` from the response in your next node

With an API key, add the header `Authorization: Bearer your-api-key`.
Use Webhooks by Zapier to call anybrowse from any Zap.
- Add a Webhooks by Zapier → POST action
- URL: `https://anybrowse.dev/scrape`
- Payload Type: JSON
- Data: `url`, mapped from your trigger
- Map the `markdown` field from the response to your next action (Google Docs, Notion, Slack, etc.)
Use the HTTP → Make a request module to call anybrowse in any scenario.
- Add an HTTP → Make a request module
- URL: `https://anybrowse.dev/scrape`
- Method: POST
- Body type: Raw · Content type: application/json
- Request content: `{ "url": "{{url}}" }`
- Parse the response and use `markdown` downstream
No SDK needed. Works from any language or tool that can make HTTP requests.
# Scrape a URL
curl -X POST https://anybrowse.dev/scrape \
-H "Content-Type: application/json" \
-d '{"url": "https://techcrunch.com"}'
# Web search
curl -X POST https://anybrowse.dev/serp/search \
-H "Content-Type: application/json" \
-d '{"q": "web scraping API 2025", "count": 5}'
# With API key (Pro)
curl -X POST https://anybrowse.dev/scrape \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-key" \
-d '{"url": "https://techcrunch.com"}'
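The same endpoints are just as easy to call from code. A stdlib Python sketch of the search request above, using the `q` and `count` parameters from the curl example; the network call itself is left commented:

```python
import json
import urllib.request

# Build the /serp/search POST shown in the curl example above.
payload = json.dumps({"q": "web scraping API 2025", "count": 5}).encode("utf-8")
req = urllib.request.Request(
    "https://anybrowse.dev/serp/search",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Sending the request (network call, left commented):
# results = json.loads(urllib.request.urlopen(req).read())
```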