If you’re developing with AI, Firecrawl offers several resources to improve your experience. Firecrawl ships with skills — self-contained knowledge packs that AI coding agents discover and use automatically. One install command gives agents CLI tools for live web work and build skills for integrating Firecrawl into application code. Agents like Claude Code, Cursor, Antigravity, and OpenCode can self-onboard with a single command — no human setup required once an API key exists.

Documentation Index
Fetch the complete documentation index at: https://firecrawl-mog-monitoring-docs.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
- Prerequisite: Create an API Key
- Skills + CLI
- Using Firecrawl as a Tool
- Firecrawl MCP Server
- Firecrawl Docs for Agents
- Quick Start Guides
- Agent Harnesses
- SDKs
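The llms.txt index above is typically a markdown-formatted list of links. A minimal sketch of turning such an index into structured (title, url) pairs an agent can iterate over — the sample lines below are illustrative, assuming the common markdown-link format, and are not the real index contents:

```python
import re

def parse_llms_index(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from markdown-style link lines."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text)

# Illustrative sample lines, not the real index contents.
sample = """\
- [Quickstart](https://docs.firecrawl.dev/quickstart): Get started
- [Scrape](https://docs.firecrawl.dev/features/scrape): Extract content
"""
pages = parse_llms_index(sample)
```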
Prerequisite: Create an API Key
Currently, we require a human to create a Firecrawl account. Once you have an account, create an API key. With an API key, your agent can handle the rest — installing the skills, authenticating the CLI, wiring up MCP, and making calls on your behalf.

Get your API key
Sign up and grab an API key to start using Firecrawl.
Skills + CLI
The Firecrawl CLI lets your agent search, scrape, interact, crawl, map, extract, and run agent jobs from the terminal. It’s built for humans, AI agents, and CI/CD pipelines. The Firecrawl skills are self-contained knowledge packs that AI coding agents like Claude Code, Antigravity, and OpenCode discover and use automatically. A single install command — `npx -y firecrawl-cli@latest init --all --browser` — sets up everything: the CLI tools for live web work and the build skills for integrating Firecrawl into application code.

- `--all` installs the Firecrawl skills to every detected AI coding agent on the machine
- `--browser` opens the browser for Firecrawl authentication automatically
What the install gives you
The install sets up two categories of skills that cover every way an agent uses Firecrawl.

CLI skills — for live web work during an agent session:

| Skill | Purpose |
|---|---|
| firecrawl/cli | Overall CLI command workflow |
| firecrawl-search | Search the web and discover pages |
| firecrawl-scrape | Extract clean content from a known URL |
| firecrawl-interact | Interact with scraped pages using prompts or code |
| firecrawl-crawl | Bulk-extract content from an entire site |
| firecrawl-map | Discover all URLs on a domain |
| firecrawl-agent | Run autonomous web data gathering with a job |

Build skills — for integrating Firecrawl into application code:

| Skill | Purpose |
|---|---|
| firecrawl-build | Choose the right Firecrawl endpoint for your product |
| firecrawl-build-onboarding | Auth and project setup |
| firecrawl-build-scrape | Implement scraping in app code |
| firecrawl-build-search | Implement search in app code |
| firecrawl-build-interact | Implement page interaction in app code |
| firecrawl-build-crawl | Implement crawling in app code |
| firecrawl-build-map | Implement URL discovery in app code |
Choose your path
Both skill categories use the same install. The difference is what happens next:

Live web tools (CLI skills)

Use this when you need web data during your current session — searching the web, scraping known URLs, interacting with scraped pages, crawling docs, mapping a site, or running an agent job. The default flow:
- Start with search when you need discovery
- Move to scrape when you have a URL
- Use interact when the scraped page needs follow-up actions
- Use map or crawl when you need many URLs or pages
- Use agent when the task is open-ended and needs autonomous discovery
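One way to encode the default flow above is as a simple lookup. This is a sketch, not an official API — the need labels are paraphrased from the list and are purely illustrative:

```python
# Hypothetical routing table paraphrasing the default flow above;
# the need labels are illustrative, not part of any Firecrawl API.
TOOL_FOR_NEED = {
    "discovery": "search",    # no URL yet
    "known_url": "scrape",    # have a URL, need clean content
    "follow_up": "interact",  # scraped page needs clicks or forms
    "many_urls": "map",       # need the URLs across a site
    "many_pages": "crawl",    # need the content across a site
    "open_ended": "agent",    # autonomous multi-step gathering
}

def pick_tool(need: str) -> str:
    # Default to search, since discovery is the starting point of the flow.
    return TOOL_FOR_NEED.get(need, "search")
```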
App integration (Build skills)
Use this when you’re building an application, agent, or workflow that calls the Firecrawl API from code. The build skills help with picking the right endpoint, wiring up the SDK, and running a smoke test.

The agent answers one key question — what should Firecrawl do in the product? — and the build skills route to `/search`, `/scrape`, `/interact`, `/crawl`, `/map`, or `/agent` accordingly.

REST API (no install needed)
If you prefer not to install anything, agents can call the Firecrawl REST API directly. Set the API key and hit the endpoints:
- `POST https://api.firecrawl.dev/v2/search` — discover pages by query
- `POST https://api.firecrawl.dev/v2/scrape` — extract clean markdown from a URL
- `POST https://api.firecrawl.dev/v2/interact` — interact with a scraped page
- `POST https://api.firecrawl.dev/v2/crawl` — bulk-extract an entire site
- `POST https://api.firecrawl.dev/v2/map` — discover URLs on a domain
- `POST https://api.firecrawl.dev/v2/agent` — run autonomous web data gathering
Authenticate each request with the header `Authorization: Bearer fc-YOUR_API_KEY`. The onboarding skill is at firecrawl.dev/agent-onboarding/SKILL.md — agents can fetch it directly for self-onboarding.
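A minimal sketch of building an authenticated call to one of the endpoints above using only Python’s standard library. The request construction follows the endpoints and header shown on this page; the response fields are not shown because their exact shape isn’t specified here:

```python
import json
import urllib.request

API_BASE = "https://api.firecrawl.dev/v2"

def build_request(endpoint: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Build an authenticated POST request for a Firecrawl v2 endpoint."""
    return urllib.request.Request(
        f"{API_BASE}/{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# A scrape request; pass it to urllib.request.urlopen(req) to actually send it.
req = build_request("scrape", {"url": "https://example.com"}, "fc-YOUR_API_KEY")
```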
Skills + CLI
Install the CLI and skills, authenticate, and run scrape, search, crawl,
interact, map, extract, and agent commands from the terminal.
Using Firecrawl as a Tool
Firecrawl gives agents five core tools for working with the web. Each tool maps to an API endpoint and a CLI command. Agents pick the right tool based on what they need:
Search — discover pages by query
Start here when you don’t have a URL yet. Search returns relevant web pages for a natural-language query, with optional full-page content included in the results.

When to use: Research tasks, finding documentation, competitive analysis, answering questions that require up-to-date web information.
Scrape — extract content from a URL
Use this when you already have a URL and need clean, LLM-ready content. Scrape converts any web page into markdown, HTML, or structured data — handling JavaScript rendering, anti-bot measures, and messy HTML automatically.

When to use: Reading documentation, extracting article content, pulling data from a known page, converting web pages to context for LLMs.
Crawl — bulk-extract an entire site
Crawl recursively follows links from a starting URL and scrapes every page it finds. Use it when you need content from an entire site or documentation set, not just a single page.

When to use: Ingesting full documentation sites, building knowledge bases, migrating content, training data collection.
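Because a crawl covers many pages, it usually runs as a job rather than a single blocking request. The sketch below assumes a start-then-poll pattern; the job-id and status response shapes are assumptions not confirmed by this page, and the HTTP helpers are injected so the control flow can be shown (and tested) without a network:

```python
def run_crawl(start_url, post, get, poll_limit=10):
    """Start a crawl job and poll until it completes.

    `post` and `get` are injected HTTP helpers so the control flow is
    testable offline; the job-id/status response shape is an assumption,
    not confirmed by this page.
    """
    job = post("/v2/crawl", {"url": start_url})   # assumed: returns {"id": ...}
    for _ in range(poll_limit):
        status = get(f"/v2/crawl/{job['id']}")    # assumed polling endpoint
        if status.get("status") == "completed":
            return status.get("data", [])
    raise TimeoutError("crawl did not complete within poll_limit")
```

With real transports, `post` and `get` would wrap authenticated HTTP calls; in tests they can be simple stubs.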
Map — discover all URLs on a domain
Map rapidly discovers every indexed URL on a domain without scraping the content. Use it when you need to understand a site’s structure or find specific pages before scraping them.

When to use: Site audits, finding specific pages on a large site, understanding site structure before a targeted crawl.
Interact — work with a scraped page
Interact lets agents continue from a scrape using prompts or code. Use it when a scraped page requires clicks, form fills, navigation, or follow-up extraction.

When to use: Continuing from a scrape, navigating dynamic pages, filling forms, and extracting data after page actions.
How agents chain tools together
Most agent workflows combine multiple tools. A typical pattern:

- Search to find relevant pages → get a list of URLs
- Scrape the most relevant URLs → get clean content
- Interact when the scraped page needs follow-up actions
- Agent when the task needs autonomous discovery or structured multi-page extraction
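The first two steps of that chain can be sketched as a single function. The `search` and `scrape` callables stand in for the corresponding Firecrawl tools, and the result shapes (a list of dicts with a `url` key) are assumptions for illustration:

```python
def research(query, search, scrape, top_k=3):
    """Chain the search and scrape tools: discover pages, then extract them.

    `search` and `scrape` are injected callables standing in for the
    corresponding Firecrawl tools; the result shapes are assumptions.
    """
    results = search(query)                      # assumed: list of {"url": ...}
    urls = [r["url"] for r in results[:top_k]]
    return {url: scrape(url) for url in urls}
```

Injecting the tools keeps the chaining logic independent of whether the agent reaches Firecrawl through the CLI, MCP, or the REST API.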
Firecrawl MCP Server
MCP is an open protocol that standardizes how applications provide context to LLMs. Among other benefits, it gives LLMs tools to act on your behalf. Our MCP server is open-source and covers our full API surface — search, scrape, interact, crawl, map, extract, and agent. Use the remote hosted URL:

MCP Server
View installation instructions for Cursor, Claude Desktop, Windsurf, VS Code,
and more.
Firecrawl Docs for Agents
You can give your agent current Firecrawl docs in a context-aware way. Agents can self-onboard by pulling these resources directly — no human wiring required.

Markdown docs
Every page has a markdown version. Append `.md` to any docs URL, or use the page action menu to copy the page as markdown.

Full llms.txt
Give your agent all of our docs in a single file. A shorter index is also available at https://docs.firecrawl.dev/llms.txt.

MCP docs server
For a structured approach using MCP tools, connect the Firecrawl MCP server in any MCP client (Cursor, Claude Code, Claude Desktop, Windsurf). See the MCP Server page for install commands.
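For reference, a typical MCP client configuration entry looks like the sketch below. The `mcpServers` key follows common client conventions (Cursor, Claude Desktop); the remote URL placeholder is left unfilled because this page doesn’t state it — see the MCP Server page for the actual install commands:

```json
{
  "mcpServers": {
    "firecrawl": {
      "url": "<FIRECRAWL_REMOTE_MCP_URL>"
    }
  }
}
```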
Quick Start Guides
Drop-in quickstarts for the stacks agents build on most often. Point your agent at any of these to scaffold a working Firecrawl integration end-to-end. Prefer to let Cursor drive? One-click install the Firecrawl MCP server and start prompting in Cursor:
Node.js
Server-side JavaScript and TypeScript with the Firecrawl Node SDK.
Next.js
Scrape, search, and crawl from Next.js route handlers and server actions.
Python
Use Firecrawl from scripts, notebooks, and backend services.
FastAPI
Build async Python APIs that search, scrape, and extract.
Cloudflare Workers
Run Firecrawl at the edge with Workers.
Vercel Functions
Call Firecrawl from Vercel serverless functions.
AWS Lambda
Invoke Firecrawl from Lambda handlers.
Supabase Edge Functions
Use Firecrawl inside Supabase’s Deno runtime.
Go
Idiomatic Go SDK for search, scrape, and crawl.
Rust
Typed Rust SDK for Firecrawl.
Laravel
Add Firecrawl to Laravel apps via the PHP SDK.
Rails
Drop Firecrawl into Ruby on Rails.
Agent Harnesses
Firecrawl works with the runtimes and frameworks agents actually live inside — coding agents, agent SDKs, and model aggregators. Most coding harnesses can auto-discover the Firecrawl skills via `npx -y firecrawl-cli@latest init --all --browser`; the rest call Firecrawl as a tool over MCP or the REST API.
Claude Code
Anthropic’s CLI — set up Firecrawl MCP in Claude Code.
Cursor
IDE agent — one-click install Firecrawl MCP in Cursor.
OpenCode
Wire Firecrawl MCP into OpenCode.
Codex CLI
Wire Firecrawl MCP into OpenAI Codex CLI.
OpenRouter
Pair any OpenRouter model with Firecrawl web tools.
Amp
Wire Firecrawl MCP into Sourcegraph Amp.
Windsurf
Agentic IDE — set up Firecrawl MCP in Windsurf.
Antigravity
Add Firecrawl MCP to Google’s agentic IDE.
Gemini CLI
Wire Firecrawl MCP into Google Gemini CLI.
Nous Research
Use Firecrawl as a tool with Hermes models.
AutoGen
Firecrawl tools inside Microsoft AutoGen multi-agent teams.

