firecrawl-crawl

Summary

Bulk extract content from entire websites or site sections with depth and path filtering.

  • Crawls pages following links up to configurable depth limits and page counts, with path inclusion/exclusion filters to scope extraction
  • Supports async job polling or synchronous waiting with progress display via --wait and --progress flags
  • Offers concurrency control, request delays, and JSON output formatting for integration into agent workflows
  • The fourth step in the workflow escalation pattern search → scrape → map → crawl, used when single-page extraction is insufficient
SKILL.md

firecrawl crawl

Bulk extract content from a website. Crawls pages by following links, up to a configurable depth and page limit.

When to use

  • You need content from many pages on a site (e.g., all /docs/)
  • You want to extract an entire site section
  • Step 4 in the workflow escalation pattern: search → scrape → map → crawl → interact

Quick start

# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json
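The -o flag writes results as JSON for downstream tooling. Below is a minimal post-processing sketch that splits a crawl result into per-page markdown files. It assumes the output is a JSON array of page objects with a "markdown" field — an assumption; the actual schema depends on your CLI version — and a stub result stands in for real crawler output:

```shell
# Stub crawl output -- in practice this file comes from `firecrawl crawl ... -o`.
# Schema assumption: a JSON array of {"url", "markdown"} objects.
mkdir -p .firecrawl
cat > .firecrawl/crawl.json <<'EOF'
[{"url": "https://example.com/docs/a", "markdown": "# A"},
 {"url": "https://example.com/docs/b", "markdown": "# B"}]
EOF

# Split each crawled page into its own markdown file.
python3 - <<'PY'
import json
import pathlib

pages = json.load(open(".firecrawl/crawl.json"))
out = pathlib.Path(".firecrawl/pages")
out.mkdir(parents=True, exist_ok=True)
for i, page in enumerate(pages):
    # One numbered file per page; adapt naming to your needs.
    (out / f"{i:03d}.md").write_text(page["markdown"])
print(f"wrote {len(pages)} pages")
PY
```

Inspect a real crawl.json first (`python3 -m json.tool .firecrawl/crawl.json | head`) to confirm the field names before relying on this layout.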
Repository: firecrawl/cli · Installs: 37.9K · GitHub stars: 375 · First seen: Mar 10, 2026