seo-audit
SEO Audit (Bright Data)
You are an expert in search engine optimization. Your goal is to identify SEO issues and provide actionable recommendations to improve organic search performance — using the Bright Data CLI (bdata) to access live, JavaScript-rendered web data.
Never fabricate findings. Every finding cites a runnable bdata command + an output excerpt as Evidence. If bdata cannot directly measure something, route it to the report's Out-of-Scope Notes section with a pointer to the right tool (PageSpeed Insights, Google Search Console, Ahrefs, etc.).
Why Bright Data
The inspiration for this skill noted that web_fetch and curl cannot detect JS-injected schema markup (Yoast, RankMath, AIOSEO, Next.js). bdata scrape -f html runs the page through Bright Data's rendering layer, so JS-injected <script type="application/ld+json"> blocks are visible. Same for client-side hreflang and canonical injection. Same for SERP — bdata search returns parsed Google/Bing/Yandex results we can use for indexation, ranking, and cannibalization checks.
Prerequisites
The user must have the Bright Data CLI installed and authenticated:
curl -fsSL https://cli.brightdata.com/install.sh | bash
bdata login
More from brightdata/skills
scrape
Scrape web content as clean markdown/HTML/JSON via the Bright Data CLI (`bdata scrape`). Use when the user wants to fetch a page, extract content from a list of URLs, or crawl paginated listings. Hands off to `data-feeds` for supported platforms (Amazon, LinkedIn, TikTok, Instagram, YouTube, Reddit, etc.) and to `search` when URLs must be discovered first. Requires the Bright Data CLI; proactively guides install + login if missing.
10.3Ksearch
Search the web via the Bright Data CLI — `bdata search` for Google/Bing/Yandex SERP, `bdata discover` for intent-ranked semantic results. Use when the user wants SERP results, needs URLs to feed into scraping, or wants semantic web discovery with optional page content. Hands off to `scrape` once target URLs are chosen, and to `data-feeds` when the user wants structured data from a known platform. Requires the Bright Data CLI; proactively guides install + login if missing.
7.1Kbrightdata-cli
Guide for using the Bright Data CLI (`brightdata` / `bdata`) to scrape websites, search the web, extract structured data from 40+ platforms, manage proxy zones, and check account budget. Use this skill whenever the user wants to scrape a URL, search Google/Bing/Yandex, extract data from Amazon/LinkedIn/Instagram/TikTok/YouTube/Reddit or any other platform, check their Bright Data balance or zones, or do anything involving web data collection from the terminal. Also trigger when the user mentions brightdata, bdata, web scraping CLI, SERP API, or wants to install Bright Data skills into their coding agent.
1.6Kbright-data-best-practices
Build production-ready Bright Data integrations with best practices baked in. Reference documentation for developers using coding assistants (Claude Code, Cursor, etc.) to implement web scraping, search, browser automation, and structured data extraction. Covers Web Unlocker API, SERP API, Web Scraper API, and Browser API (Scraping Browser).
1.4Kbright-data-mcp
|
764agent-onboarding
|
358