youtube-research
LLM-driven research over a YouTube channel. The skill scripts handle deterministic I/O (listing, fetching, transcribing); the synthesis is done by you (Claude) reading the script outputs.
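The listing step can be sketched with yt-dlp (the skill's transcription fallback is handled later). This is a minimal sketch, not the skill's exact invocation; the channel URL and the 50-video cap are illustrative:

```shell
# List the newest 50 videos (id + title) for a channel without downloading anything.
# --flat-playlist fetches only playlist metadata, which keeps the call fast.
yt-dlp --flat-playlist --playlist-end 50 \
  --print "%(id)s %(title)s" \
  "https://www.youtube.com/@veritasium/videos"
```

The printed id/title pairs are what the relevance filter reads before deciding which videos to transcribe.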
Inputs (collect before starting)
Always ask the user for:
- Channel(s) — one or more handles, URLs, or @names (e.g. @veritasium, @3blue1brown). Multiple channels are encouraged when the user wants comparison or broader coverage.
- Topic — research focus (e.g. "quantum computing"). Used as the topic-file slug.
- Specific video (optional) — single URL/ID to scope to one video instead of the channel(s).
- Output format — markdown (default) or PDF, asked at runtime.
- How many videos per channel to consider — defaults to the newest 50 per channel; ask if the user wants more or fewer. Be explicit that the limit is per channel so the user can budget their time.
- Specific question to answer (optional) — if the user has a concrete question they want the research to answer (e.g. "Which approach is most cost-effective for small teams?"), capture it. When present, this becomes the spine of the synthesis: every section should serve the answer, and the artifact gets a dedicated Answer section up top. If absent, fall back to the broader topic survey.
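The exact slugging rule for the topic file isn't specified above; a minimal sketch of one reasonable scheme (lowercase, hyphen-separated, ASCII-only), with the function name and example topic being illustrative:

```python
import re

def topic_slug(topic: str) -> str:
    # Lowercase, collapse every run of non-alphanumeric characters
    # into a single hyphen, and trim leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")

print(topic_slug("Quantum Computing!"))  # quantum-computing
```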
Don't guess any of these. If the user names multiple channels but not the topic (or vice versa), ask before running anything — multi-channel runs cost N× the metadata fetch time.
Use the AskUserQuestion tool to collect these inputs rather than free-form prose. One call, batched questions (output format, video limit, optional research question), so the user answers everything in one shot. Channels and topic usually arrive in the user's initial message — only ask for what's actually missing. For the optional research question, phrase it so "no specific question, just survey the topic" is one of the choices, so the user isn't forced to invent one.
Workspace + artifact layout
Intermediate artifacts (channel indexes, transcripts) live in a hidden workspace at ./.research-yt/. The final research artifact is written to the current working directory. Ask the user before deleting the workspace at the end of the run.