meeting-minutes

Extract substantive content from a meeting transcript, filtering out noise and producing an LLM-context-efficient representation.

Goal

Produce the most LLM-context-efficient representation of the meeting. The output will be used as context in future LLM conversations, so every token must earn its place. Aggressively reduce token count while preserving all substantive content (decisions, reasoning, disagreements, action items). Prefer concise direct quotes over full verbatim exchanges when the meaning is preserved. Remove conversational scaffolding ("So what I'm trying to say is...", "That's a great point, and to add to that...") and keep only the payload.
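The skill delegates this filtering to an LLM subagent, but the intent can be illustrated with a crude, purely mechanical approximation. The phrases below come from the examples above; the function name and regex are illustrative only, not part of the skill:

```python
import re

# Illustrative only: the real filtering is done by an LLM subagent with judgment.
# A regex pass like this merely sketches the idea of stripping conversational
# scaffolding and keeping the payload.
FILLER = re.compile(
    r"^(So what I'm trying to say is|That's a great point, and to add to that),?\s*",
    re.IGNORECASE,
)

def drop_scaffolding(line: str) -> str:
    """Remove a leading filler phrase from one transcript line, if present."""
    return FILLER.sub("", line).strip()
```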

Arguments

  • <path> (optional) — A .vtt file or a folder containing .vtt files. If omitted, defaults to ~/Downloads/.
  • --latest (optional) — Automatically pick the most recent .vtt file instead of prompting the user to choose one.

Context management

Meeting transcripts are large (a 1-hour meeting is ~70K tokens). To avoid accumulating multiple copies of the transcript in the conversation context, this skill uses temp files and subagents as a pipeline:

  1. The main agent handles Steps 0, 1, and 5 (resolve source, strip VTT via Python script, clean up temp files). It never reads the transcript content.
  2. A subagent handles Step 2 (clean + filter in original language). Reads meetings/tmp/meeting-stripped.txt, writes meetings/tmp/meeting-cleaned.txt. Context discarded.
  3. A subagent handles Step 3 (translate to English). Reads meetings/tmp/meeting-cleaned.txt, writes meetings/tmp/meeting-translated.txt. Context discarded. Skipped if already English.
  4. A subagent handles Step 4 (extract + structure + save). Reads meetings/tmp/meeting-translated.txt, writes final output directly to meetings/. Context discarded.
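Step 1's stripping pass (the only deterministic stage) might look like the sketch below. It is an approximation under common WebVTT conventions — header line, numeric cue identifiers, `HH:MM:SS.mmm --> HH:MM:SS.mmm` timestamps, and `<v Speaker>` voice tags; the skill's actual Python script may differ:

```python
import re

# Matches a WebVTT cue timing line, e.g. "00:01:02.000 --> 00:01:05.500"
TIMESTAMP = re.compile(
    r"^\d{2}:\d{2}:\d{2}\.\d{3}\s+-->\s+\d{2}:\d{2}:\d{2}\.\d{3}"
)

def strip_vtt(vtt_text: str) -> str:
    """Reduce a WEBVTT transcript to bare 'Speaker: text' lines.

    Drops the WEBVTT header, numeric cue identifiers, timestamp lines, and
    blank lines, and unwraps the <v Speaker>text</v> voice-tag format.
    """
    lines = []
    for raw in vtt_text.splitlines():
        line = raw.strip()
        if not line or line == "WEBVTT" or line.isdigit() or TIMESTAMP.match(line):
            continue
        m = re.match(r"<v ([^>]+)>(.*?)</v>$", line)
        if m:
            line = f"{m.group(1)}: {m.group(2)}"
        lines.append(line)
    return "\n".join(lines)
```

Dropping timestamps and cue numbers alone removes a large share of the ~70K tokens before any subagent sees the text.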