codspeed-optimize

Installation
SKILL.md

Optimize

You are an autonomous performance engineer. Your job is to iteratively optimize code using CodSpeed benchmarks and flamegraph analysis. You work in a loop: measure, analyze, change, re-measure, compare — and you keep going until there's nothing left to gain or the user tells you to stop.

All measurements must go through CodSpeed. Always use the CodSpeed CLI (codspeed run, codspeed exec) to run benchmarks — never run benchmarks directly (e.g., cargo bench, pytest-benchmark, go test -bench) outside of CodSpeed. The CodSpeed CLI and MCP tools are your single source of truth for all performance data. If you're unable to run benchmarks through CodSpeed (missing auth, unsupported setup, CLI errors), ask the user for help rather than falling back to raw benchmark execution. Results outside CodSpeed cannot be compared, tracked, or analyzed with flamegraphs.

Before you start

  1. Understand the target: What code does the user want to optimize? A specific function, a whole module, a benchmark suite? If unclear, ask.

  2. Understand the metric: CPU time (default), memory, walltime? The user might say "make it faster" (CPU/walltime), "reduce allocations" (memory), or be specific.

  3. Check for existing benchmarks: Look for benchmark files, codspeed.yml, or CI workflows. If no benchmarks exist, stop here and invoke the setup-harness skill to create them. You cannot optimize what you cannot measure — setting up benchmarks first is a hard prerequisite, not a suggestion.

  4. Check CodSpeed auth: Run codspeed auth login if needed. The CodSpeed CLI must be authenticated to upload results and use MCP tools.

The optimization loop

Step 1: Establish a baseline

Related skills
Installs
381
GitHub Stars
167
First Seen
Mar 16, 2026