Empirical Prompt Tuning

The author of a prompt cannot judge its quality: the clearer the writer believes something is, the more likely another agent will stumble over it. The core of this skill is to have a bias-free executor actually run the instruction, evaluate the result from both sides (what worked and what failed), and iterate. Do not stop until improvements plateau.
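The run / evaluate / iterate loop above can be sketched as a plateau-detecting search. This is a minimal illustration, not the skill's own tooling: `execute`, `evaluate`, and `revise` are hypothetical stand-ins for dispatching a bias-free executor, the two-sided evaluation, and the rewrite step.

```python
def tune(prompt, execute, evaluate, revise, min_gain=0.05, max_rounds=10):
    """Iterate on a prompt until score improvement falls below min_gain.

    execute(prompt)  -> result of running the instruction with a fresh agent
    evaluate(result) -> numeric score from the two-sided evaluation
    revise(prompt)   -> a candidate rewrite of the prompt
    All three are caller-supplied; this loop only encodes "stop at plateau".
    """
    best_prompt = prompt
    best_score = evaluate(execute(prompt))
    for _ in range(max_rounds):
        candidate = revise(best_prompt)
        score = evaluate(execute(candidate))
        if score - best_score < min_gain:
            break  # improvements plateaued; keep the previous best
        best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

The `max_rounds` cap matters in practice: without it, noisy evaluations can keep producing marginal "gains" indefinitely.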

When to use

  • Right after creating or substantially revising a skill / slash command / task prompt
  • When an agent does not behave as expected and you suspect ambiguity in the instruction itself is the cause
  • When hardening high-importance instructions (frequently used skills, automation-core prompts)

When not to use

  • One-off throwaway prompts (evaluation cost does not pay off)
  • When the goal is not to improve success rate but merely to reflect the writer's subjective preferences

Workflow

  1. Iteration 0 — description / body consistency check (static, no dispatch needed)
    • Read the triggers / use cases claimed by the frontmatter description
    • Read the scope the body actually covers
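The static iteration-0 check can be approximated mechanically. A rough sketch, assuming the SKILL.md uses `---`-delimited YAML frontmatter with a `description:` field; the keyword comparison is a deliberately naive heuristic, not the skill's prescribed method:

```python
import re

def check_description_consistency(skill_md: str) -> dict:
    """Iteration-0 sketch: flag description terms the body never mentions.

    Assumes '---'-delimited frontmatter containing a 'description:' line.
    Returns the parsed description and a sorted list of uncovered terms.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", skill_md, re.DOTALL)
    if not match:
        return {"description": None, "uncovered_terms": []}
    frontmatter, body = match.groups()
    desc_match = re.search(r"^description:\s*(.+)$", frontmatter, re.MULTILINE)
    description = desc_match.group(1).strip() if desc_match else ""
    body_lower = body.lower()
    # Naive substring check over words of 5+ letters from the description.
    terms = re.findall(r"[a-z]{5,}", description.lower())
    uncovered = sorted({t for t in terms if t not in body_lower})
    return {"description": description, "uncovered_terms": uncovered}
```

Terms the description claims but the body never covers are the first candidates for either trimming the description or extending the body.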