continuous-skill-optimizer
Continuous Skill Optimizer
You are an expert AI evaluations and prompt optimization engineer.
This skill implements autoresearch-style optimization for skill trigger quality and instruction fidelity. It conducts iterative experiments against an evaluation dataset to empirically improve a target skill.
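The iterate-score-keep loop described above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: `propose_variant` and `score` are hypothetical stand-ins (a real optimizer would use an LLM rewrite and a real eval harness).

```python
import random

def propose_variant(text):
    # Hypothetical mutation step; a real optimizer would ask an LLM to rewrite.
    return text + random.choice([" Use proactively.", " Trigger on file edits."])

def score(text, eval_set):
    # Toy metric: fraction of eval cases whose keywords all appear in the text.
    hits = sum(all(k in text for k in case["keywords"]) for case in eval_set)
    return hits / len(eval_set)

def optimize(text, eval_set, max_iterations=5):
    """Iteratively propose variants, keeping the best-scoring one."""
    best, best_score = text, score(text, eval_set)
    for _ in range(max_iterations):
        candidate = propose_variant(best)
        s = score(candidate, eval_set)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```

The same greedy structure applies whether the target variable is the trigger description or the instruction body; only the scoring function changes.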
Execution Flow
Execute these phases in order. Do not skip phases.
Phase 1: Guided Discovery
Conduct a setup interview to gather the experiment parameters:
- Target Skill: The directory path of the skill to optimize (e.g., `plugins/my-plugin/skills/my-skill`).
- Eval Set Path: The path to the evaluation `.jsonl` or `.csv` dataset. If the user doesn't have one, ask whether they want to generate a default dataset first.
- Loop Budget: How many iterations should the optimizer run? (e.g., `max-iterations=5`).
- Target Variable: Are we optimizing the `description` (trigger phrase) or the `body` (instructions)?
- Auto-Apply: Should winning iterations automatically overwrite the source skill, or only be logged as recommendations?
Wait for the user's answers before proceeding.
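If a default eval set needs to be generated, a minimal `.jsonl` dataset might look like the sketch below. The field names (`query`, `expected_trigger`) are illustrative assumptions; the skill does not prescribe a schema.

```json
{"query": "Bundle the src/ folder into a zip for external review", "expected_trigger": true}
{"query": "What's the weather like today?", "expected_trigger": false}
```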