# iterate

## MANDATORY PREPARATION
Invoke /agent-workflow — it contains workflow principles, anti-patterns, and the Context Gathering Protocol. Follow the protocol before proceeding — if no workflow context exists yet, you MUST run /teach-maestro first.
Consult the feedback-loops reference in the agent-workflow skill for evaluation patterns and self-correction strategies.
Set up feedback loops that make workflows self-correcting and continuously improving. Iteration transforms one-shot gambles into convergent, reliable systems.
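The convergence idea can be sketched as a minimal generate-evaluate-revise loop. This is an illustrative skeleton, not part of the Maestro skill itself: `generate` and `evaluate` are placeholders for whatever produces and scores output in your workflow.

```python
def iterate_until_good(generate, evaluate, max_rounds=3, threshold=0.8):
    """Run generate -> evaluate -> revise until the score clears the threshold.

    `generate` takes the previous round's feedback (or None on the first round)
    and returns a new attempt; `evaluate` returns (score, feedback).
    Both callables are supplied by your workflow -- they are assumptions here.
    """
    output, feedback, score = None, None, 0.0
    for _ in range(max_rounds):
        output = generate(feedback)          # revise using last round's feedback
        score, feedback = evaluate(output)   # score the attempt, collect critique
        if score >= threshold:               # converged: stop iterating
            break
    return output, score
```

Capping rounds with `max_rounds` keeps a loop that never converges from running forever, which is the difference between a self-correcting system and a stuck one.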
## Feedback Loop Design

### Step 1: Define Quality Criteria
What does "good output" look like? Score dimensions:
| Dimension | Weight | Threshold | Measurement |
|---|---|---|---|
| Accuracy | 0.4 | ≥ 0.8 | Factual correctness check |
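The scoring scheme above can be sketched as a small weighted evaluator. Only the Accuracy row comes from the table; the `clarity` and `coverage` dimensions below are illustrative assumptions, and the per-dimension scores would come from your own measurement checks.

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    """One scoring dimension: its weight in the aggregate and its pass threshold."""
    name: str
    weight: float      # contribution to the weighted aggregate score
    threshold: float   # minimum acceptable score on this dimension

# Accuracy mirrors the table; the other two rows are hypothetical examples.
CRITERIA = [
    Dimension("accuracy", weight=0.4, threshold=0.8),
    Dimension("clarity", weight=0.3, threshold=0.7),
    Dimension("coverage", weight=0.3, threshold=0.7),
]

def evaluate(scores: dict[str, float], criteria: list[Dimension]):
    """Return (weighted aggregate, names of dimensions below their threshold)."""
    aggregate = sum(d.weight * scores[d.name] for d in criteria)
    failures = [d.name for d in criteria if scores[d.name] < d.threshold]
    return aggregate, failures
```

Reporting per-dimension failures alongside the aggregate matters: an output can pass on the weighted total while still failing a single dimension that the loop should send back for revision.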