llm-council


LLM Council Skill

Quick start

  • Always check for an existing agents config file first ($XDG_CONFIG_HOME/llm-council/agents.json or ~/.config/llm-council/agents.json). If none exists, tell the user to run ./setup.sh to configure or update agents.
  • The orchestrator must always ask thorough intake questions first, then generate prompts so planners do not ask questions.
    • Even if the initial prompt is strong, ask at least a few clarifying questions about ambiguities, constraints, and success criteria.
  • Tell the user that answering intake questions is optional, but more detail improves the quality of the final plan.
  • Use python3 scripts/llm_council.py run --spec /path/to/spec.json to run the council.
  • Plans are produced as Markdown files for auditability.
  • Run artifacts are saved under ./llm-council/runs/<timestamp> relative to the current working directory.
  • Configure defaults interactively with python3 scripts/llm_council.py configure (writes $XDG_CONFIG_HOME/llm-council/agents.json or ~/.config/llm-council/agents.json).
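The config-file check above can be sketched in Python. This is a minimal illustration, not the skill's actual implementation; the `agents_config_path` helper is hypothetical, but the path resolution follows the locations named above (`$XDG_CONFIG_HOME/llm-council/agents.json`, falling back to `~/.config/llm-council/agents.json`):

```python
import json
import os
from pathlib import Path

def agents_config_path() -> Path:
    """Resolve the agents config path, honoring XDG_CONFIG_HOME
    and falling back to ~/.config when it is unset."""
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / "llm-council" / "agents.json"

path = agents_config_path()
if path.exists():
    agents = json.loads(path.read_text())  # existing config: reuse it
else:
    # No config yet: direct the user to the setup script
    print("No agents config found; run ./setup.sh to configure agents.")
```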

Workflow

  1. Load the task spec and explore the surrounding codebase to get a strong sense of the product.
  2. Always ask thorough intake questions to build a clear task brief. Clarify any ambiguities, constraints, and success criteria. Remind the user that answers are optional but improve plan quality.
  3. Build planner prompts (Markdown template) and launch the configured planner agents in parallel background shells.
  4. Collect outputs, validate Markdown structure, and retry up to 2 times on failure. If any agent still fails, stop and alert the user to fix the issue.
  5. Anonymize plan contents and randomize order before judging.
  6. Run the judge with the rubric and Markdown template, then save judge.md and final-plan.md.
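Step 5 can be sketched as a small helper. This is an illustrative assumption about how anonymization might work, not the skill's actual code; the `anonymize_plans` function and its label scheme are hypothetical:

```python
import random

def anonymize_plans(plans: dict[str, str]) -> list[tuple[str, str]]:
    """Strip planner identities and shuffle plan order so the judge
    cannot infer which agent produced which plan.

    `plans` maps agent name -> plan Markdown; returns (label, plan) pairs
    with neutral labels like "Plan A", "Plan B", ...
    """
    contents = list(plans.values())   # drop agent names entirely
    random.shuffle(contents)          # randomize presentation order
    return [(f"Plan {chr(ord('A') + i)}", text)
            for i, text in enumerate(contents)]
```

The labeled, shuffled pairs would then be interpolated into the judge prompt alongside the rubric.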