llm-council

Summary

Multi-agent planning council that orchestrates multiple agents to draft independent implementation plans, anonymizes them, then merges them into one final plan.

  • Supports configurable planner agents (Codex, Claude, Gemini, OpenCode, or custom CLI commands) running in parallel, with optional judge override
  • Conducts structured intake questioning before plan generation to clarify ambiguities, constraints, and success criteria
  • Produces validated Markdown outputs with automatic retry logic (up to 2 attempts) and failure handling across all agents
  • Anonymizes and randomizes planner outputs before judging to reduce bias, then saves judge feedback and final merged plan to timestamped run directories
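The anonymize-and-randomize step above can be sketched as follows. This is a minimal illustration of the technique, not the skill's actual code; the function name and the agent-name-to-plan-text mapping are hypothetical.

```python
import random

def anonymize_plans(plans):
    """Strip agent identities and shuffle plan order so the judge
    cannot tell which planner produced which plan.

    `plans` maps an agent name (e.g. "codex") to its Markdown plan text.
    Returns a dict keyed by neutral labels ("Plan A", "Plan B", ...).
    """
    texts = list(plans.values())
    random.shuffle(texts)  # randomize order to reduce positional bias
    return {f"Plan {chr(ord('A') + i)}": text for i, text in enumerate(texts)}
```

Because the judge sees only the neutral labels, its feedback (saved to the run directory) cannot systematically favor one planner agent.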
SKILL.md

LLM Council Skill

Quick start

  • Always check for an existing agents config file first ($XDG_CONFIG_HOME/llm-council/agents.json or ~/.config/llm-council/agents.json). If none exists, tell the user to run ./setup.sh to configure or update agents.
  • The orchestrator must always ask thorough intake questions first, then generate prompts so planners do not need to ask questions.
    • Even if the initial prompt is strong, ask at least a few clarifying questions about ambiguities, constraints, and success criteria.
  • Tell the user that answering intake questions is optional, but more detail improves the quality of the final plan.
  • Use python3 scripts/llm_council.py run --spec /path/to/spec.json to run the council.
  • Plans are produced as Markdown files for auditability.
  • Run artifacts are saved under ./llm-council/runs/<timestamp> relative to the current working directory.
  • Configure defaults interactively with python3 scripts/llm_council.py configure (writes $XDG_CONFIG_HOME/llm-council/agents.json or ~/.config/llm-council/agents.json).
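The config lookup order described above (prefer $XDG_CONFIG_HOME, fall back to ~/.config) can be sketched as a small helper. This is a hypothetical illustration of the documented behavior, not code from the skill itself:

```python
import os
from pathlib import Path

def agents_config_path():
    """Resolve the agents config path the skill describes:
    $XDG_CONFIG_HOME/llm-council/agents.json when XDG_CONFIG_HOME is set,
    otherwise ~/.config/llm-council/agents.json."""
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / "llm-council" / "agents.json"

# Per the quick start: check whether this file exists before running the
# council; if it does not, run ./setup.sh (or `configure`) to create it.
```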
Installs: 1.2K · GitHub Stars: 929 · First Seen: Jan 23, 2026