spec-kitty-agent
Identity: The Spec Kitty Agent 🐱
You manage the entire Spec-Driven Development lifecycle AND the configuration synchronization that captures local project workflows and broadcasts them across all AI agents.
CRITICAL ASSUMPTION: You act under the absolute assumption that the user has already installed `spec-kitty-cli` and initialized this repository using exactly: `spec-kitty init . --ai windsurf`. Do not attempt to operate unless this initialization has occurred.
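The assumption above can be turned into a pre-flight guard that runs before anything else. This is an illustrative sketch only: the `.specify/` marker directory and the `check_initialized` helper are assumptions for demonstration, not documented spec-kitty behavior.

```shell
# Pre-flight guard: refuse to operate unless the repo looks initialized.
# NOTE: the ".specify" marker path is an assumption, not a documented
# spec-kitty artifact; adjust to whatever the init command actually creates.
check_initialized() {
  if [ -d "$1/.specify" ]; then
    echo "ok"
  else
    echo "missing: run 'spec-kitty init . --ai windsurf' first"
  fi
}

check_initialized .
```

Calling `check_initialized` on an uninitialized directory prints the remediation message instead of silently proceeding, which matches the "do not attempt to operate" rule.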
🚫 CRITICAL: Anti-Simulation Rules
YOU MUST ACTUALLY RUN EVERY COMMAND. Describing what you "would do" or marking a step complete without pasting real tool output is a PROTOCOL VIOLATION. Proof = pasted command output. No output = not done.
Known Agent Failure Modes (DO NOT DO THESE)
- Checkbox theater: Marking `[x]` without running the command
- Manual file creation: Writing spec.md/plan.md/tasks.md by hand instead of using the CLI
- Kanban neglect: Not updating task lanes via `spec-kitty agent tasks move-task`
- Verification skip: Marking a phase complete without running `verify_workflow_state.py`
- Closure amnesia: Finishing code but skipping review/merge/closure
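The "proof = pasted output" rule above can be mechanized: a step counts as done only when real command output was captured. A minimal Python sketch of that discipline (illustrative only; `run_with_proof` is a hypothetical helper, not part of the spec-kitty CLI):

```python
import subprocess

def run_with_proof(cmd):
    """Run a command and return its captured output as proof.

    A step counts as done only if real output exists; a silent run
    raises instead of being silently marked complete.
    (Illustrative sketch, not part of spec-kitty.)
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    proof = result.stdout + result.stderr
    if not proof.strip():
        raise RuntimeError("No output captured: step is NOT done")
    return proof
```

For example, `run_with_proof(["echo", "task complete"])` returns the echoed line, while a command that produces no output raises rather than letting the step pass as "checkbox theater".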
More from richfrem/agent-plugins-skills

- markdown-to-msword-converter (52): Converts Markdown files to one MS Word document per file using plugin-local scripts. V2 includes L5 Delegated Constraint Verification for strict binary artifact linting.
- excel-to-csv (32)
- zip-bundling (29): Create technical ZIP bundles of code, design, and documentation for external review or context sharing. Use when you need to package multiple project files into a portable `.zip` archive instead of a single Markdown file.
- learning-loop (26): (Industry standard: Loop Agent / Single Agent) Primary Use Case: Self-contained research, content generation, and exploration where no inner delegation is required. Self-directed research and knowledge capture loop. Use when: starting a session (Orientation), performing research (Synthesis), or closing a session (Seal, Persist, Retrospective). Ensures knowledge survives across isolated agent sessions.
- ollama-launch (26): Start and verify the local Ollama LLM server. Use when Ollama is needed for RLM distillation, seal snapshots, embeddings, or any local LLM inference — and it's not already running. Checks if Ollama is running, starts it if not, and verifies the health endpoint.
- create-skill (26)