agentation
agentation is the rendered-UI feedback bridge in this repo.
Use it when a human needs to click the actual UI, attach feedback to the exact element or region they mean, and pass a structured packet to the coding agent. Its main job is annotation routing: choose the right annotation mode, capture precise evidence, then hand the fix loop to the right adjacent skill or agent runtime.
When to use this skill
Use agentation when the task needs one or more of these:
- a human reviewer pointing at a real UI element instead of describing it vaguely
- structured feedback packets with selectors, element paths, bounding boxes, or copied markdown
- a local copy-paste review loop between a browser and a coding agent
- an MCP-backed sync/watch loop where new annotations flow into the agent context automatically
- a self-driving critique/fix loop that still starts from rendered UI evidence
- platform setup for passing pending UI annotations into Claude Code, Codex, Gemini CLI, or OpenCode
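The packet format agentation actually emits is not specified here; as a rough sketch of the idea (field names like `selector`, `element_path`, and `rect`, and the `to_markdown` helper, are illustrative assumptions, not agentation's real schema), a structured feedback packet and its copy-paste markdown rendering might look like:

```python
# Hypothetical annotation packet -- field names are illustrative,
# not agentation's actual schema.
annotation = {
    "selector": "button.checkout-submit",        # CSS selector the reviewer clicked
    "element_path": "main > form > button",      # DOM path for disambiguation
    "rect": {"x": 412, "y": 880, "w": 160, "h": 40},  # bounding box in CSS pixels
    "note": "Button overflows its container on narrow viewports",
}

def to_markdown(a: dict) -> str:
    """Render a packet as the markdown block a reviewer might paste to the agent."""
    r = a["rect"]
    return (
        f"**Annotation** `{a['selector']}`\n"
        f"- path: `{a['element_path']}`\n"
        f"- box: ({r['x']}, {r['y']}) {r['w']}x{r['h']}\n"
        f"- note: {a['note']}"
    )

print(to_markdown(annotation))
```

In the copy-paste loop, a block like this is what moves from the browser to the agent; in the MCP-backed sync/watch loop, the raw packet would flow into the agent context without the manual paste step.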
Do not use agentation by default for: