Plan Self-Review
Use this skill immediately after creating or updating a plan. Catching logical gaps, ambiguous steps, or missing dependencies early prevents cascading failures during execution. Self-reviewing your plan helps you approach the problem methodically and reduces the likelihood of needing major revisions later.
Review Steps
- Score Plan (100 pt): Evaluate the plan against the following criteria:
  - Clarity (25): Are the steps clear and easy to follow? Do they specify which files and tools will be used?
  - Comprehensiveness (25): Does the plan cover all necessary aspects of the task? Are there any unaddressed edge cases?
  - Feasibility (25): Are the steps achievable with the available tools and context? Do you have enough information to execute them?
  - Consistency (25): Are there any logical contradictions or missing elements? Do the steps follow a logical sequence?
- List Deficiencies: Create a prioritized task list (`- [ ]` items) of any gaps, omissions, or ambiguities found in the plan.
- Improve: Edit the plan to resolve the identified deficiencies, then update the plan using the appropriate tool.
- Final Check: Perform a final check to confirm no logical contradictions or missing elements remain.
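The 100-point rubric above can be sketched in code. This is a minimal illustration under stated assumptions, not part of the skill itself: the function name `score_plan` and the dict-based interface are invented here, and only the four criterion names and their 25-point weights come from the skill.

```python
# Hypothetical sketch of the rubric above; names other than the
# four criteria are assumptions, not part of the skill.
CRITERIA = {
    "Clarity": 25,
    "Comprehensiveness": 25,
    "Feasibility": 25,
    "Consistency": 25,
}

def score_plan(scores: dict[str, int]) -> int:
    """Validate per-criterion scores against the rubric and return the total."""
    for name, max_points in CRITERIA.items():
        if name not in scores:
            raise ValueError(f"missing criterion: {name}")
        if not 0 <= scores[name] <= max_points:
            raise ValueError(f"{name} must be between 0 and {max_points}")
    return sum(scores.values())
```

A plan scoring 20, 25, 18, and 22 on the four criteria would total 85/100, which the validation guards against exceeding any single criterion's 25-point cap.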
Output Format
Present your review using a consistent structure: the total score with its per-criterion breakdown, the prioritized deficiency checklist, and a note on the edits made to resolve each deficiency.
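The source does not show a concrete template; the sketch below is one plausible layout assembled from the review steps (the headings, example scores, and deficiency items are all illustrative assumptions):

```markdown
## Plan Review

**Score: 85/100**

| Criterion         | Score |
| ----------------- | ----- |
| Clarity           | 20/25 |
| Comprehensiveness | 25/25 |
| Feasibility       | 18/25 |
| Consistency       | 22/25 |

### Deficiencies

- [ ] Step 3 does not name the file to be edited (Clarity)
- [ ] No rollback step if the migration fails (Comprehensiveness)

### Improvements Made

- Added file paths to step 3; inserted a rollback step after the migration.

### Final Check

No remaining contradictions or missing elements.
```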