improving-skills
Improving Skills
Improve existing agent skills by gathering user feedback and applying technical analysis.
Workflow
Step 1: Identify Target Skill
Ask the user for the skill path if not provided (e.g., .claude/skills/skill-name/).
Read SKILL.md and understand the current structure.
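A minimal sketch of how this step could be automated, assuming the skill follows the usual SKILL.md layout with a YAML-style frontmatter block; the helper name load_skill and the naive key/value parsing are illustrative, not something the skill prescribes:

```python
from pathlib import Path

def load_skill(skill_dir: str) -> dict:
    """Read SKILL.md from a skill directory and split frontmatter from body."""
    text = Path(skill_dir, "SKILL.md").read_text(encoding="utf-8")
    meta: dict[str, str] = {}
    body = text
    if text.startswith("---"):
        parts = text.split("---", 2)  # ["", frontmatter, body] when well-formed
        if len(parts) == 3:
            _, frontmatter, body = parts
            for line in frontmatter.strip().splitlines():
                if ":" in line:
                    key, value = line.split(":", 1)
                    meta[key.strip()] = value.strip()
    return {"meta": meta, "body": body.strip()}

# Example: inspect the current name and description before proposing changes.
skill = load_skill(".claude/skills/skill-name/")
print(skill["meta"].get("name"), "-", skill["meta"].get("description"))
```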
Step 2: Gather Feedback (Required)
Ask the essential question:
"What problems or improvements do you want for this skill?"
Based on the response, ask follow-up questions as needed.
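For illustration, a hedged sketch of how the gathered feedback might be recorded before analysis; the FeedbackItem and FeedbackLog structures below are hypothetical and not part of the skill itself:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackItem:
    """A single problem or improvement request reported by the user."""
    summary: str           # what the user wants changed
    example: str = ""      # concrete situation where the skill fell short, if any
    severity: str = "minor"  # e.g. "blocker", "major", "minor"

@dataclass
class FeedbackLog:
    """All feedback collected for one skill, keyed by its path."""
    skill_path: str
    items: list[FeedbackItem] = field(default_factory=list)

log = FeedbackLog(".claude/skills/skill-name/")
log.items.append(FeedbackItem(
    summary="Description is too vague to trigger the skill reliably",
    example="Skill was not invoked when the user asked to refine an existing skill",
    severity="major",
))
```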