prompt-engineering-expert
Prompt Engineering Expert
A skill to help users craft, refine, and optimize prompts for LLMs using proven prompt engineering techniques.
Prompt engineering is not about finding "magic words"; it's about clear communication, structure, and giving the LLM the right context and constraints to succeed. This skill guides you in helping the user build reliable, high-performing prompts.
Core Philosophy
When helping a user with a prompt, your goal is to understand why they need it and what the LLM needs to know to accomplish the task successfully.
- Clarity over cleverness: Ensure the instructions are unambiguous.
- Structure matters: Use XML tags or Markdown headers to separate instructions, context, and input data. LLMs parse structured text much better than walls of text.
- Show, don't just tell: Examples (few-shot prompting) are often the most powerful way to steer behavior.
- Give the LLM room to think: For complex tasks, encourage "Chain of Thought" by asking the model to think step-by-step before producing the final answer.
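The principles above can be combined in a single prompt template. The sketch below is illustrative, not a definitive format: the tag names, the sentiment-classification task, and the example content are all assumptions chosen for demonstration.

```python
# A minimal sketch applying the principles above: XML-style tags separate
# instructions, context, a few-shot example, and the input; an explicit
# instruction invites step-by-step reasoning before the final answer.
# Tag names and example content are illustrative, not an official format.

def build_prompt(task: str, context: str, user_input: str) -> str:
    """Assemble a structured prompt from its parts."""
    return f"""<instructions>
{task}
Think step by step inside <thinking> tags before giving your final answer.
</instructions>

<context>
{context}
</context>

<example>
Input: "The package arrived broken."
Output: negative
</example>

<input>
{user_input}
</input>"""

prompt = build_prompt(
    task="Classify the sentiment of the input as positive, negative, or neutral.",
    context="Reviews come from an e-commerce site.",
    user_input="Shipping was fast and the quality is great!",
)
```

Keeping the sections in separate tags makes it easy to swap in new context or examples without touching the instructions.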
Workflow
- Analyze the Request: Understand what the user's prompt needs to achieve. What are the inputs? What is the expected output? What are the edge cases?
- Apply Best Practices: Structure the prompt using the principles in references/best-practices.md.
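One way to make the analysis step concrete is a small pre-flight check that flags unanswered questions before any prompt is drafted. This is a sketch; the class and field names are hypothetical, not part of the skill's interface.

```python
# A sketch of the "Analyze the Request" step: capture the goal, inputs,
# expected output, and edge cases, and report which questions remain open.
# The PromptSpec name and its fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PromptSpec:
    """Captures the analysis of a prompt request before drafting."""
    goal: str
    inputs: list[str] = field(default_factory=list)
    expected_output: str = ""
    edge_cases: list[str] = field(default_factory=list)

    def missing(self) -> list[str]:
        """Return the analysis questions that still need answers."""
        gaps = []
        if not self.inputs:
            gaps.append("What are the inputs?")
        if not self.expected_output:
            gaps.append("What is the expected output?")
        if not self.edge_cases:
            gaps.append("What are the edge cases?")
        return gaps

spec = PromptSpec(goal="Summarize support tickets")
print(spec.missing())  # all three questions are still open
```

Only once `missing()` comes back empty does it make sense to move on to structuring the prompt itself.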